YOLO model, Convert ETLT into an Engine

I’ve been training my YOLO model, and now that training is done I’m trying to create an engine file. The conversion keeps crashing with the same error:

[ERROR] IPluginV2DynamicExt requires network without implicit batch dimension

I’ve tried running the conversion with tlt-converter and also through the DeepStream app’s config file, but with no success.
I’m currently running this on a Xavier NX. Any help would be appreciated, thank you.

Hi @wimpy,
Can you provide the setup info?
Is your YOLOv3 model trained with TLT?

Can you try setting “force-implicit-batch-dim=1” in the nvinfer config file?
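For reference, this is a nvinfer plugin property, so it goes in the `[property]` group of the model’s config file. A minimal sketch (surrounding keys are illustrative placeholders, not taken from this thread):

```ini
[property]
# Force TensorRT to build the network with an implicit batch dimension,
# as suggested above for the IPluginV2DynamicExt error.
force-implicit-batch-dim=1
# Placeholder paths -- substitute your own model files.
tlt-encoded-model=yolo.etlt
model-engine-file=yolo.engine
```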

I get the same issue; I am converting with tlt-converter on the Jetson.

The .etlt was retrained using TLT on amd64 and converts fine on amd64. But converting the same .etlt on the Jetson with the same parameters (using the arm64 tlt-converter) gives this error.
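For context, the conversion command looks roughly like the following on both platforms. This is a hedged sketch: the key, input dimensions, and output node name are placeholders and must match your own training setup (for TLT YOLOv3 the detection output is typically the BatchedNMS node, which is exactly the plugin named in the error):

```shell
# Sketch of a tlt-converter invocation -- all values below are placeholders.
./tlt-converter \
  -k $TLT_KEY \            # encryption key used during TLT training
  -d 3,384,1248 \          # input dims (C,H,W) -- must match training config
  -o BatchedNMS \          # output node; the plugin the error complains about
  -t fp16 \                # target precision
  -e yolo.engine \         # output engine path
  yolo.etlt
```

The same command succeeding on amd64 but failing on arm64 points at the plugin library (libnvinfer_plugin) differing between the two machines rather than at the .etlt itself.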

Using: TRT 7.1.3

Perhaps a missing dependency?

Please revert the “Update batchedNMS plugin to IPluginV2DynamicExt” change in TensorRT OSS (https://github.com/NVIDIA/TensorRT), and rebuild following the README under https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS

I resolved it by installing libnvinfer_plugin.so.7.0.0 built from the TRT OSS 7.0 branch, even though I am using TRT 7.1.3.

The current TLT 2.0 release does not support the TRT OSS 7.1 branch. Please sync the TRT OSS 7.0 branch according to the TLT user guide or https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS
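The steps above amount to building libnvinfer_plugin from the 7.0 branch and swapping it in. A rough sketch for a Jetson Xavier/NX, assuming the layout described in the deepstream_tlt_apps TRT-OSS README (the GPU_ARCHS value and library paths are assumptions for Xavier-class boards; verify them against the README for your device):

```shell
# Sketch only -- check branch name, GPU_ARCHS, and paths for your setup.
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir build && cd build
cmake .. -DGPU_ARCHS=72 \          # sm_72 assumed for Xavier/NX
         -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu
make nvinfer_plugin -j$(nproc)
# Back up the stock plugin, then install the rebuilt one.
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ~/libnvinfer_plugin.so.7.1.3.bak
sudo cp libnvinfer_plugin.so.7.0.0 /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```

After replacing the library, rerun tlt-converter; the BatchedNMS plugin from the 7.0 branch still uses the implicit-batch interface, which is what avoids the error.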

Alternatively, @magnusm, you can sync the TRT OSS master branch; that should also work.