Error in exporting engine file using tlt-converter tool

Please provide the following information when requesting support.

• Hardware: Nano
• Network Type: Yolo_v3

@Morganh

The last time I built the YOLO engine with this tlt-converter command, it worked. Now the same command reports an error. Why?

./tlt-converter -k tlt_encode -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt
[ERROR] Number of optimization profiles does not match model input node number.
Aborted
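
For reference, tlt-converter expects exactly one -p entry per model input node, in the form <input_name>,<min_shape>,<opt_shape>,<max_shape>, so a single-input model should pass exactly one profile. A sketch with the same paths as above (the -e flag, which names the output engine file, is added here only for illustration):

# one -p entry per input node; a model with a single input named "Input" takes exactly one
./tlt-converter -k <your_key> -d 3,544,960 \
    -p Input,1x3x544x960,1x3x544x960,2x3x544x960 \
    -e ./models/yolo3/yolov3_resnet18.engine \
    ./models/yolo3/yolov3_resnet18.etlt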

or
./tlt-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt

[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (2, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
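
One way to check whether the installed libnvinfer_plugin actually registers the missing BatchedNMSDynamic_TRT creator (a diagnostic sketch; the path below is the usual Jetson location and may differ on your system):

# the creator name should appear if the TRT OSS plugin build is in place
strings /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so | grep -i batchednms

If nothing is printed, the stock library is still being picked up and the rebuilt OSS plugin has not replaced it.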

Or, after I reinstalled the plugin, the same command still reports an error, as shown below:
./tlt-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 -t fp16 models/yolo3/yolov3_resnet18.etlt

[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)

Please double-check the TRT OSS plugin.
Refer to Tlt-convert for custom trained YoloV4 model failed on Jetson Nano 4G - #42 by Morganh
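
For convenience, a condensed sketch of the rebuild steps from that guide, assuming TensorRT 7.1 on a Nano (GPU_ARCHS=53); the guide lists the exact cmake version it requires:

git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# the rebuilt library lands in build/out/ and then replaces the stock one, as shown below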

@jazeel.jk
@Morganh

I have the same problem as you. I can't use fp16, and I end up with only two symlinks for the plugin library.

[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)
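
Since the log also warns about insufficient workspace, one thing that might be worth trying (my assumption, not a confirmed fix) is setting the workspace size explicitly; tlt-converter's -w flag takes the size in bytes:

# retry fp16 with an explicit 1 GiB workspace (-w is in bytes)
./tlt-converter -k nvidia_tlt -d 3,544,960 \
    -p Input,1x3x544x960,1x3x544x960,2x3x544x960 \
    -t fp16 -w 1073741824 \
    models/yolo3/yolov3_resnet18.etlt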

I followed deepstream_tao_apps/TRT-OSS/Jetson/TRT7.1 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub exactly.

As mentioned in Inferring detectnet_v2 .trt model in python - #27 by Morganh, please double-check.

For example,

Original:

$ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0
0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0
4.5M -rw-r--r-- 1 root root 4.5M Apr 26 16:38 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

If

$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 ~/libnvinfer_plugin.so.7.1.0.bak

$ sudo cp {TRT_SOURCE}/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

$ sudo ldconfig

then

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

I can't see libnvinfer_plugin.so.7.0.0.1

I seem to have solved the problem by manually adding a symlink:
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0
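
To confirm the dynamic loader now resolves that name, refreshing the cache and grepping it should list the plugin entries (a quick check, assuming the library lives in the standard path):

sudo ldconfig
ldconfig -p | grep libnvinfer_plugin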

Yes, my comment above is just an example.
Glad to see you have fixed the problem.