YOLOv4 not working in the DeepStream app?

I have trained YOLOv4 on my own data; the .etlt file, engine, and calibration file were created successfully.
Now, when I try to run YOLOv4 with the DeepStream app, it throws an error.

The same procedure works for DetectNet, but not for YOLOv4.

Could you please share the full log along with the command line?

Sorry for the late reply. The error was due to the TensorRT version, and that issue is now resolved, but the file only runs on my PC. On the Jetson, I am not able to convert the YOLO .etlt file to an engine file with tlt-converter; when I try to convert, it throws an error.

Command:

sudo ./tlt-converter -k tlt_encode -d 3,544,960 -e trt.fp16.engine -o BatchedNMS -t fp16 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 -m 1 -i nchw -w 1610612736 /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov3/yolov3_resnet18.etlt

Error:

[ERROR] Number of optimization profiles does not match model input node number.
Aborted
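For what it's worth, that error reads as a count mismatch: tlt-converter appears to expect exactly one `-p name,min,opt,max` entry per model input tensor. A rough Python sketch of that invariant (this is not tlt-converter's actual code; the function names are purely illustrative):

```python
def parse_profile(entry: str):
    """Split a '-p' value such as
    'Input,1x3x544x960,1x3x544x960,2x3x544x960'
    into (tensor_name, (min_shape, opt_shape, max_shape))."""
    name, *shapes = entry.split(",")
    if len(shapes) != 3:
        raise ValueError(f"expected min,opt,max shapes, got {len(shapes)}")
    return name, tuple(tuple(int(d) for d in s.split("x")) for s in shapes)

def check_profiles(profile_entries, model_input_names):
    """Mimic the check the log message suggests: one optimization
    profile per model input node, no more and no fewer."""
    if len(profile_entries) != len(model_input_names):
        raise ValueError(
            "Number of optimization profiles does not match "
            "model input node number"
        )
```

Under this reading, if the yolov3_resnet18.etlt used above was exported without a dynamic input (older YOLOv3 exports were static), the input set seen by the profile machinery would not line up with the single `-p Input,...` entry, which could explain the abort.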

Can you share the full log?

And where did you download tlt-converter?

Full log:
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped (repeated 12 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (2, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault

Details:
I downloaded tlt-converter (JetPack 4.5) from the official Transfer Learning Toolkit website.
Link : https://developer.nvidia.com/tlt-get-started

Please try -k nvidia_tlt
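Also, the `getPluginCreator could not find plugin BatchedNMSDynamic_TRT` lines point at the plugin library rather than the key: that creator ships with the TensorRT OSS plugins, so it is worth confirming that the libnvinfer_plugin deployed on the Jetson actually contains it. A crude stdlib-only sketch of such a check (the library path below is an assumption for JetPack 4.5; verify it on your device):

```python
from pathlib import Path

def library_mentions_plugin(lib_path: str, plugin_name: str) -> bool:
    """Crude check: scan the shared library's raw bytes for the plugin
    name string. Absence strongly suggests the creator is not compiled in."""
    return plugin_name.encode() in Path(lib_path).read_bytes()

# Illustrative call -- the path is an assumption, adjust for your JetPack:
# library_mentions_plugin(
#     "/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3",
#     "BatchedNMSDynamic_TRT")
```

If the name is missing, rebuilding libnvinfer_plugin from the TensorRT OSS branch matching TRT 7.1 is the commonly suggested remedy; treat that as a direction to verify, not a confirmed fix.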

I already tried -k nvidia_tlt, but I get the same error.

Command:

sudo ./tlt-converter -e /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine -p Input,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -d 3,544,960 -k nvidia_tlt -m 1 tlt_encode /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt

Error:

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped (repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (1, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault

Could you please double-check your command? The above does not look correct.

BTW, where did you run tlt-converter? On the Jetson device or on your host?

I am running it on the Jetson device. The above command works well on my PC, but it does not work on the Jetson.

There are two "/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine" paths. Can you double-check your command?

No, there are not two "/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine" paths.


sudo ./tlt-converter -e /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine -p Input,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -d 3,544,960 -k nvidia_tlt -m 1 tlt_encode /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt

Actually, these two are different.
One is the engine output file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine

The second is the model .etlt file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt

See above. That is not expected.

I followed this link.

Can you double-check your command?
Why is there "tlt_encode" in "-k nvidia_tlt -m 1 tlt_encode"?
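For context, a stray token like `tlt_encode` is typically parsed as an extra positional argument, so the converter can end up seeing two input-model paths. A minimal argparse sketch of that failure mode (illustrative flags only, not tlt-converter's real interface):

```python
import argparse

# Illustrative subset of the flags; the converter takes one positional
# input-model path at the end of the command line.
parser = argparse.ArgumentParser(prog="tlt-converter-sketch")
parser.add_argument("-k")
parser.add_argument("-m")
parser.add_argument("input_model")

try:
    # 'tlt_encode' fills the positional slot, leaving 'model.etlt' dangling.
    parser.parse_args(["-k", "nvidia_tlt", "-m", "1",
                       "tlt_encode", "model.etlt"])
except SystemExit:
    print("extra positional argument rejected")
```

A real converter might instead silently pick the wrong token as the model path, which is why the stray argument is worth removing before anything else.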

I removed tlt_encode from the command, but I get the same error.

Error:

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped (repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (1, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault

Command:

sudo ./tlt-converter -e /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine -p Input,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -d 3,544,960 -k nvidia_tlt -m 1 /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_python_apps/apps/deepstream_tlt_apps-master/models/yolov4/yolov4_resnet18.etlt

Could you please run a similar command to the one below?
In Tlt-convert for custom trained YoloV4 model failed on Jetson Nano 4G - #34 by Morganh, I generated an fp16 TRT engine successfully on a Nano.

$ ./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 yolov3_resnet18.etlt

The above command throws the same error on my side.

Error:
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped (repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (1, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault

OK, to narrow it down, could you try to generate an fp32 TRT engine via "-t fp32"?

FP32 and FP16 engines are generated successfully on the PC, but on the Jetson, FP32 throws the same error. Is it possible to build the fp32 engine file on my PC and use it on the Jetson, since fp32 is the general mode?

Sorry, I did not add my module versions. Below are my Jetson module details.
CUDA = 10.2
TRT = 7.1.3

It is not expected to get these errors when running tlt-converter, so may I get more info about your results?
On the host PC, do you run tlt-converter inside the TLT docker or outside it?
Both fp16 and fp32 engines can be generated successfully there, right?
And on the Jetson, do you mean that neither fp16 nor fp32 can be generated, right?