Generation of a TensorRT engine using an ONNX model file

Hi

Can we generate a TensorRT engine file using an ONNX file?

[property]
gpu-id=0
net-scale-factor=0.017507
model-color-format=0
onnx-file=152_best_7class.onnx
labelfile-path=labels_ship.txt
batch-size=1
network-mode=0
num-detected-classes=7
interval=0
gie-unique-id=1
process-mode=1
network-type=0
infer-on-gie-id=1
infer-on-class-ids=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

The above config file is the property configuration for the nvinfer element; I have loaded the ONNX file along with the other parameters.

The error I am getting is as follows:

Linking complete0:00:00.206644865 779650 0x556cff28b670 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
YOLO config file or weights file is not specified

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:01.419374027 779650 0x556cff28b670 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:01.497162951 779650 0x556cff28b670 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:01.497188463 779650 0x556cff28b670 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:01.497249115 779650 0x556cff28b670 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.497255056 779650 0x556cff28b670 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: /home/aimluser2/ship_multiple/ship_config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element nvinfer: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:nvinfer:
Config file path: /home/aimluser2/ship_multiple/ship_config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0

Can you please help me to solve this?

Yes, you can. Use the trtexec tool:

cd /usr/src/tensorrt/bin

./trtexec --onnx=path/to/152_best_7class.onnx --saveEngine=/path/to/152_best_7class.engine

Run ./trtexec --help to see more options, like --fp16 or --int8.
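
For example, a minimal FP16 build might look like the sketch below (the output file name is just an example, and the exact set of supported flags depends on your TensorRT version, so check --help):

cd /usr/src/tensorrt/bin
./trtexec --onnx=path/to/152_best_7class.onnx \
          --saveEngine=path/to/152_best_7class_fp16.engine \
          --fp16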

Once you have your engine file, modify your config file by adding:

...
onnx-file=152_best_7class.onnx
model-engine-file=152_best_7class.engine  <---
labelfile-path=labels_ship.txt
...

This is the configuration file which I have used:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0

onnx-file=/home/aimluser2/demo-friday/yolov5s.onnx
model-engine-file=/home/aimluser2/demo-friday/yolov5s.engine
labelfile-path=labels80.txt
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolov5.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.01
pre-cluster-threshold=0.15
topk=300

I have generated the .engine file of a pretrained YOLOv5 model using trtexec and ran the pipeline.

The output I am getting is as below:
Running…

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.857: gst_clock_get_time: assertion ‘GST_IS_CLOCK (clock)’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.857: gst_object_unref: assertion ‘object != NULL’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.858: gst_clock_get_time: assertion ‘GST_IS_CLOCK (clock)’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.858: gst_object_unref: assertion ‘object != NULL’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.858: gst_clock_get_time: assertion ‘GST_IS_CLOCK (clock)’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.858: gst_object_unref: assertion ‘object != NULL’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.859: gst_clock_get_time: assertion ‘GST_IS_CLOCK (clock)’ failed

(Irst2023:2360486): GStreamer-CRITICAL **: 12:18:47.859: gst_object_unref: assertion ‘object != NULL’ failed
WARNING: Num classes mismatch. Configured: 80, detected by network: 0
Segmentation fault (core dumped)

One more observation: we use custom-lib-path=libnvdsinfer_custom_impl_Yolov5.so to generate the .engine file, so I removed those parameters from the config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0

onnx-file=/home/aimluser2/demo-friday/yolov5s.onnx
model-engine-file=/home/aimluser2/demo-friday/yolov5s.engine

labelfile-path=labels80.txt
num-detected-classes=80

[class-attrs-all]
nms-iou-threshold=0.01
pre-cluster-threshold=0.15
topk=300

The output I was getting was:
pads linked…
adding osd probeno appsrc…
Linking complete0:00:00.169604515 2361898 0x557a019eef00 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 0]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 0]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.169748421 2361898 0x557a019eef00 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 0]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1185> [UID = 0]: Unique ID not set
0:00:00.169766123 2361898 0x557a019eef00 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.169770785 2361898 0x557a019eef00 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element nvinfer: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:nvinfer:
Config file path: config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0
Returned, stopping playback
Deleting pipeline

Do we need to add or delete some parameters in the config file, or is there any way to add the engine file to the pipeline without adding the nvinfer element?

I am assuming that you were able to create the engine using the trtexec tool, so this issue is done.

One more observation: we use custom-lib-path=libnvdsinfer_custom_impl_Yolov5.so to generate the .engine file, so I removed those parameters from the config file

No, that is incorrect. Please refer to this link to understand the meaning of the properties used in the config files. The custom-lib-path is necessary when you are using your own custom models.

Please refer to this GitHub repo for the YOLO implementation using the DeepStream SDK. It will help you understand the custom postprocessing function and the config files better.
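
To make the config side concrete, below is a sketch of the entries from your earlier config that need to stay in place (the parser function and custom library names are taken from your own post and depend on the YOLO implementation you build, so treat them as placeholders). Note that your stripped-down config also dropped gie-unique-id, which is what the "Unique ID not set" error in your second log points to:

...
onnx-file=/home/aimluser2/demo-friday/yolov5s.onnx
model-engine-file=/home/aimluser2/demo-friday/yolov5s.engine
labelfile-path=labels80.txt
num-detected-classes=80
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolov5.so
...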