ONNX as inference model (YoloV8)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) = Jetson Nano
• DeepStream Version = 6.0.1
• JetPack Version (valid for Jetson only) = 4.6.1
• TensorRT Version = 8.2.1.8-1+cuda10.2
• Python version = 3.6.9
• Issue Type (questions, new requirements, bugs) = Question: I am struggling to create a parser for the ONNX model that I have.

Hi, sorry for the long description; I wanted to make sure that I did not leave anything out.

I have created a pipeline that can receive a real-time video stream and then do object detection and object tracking on it. So far I have used the resnet10 model for inference, but I wanted to switch to a more accurate and up-to-date model. I am looking to use YOLOv8 (and maybe later YOLO-NAS) as an inference model. I used this website to aid me in converting a yolov8.pt file to an ONNX file:

Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK | Seeed Studio Wiki.

(The website says that there is a file called “gen_wts_yoloV8.py”, but I think that the script was renamed to “export_yoloV8.py”.) The website also states that the script will extract the cfg and weights files, when in fact it only exports the ONNX file (which is okay for my purpose). Just to make sure that the ONNX file was exported properly, I wrote a function that reports the outputs of an ONNX model; when I ran it against the exported file, I got these results:
“”"
Output ‘boxes’ shape: [1, 8400, 4]
Output ‘scores’ shape: [1, 8400, 1]
Output ‘classes’ shape: [1, 8400, 1]
“”"

So it seems that the ONNX file was exported properly.
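(For reference, the check was roughly the following; a minimal sketch assuming the onnx Python package, not the exact script I used:)

```
# Minimal sketch: load the exported model and print the name and shape of each
# graph output (dynamic dimensions show up under their symbolic name).
import onnx

model = onnx.load("yolov8n.onnx")
for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print(f"Output '{out.name}' shape: {dims}")
```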
(I used this command to create the ONNX file:

python3 export_yoloV8.py -w yolov8n.pt --dynamic

I had to use the --dynamic flag (instead of --simplify) because I was struggling to install the onnxsim package (I believe my Python version is not recent enough)
)
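(If onnxsim can be installed, for example on a newer Python, the simplification step can also be run directly from Python. A rough sketch, not something I could run on Python 3.6:)

```
# Rough sketch of simplifying the exported model with onnx-simplifier
# (assumes an onnxsim/Python combination newer than my JetPack 4.6 setup).
import onnx
from onnxsim import simplify

model = onnx.load("yolov8n.onnx")
model_simplified, ok = simplify(model)
assert ok, "simplified ONNX model failed validation"
onnx.save(model_simplified, "yolov8n_sim.onnx")
```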
Now for the difficult part…
DeepStream only ships prebuilt parsers for these models:

  • FasterRCNN
  • MaskRCNN
  • SSD
  • YoloV3 / YoloV3Tiny / YoloV2 / YoloV2Tiny
  • DetectNet

Here is the website where I found this information:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html

I am unsure of how to create a parser, so I looked online and found this website to help me build a parser that can handle my ONNX file:

I used this command to build the .so file:

CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
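(A quick sanity check, with paths assumed from the build command above and the custom-lib-path in the config below, that the built .so actually sits where the config expects it:)

```
# The make above drops libnvdsinfer_custom_impl_Yolo.so inside the
# nvdsinfer_custom_impl_Yolo folder, while custom-lib-path in my config points
# at the deepstream-test1 app folder; both paths below are assumptions from my
# setup, so adjust them as needed.
import os

built = "nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so"
target = ("/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/"
          "apps/deepstream-test1/libnvdsinfer_custom_impl_Yolo.so")

print("built lib exists:", os.path.isfile(built))
print("lib at custom-lib-path exists:", os.path.isfile(target))
```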

This is my config file for my inference model:
```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/yolov8n.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/labels_YOLO.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
force-implicit-batch-dim=0
output-blob-names=boxes;scores
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/libnvdsinfer_custom_impl_Yolo.so

network-mode=0
process-mode=1
network-type=0
cluster-mode=2
parse-bbox-func-name=NvDsInferParseYolo
maintain-aspect-ratio=1
symmetric-padding=1

[class-attrs-all]
pre-cluster-threshold=0.4
eps=0.2
group-threshold=1
“”"

After I execute the program, I get this response:

“”"
Opening in BLOCKING MODE
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine open error
0:00:03.659054294 13578 0x15b26b50 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine failed
0:00:03.660209729 13578 0x15b26b50 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.660261813 13578 0x15b26b50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Ceil_212: IUnaryLayer cannot be used to compute a shape tensor)
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 214 [ConstantOfShape → “361”]:
ERROR: [TRT]: ModelImporter.cpp:774: — Begin node —
ERROR: [TRT]: ModelImporter.cpp:775: input: “360”
output: “361”
name: “ConstantOfShape_214”
op_type: “ConstantOfShape”
attribute {
name: “value”
t {
dims: 1
data_type: 1
raw_data: “\000\000\200?”
}
type: TENSOR
}
ERROR: [TRT]: ModelImporter.cpp:776: — End node —
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - ConstantOfShape_214
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Ceil_212: IUnaryLayer cannot be used to compute a shape tensor)
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:04.299932545 13578 0x15b26b50 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.301080479 13578 0x15b26b50 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.301160324 13578 0x15b26b50 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.301315014 13578 0x15b26b50 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:04.301350588 13578 0x15b26b50 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: pgie_config_YOLO.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: pgie_config_YOLO.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

“”"
I have looked around and could not find any resources that describe how to create a parser for a YOLOv8 ONNX file, and I am very unsure of what the actual problem might be. I’m assuming it is because the parser is faulty, but I could also be wrong. Does anyone know of anything that might send me in the right direction? Thank you very much for the effort.

Hi, I am no expert on this, but I used a custom ONNX file with the following property in the pgie config:

parse-bbox-func-name=NvDsInferParseCustomONNX

I changed the config file to use “parse-bbox-func-name=NvDsInferParseCustomONNX”, but I still get the same error.

From the log, the app failed to generate the engine for yolov8.onnx. I will try to reproduce.

After testing on DeepStream 6.2 + Orin, I can’t reproduce this error; the app succeeds in generating the engine. I used the same steps as the website above. Here is the log:
log.txt (2.4 KB)

What opset did you use when you created the ONNX model? I’m going to retry all of the steps I mentioned in my question, but with DeepStream 6.2. (Did you use Python 3.8 to build the model?)

python3 export_yoloV8.py -w yolov8s.pt --dynamic
Python 3.8.10

I’ve just re-implemented the above steps, but with DeepStream 6.2 and Python 3.8.10. Now I also have 4 layers in my ONNX model. Here is my log file:
log.txt (5.2 KB)
Line 25 says “Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomONNX pointer”. I found this resource online where the same problem was solved: “Yolov4 with Deepstream 5.0 (Python sample apps), dlsym failed to get func NvDsInferParseCustomBatchedNMSTLT pointer”, but I am not sure what ‘gilles.charlier97’ meant by “I finally solved it by changing the libnvds_infercustomparser.so by the libnvds_infercustomparser-tlt.so that is inside the folder postprocessor of the Github repo deepstream_tlt_apps.”

From the log, the app succeeded in generating the engine. The new error “failed to init resource because dlsym failed to get func NvDsInferParseCustomONNX pointer” means that the app can’t find the parsing function NvDsInferParseCustomONNX in the library set by custom-lib-path.
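One quick way to verify this is to replicate the dlopen/dlsym lookup from Python with ctypes. A rough sketch; the library path is taken from the config above and should be adjusted to whatever custom-lib-path points to on your setup:

```
# Check whether the custom library actually exports the parsing function(s)
# nvinfer will look up with dlsym(). The path is an assumption taken from the
# custom-lib-path in the config above.
import ctypes

lib_path = ("/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/"
            "apps/deepstream-test1/libnvdsinfer_custom_impl_Yolo.so")
lib = ctypes.CDLL(lib_path)  # loads the library via dlopen()

for name in ("NvDsInferParseYolo", "NvDsInferParseCustomONNX"):
    try:
        getattr(lib, name)  # attribute access triggers a dlsym() lookup
        print(f"{name}: exported")
    except AttributeError:
        print(f"{name}: NOT exported")
```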

I set “parse-bbox-func-name=NvDsInferParseYolo”, but get the error: “nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:778> [UID = 1]: Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseYolo pointer”.

I am not sure why dlsym failed to get func NvDsInferParseYolo pointer…

Is NvDsInferParseCustomONNX your custom function? I did not see NvDsInferParseCustomONNX’s code. Please set parse-bbox-func-name to NvDsInferParseYolo.

Awesome, thank you! There were two problems: firstly the one that you pointed out, and secondly my ‘libnvdsinfer_custom_impl_Yolo.so’ was in the wrong location. Thank you for the help!
