How do I deploy a custom ONNX model in Graph Composer?

• Hardware Platform (Jetson / GPU)
x86-64 Ubuntu 18.04 machine with GeForce GTX 1660 Super

• DeepStream Version
6.0

• TensorRT Version
8.4.2-1+cuda11.6 (reported by dpkg -l | grep TensorRT), although my installed CUDA version is 11.4

• NVIDIA GPU Driver Version (valid for GPU only)
470.129.06

• Issue Type( questions, new requirements, bugs)
Question


I have seen a lot of articles and posts saying that custom models (specifically in the ONNX format) can be deployed on DeepStream via Graph Composer, but I am unable to find a comprehensive guide on how to do this. I have seen other questions in this forum that suggest using nvinfer; however, the linked article doesn't really explain how to do this.

When trying to figure this out myself, I found the following possible solution: create an NvDsInferVideo node and pass an nvinfer config file as its config-file-path parameter. Based on this, I tried to write a config file for my ONNX model, starting from a template I found online:

deepstream_custom_nvinfer_config.txt (3.0 KB)
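For reference, a minimal nvinfer config for an ONNX object detector generally looks something like the sketch below; the file names, class count, and preprocessing values here are just placeholders and depend on how the model was trained:

```
[property]
gpu-id=0
# 1/255, for models trained on inputs scaled to [0,1]
net-scale-factor=0.0039215697906911373
# nvinfer builds a TensorRT engine from the ONNX file on first run
onnx-file=model.onnx
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
gie-unique-id=1
# 0=Detector, 1=Classifier, 2=Segmentation
network-type=0
# 0=Group rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS, 4=None
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.25
```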

However, when I run my DeepStream graph, no bounding boxes appear. Are there any parameters I'm missing, and is there a proper tutorial I can follow, other than the Gst-nvinfer — DeepStream 6.1.1 Release documentation, to learn how to deploy an ONNX model in the DeepStream Graph Composer?

Edit 1: I found an error in my config file: I had set network-type=2 when it should have been 0 (my model is an object detector). After fixing this and running the graph again, I now get the errors "Could not find output coverage layer for parsing objects" and then "Failed to parse bboxes". This leads me to believe that I am doing something fundamentally wrong in my attempt to use a custom model. Is editing this nvinfer config file the correct method, and if so, what am I doing wrong?
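Looking at that error, my guess is that nvinfer fell back to its built-in parser, which expects DetectNet_v2-style coverage/bbox output layers. If so, a model with any other output layout would also need the config to point at a custom bounding-box parsing function, roughly like this (the function and library names below are purely illustrative placeholders):

```
# Only needed when the model's outputs are not in the default
# coverage/bbox layout; both names here are hypothetical
parse-bbox-func-name=NvDsInferParseCustomMyModel
custom-lib-path=/path/to/libnvdsinfer_custom_impl_mymodel.so
```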

To add a new model to NvDsInferenceExt in Graph Composer, you would need to modify NvDsInferenceExt itself; there is no exported interface for implementing this.

You can only customize a new model with the DeepStream C/C++ or Python APIs.


Alright, thank you for letting me know. However, I have found another solution: using the custom-lib-path parameter together with the YOLOv3 custom model implementation, as well as engine-create-func-name.
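For anyone who finds this later, the relevant part of the config ends up looking roughly like the snippet below, based on the objectDetector_Yolo sample that ships with DeepStream; the library path depends on where the custom implementation is built on your system:

```
# Parser and engine-builder from the DeepStream objectDetector_Yolo sample;
# the .so must be built first and the path adjusted to where it lives
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```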
