ONNX model integration with DeepStream parallel inference app

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version: 525.85.12
• Issue Type: Question

Hi,
I want to develop a video analytics solution using the DeepStream parallel inference app, as shown in the deepstream parallel inference app repo.

I want to develop a single branch for which I have a trained ONNX model file, but I am unsure how to integrate it with the pipeline.

I have already done this for TAO-trained models, where I could easily find config files for the .etlt files and just needed to modify them to my needs, but I cannot do the same for ONNX models. Hence, I would like to know how one can use a custom ONNX model. I tried using the BYOM converter, but some of the objects in my model are not supported by BYOM.

Regards,
Pradyumna Yadav

Can you elaborate on what config you want to apply?
DeepStream supports ONNX models (via the onnx-file option in the config file); can that work for your case?

Can you share an example of a configuration file where an ONNX model is integrated?

Here is one example: deepstream-6.2/sources/apps/sample_apps/deepstream-audio/configs
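
For reference, a minimal sketch of the [property] group for loading an ONNX model with gst-nvinfer (the paths and values below are placeholders, not from your setup; the key names are documented in the Gst-nvinfer section):

[property]
gpu-id=0
# TensorRT builds an engine from the ONNX file on the first run
onnx-file=model.onnx
# the generated engine is reused on later runs if this file exists
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
gie-unique-id=1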

DeepStream already supports ONNX models. See Gst-nvinfer — DeepStream 6.2 Release documentation.

What are the input layers and output layers of your ONNX model?

These are the input and output of the ONNX model.

Input:

name: input_1:0
type: float32[num,256,256,3]

Output:

name: predictions/Softmax:0
type: float32[num,2]

The model looks like a classifier.

Please refer to the classifier sample here: NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com)
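
Based on the input/output you posted, the classifier-specific keys could look like this sketch (placeholders again; network-input-order=1 is assumed here because the input float32[num,256,256,3] is NHWC):

[property]
onnx-file=model.onnx
# 1 = classifier
network-type=1
# 0 = NCHW, 1 = NHWC
network-input-order=1
output-blob-names=predictions/Softmax:0
labelfile-path=labels.txt
classifier-threshold=0.5
# 1 = primary (full frame), 2 = secondary (on detected objects)
process-mode=2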

Thank you for sharing the resource, but as I mentioned already, this model isn't TAO-trained, so I have neither the model configs nor the model engine that are usually generated from TAO.

Since it is the same question as Deepstream branch giving same output from a TAO trained model - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums, let's discuss it in that topic and close this one.

I would request you to understand the difference between the two topics: this one is about ONNX model integration (not TAO-trained), while Deepstream branch giving same output from a TAO trained model - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums is about a TAO-trained model for which I am unable to get the desired outputs on real-data inference.

You need to write the class parsing function yourself, since your output layer is not in CHW dimension order.

And the label file should be in this format:
ink_thrown;no__ink
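
As a starting point, here is a minimal sketch of such a custom parsing function for your 2-class softmax output. The function name NvDsInferClassifierParseSoftmax2 and the hard-coded labels are illustrative assumptions; the signature and the prototype-check macro come from nvdsinfer_custom_impl.h:

#include <cstring>
#include <string>
#include <vector>

#include "nvdsinfer_custom_impl.h"

// Label order must match the label file: ink_thrown;no__ink
static const char* kLabels[2] = { "ink_thrown", "no__ink" };

extern "C" bool NvDsInferClassifierParseSoftmax2(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute>& attrList,
    std::string& descString)
{
    // Expect one output layer (predictions/Softmax:0) holding two floats.
    if (outputLayersInfo.empty() || outputLayersInfo[0].buffer == nullptr)
        return false;

    const float* probs = static_cast<const float*>(outputLayersInfo[0].buffer);

    // Argmax over the two class probabilities.
    unsigned int best = (probs[0] >= probs[1]) ? 0u : 1u;
    if (probs[best] < classifierThreshold)
        return true; // below threshold: attach no attribute

    NvDsInferAttribute attr;
    attr.attributeIndex = 0;                      // single "class" attribute
    attr.attributeValue = best;                   // winning class id
    attr.attributeConfidence = probs[best];
    attr.attributeLabel = strdup(kLabels[best]);  // freed downstream by nvinfer
    attrList.push_back(attr);
    descString = kLabels[best];

    return true;
}

// Compile-time check that the function matches the expected prototype.
CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(NvDsInferClassifierParseSoftmax2);

Compiled into a shared library, it would be hooked into the nvinfer config with parse-classifier-func-name=NvDsInferClassifierParseSoftmax2 and custom-lib-path pointing at the built .so.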

I think I can get some help using this: Using a Custom Model with DeepStream — DeepStream 6.3 Release documentation. I am figuring out what needs to be changed here; could you suggest some changes to speed me up on this?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There is a sample of customizing classifier output parsing with gst-nvinfer. Please refer to NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream (github.com)

The LPR model is a classifier whose output is not in CHW dimension order.
