How to deploy a new ONNX model or TRT model for detection in DeepStream?

Hello, I have trained a custom object detection network; it has been saved as an ONNX model and converted to a TRT engine.
I'd like to deploy this network with DeepStream on a Jetson Xavier.
I am confused about the config file. In other words, is there an example config file for deploying an ONNX or TRT detection model?

Hi,

ONNX models are supported by TensorRT/DeepStream directly.
The procedure is similar to that for a TensorFlow-based model.
You can check this sample and give it a try:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
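For orientation, a minimal nvinfer config for an ONNX detector might look like the sketch below. The file names, class count, and parser function name are placeholders for your own model, not values taken from the SSD sample:

```ini
[property]
gpu-id=0
net-scale-factor=0.0078431372
# your exported model; nvinfer builds and caches a TRT engine from it
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=3
# custom output parser (placeholder names; see nvdsinfer_custom_impl.h)
parse-bbox-func-name=NvDsInferParseCustomMyModel
custom-lib-path=libnvdsinfer_custom_impl_mymodel.so
```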

Here is an implementation from the community for your reference:
https://github.com/thatbrguy/Deep-Stream-ONNX

Thanks.

Thanks for your reply.
I am reading the code from the links you gave.
But my custom network is not one of the popular detection or segmentation models,
so I am curious how to write the custom_bbox_parser for it accordingly.
Apart from popular networks like YOLO or SSD, can a self-defined model run with DeepStream?
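Self-defined models can run with DeepStream; the custom_bbox_parser is just a function that converts your model's raw output tensor into box structs (the real signature and structs are in nvdsinfer_custom_impl.h). A rough sketch of the decoding logic, using stand-in types rather than the real NvDsInfer structs, assuming a flat output laid out as [x_center, y_center, w, h, confidence, class_id] per detection:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for NvDsInferObjectDetectionInfo; the real struct is defined
// in nvdsinfer_custom_impl.h and also uses left/top/width/height fields.
struct Detection {
    float left, top, width, height, confidence;
    int classId;
};

// Decode a flat output tensor with 6 floats per detection:
// [x_center, y_center, w, h, confidence, class_id].
// In a real parser this body goes inside a function with the
// NvDsInferParseCustomFunc signature, registered via
// parse-bbox-func-name / custom-lib-path in the nvinfer config.
std::vector<Detection> parseDetections(const float* data,
                                       std::size_t numDetections,
                                       float confThreshold) {
    std::vector<Detection> out;
    for (std::size_t i = 0; i < numDetections; ++i) {
        const float* d = data + i * 6;
        float conf = d[4];
        if (conf < confThreshold)
            continue;                          // drop low-confidence boxes
        Detection det;
        det.width  = d[2];
        det.height = d[3];
        det.left   = d[0] - det.width / 2.0f;  // center -> top-left corner
        det.top    = d[1] - det.height / 2.0f;
        det.confidence = conf;
        det.classId = static_cast<int>(d[5]);
        out.push_back(det);
    }
    return out;
}
```

The layout above is an assumption for illustration; your own network's output layer may order the fields differently, which is exactly why the parser has to be custom.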

Hi,

Please refer to this page:
https://towardsdatascience.com/how-to-deploy-onnx-models-on-nvidia-jetson-nano-using-deepstream-b2872b99a031

Thanks.

Thanks.
But the link is unreachable, even with a VPN.

Hi,

Could you try it again?
I can open it without issue.

The tutorial title is “How to deploy ONNX models on NVIDIA Jetson Nano using DeepStream”, published by Bharath Raj.

Thanks.

Hi, I am trying GitHub - thatbrguy/Deep-Stream-ONNX: How to deploy ONNX models using DeepStream on Jetson Nano,
but there is an error at the end.
The path needs modification and the folder name needs updating.

Is there any video tutorial?

I am working on a tracking device using an NVIDIA Jetson Nano.
I have an ONNX model from Azure and just want to import it into DeepStream, then output the x,y coordinates from my webcam.
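Once nvinfer has attached detection metadata to each frame, the x,y position you want is just the center of the detected box; a tiny sketch of that arithmetic, with a stand-in Box type rather than the real DeepStream struct (in DeepStream the box lives in NvDsObjectMeta.rect_params):

```cpp
#include <utility>

// Stand-in box with top-left corner plus width/height, the same shape
// DeepStream reports in NvDsObjectMeta.rect_params.
struct Box {
    float left, top, width, height;
};

// Center of the detected object in frame pixel coordinates.
std::pair<float, float> centerOf(const Box& b) {
    return {b.left + b.width / 2.0f, b.top + b.height / 2.0f};
}
```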

Thanks