How to run a pretrained custom object detection model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson)
• DeepStream Version 6.0
• JetPack Version 4.6.5
• TensorRT Version 8.2
• Issue Type (questions)

Hello NVIDIA Community,

I’m working on a project where I have trained a custom object detection model using Azure Custom Vision, and exported it in the ONNX format. Now, I would like to deploy this model on my Jetson Nano using DeepStream for real-time inference. Additionally, I need to use a CSI camera as the video input source.

Could anyone provide guidance on the following:

  1. How to set up and run the custom ONNX object detection model on Jetson Nano with DeepStream?
  2. How to configure DeepStream to use the CSI camera as the video input source (for real-time inference)?
  3. Are there any specific steps I need to follow to ensure the compatibility of the ONNX model with DeepStream?
  4. Any additional tips for optimizing performance on Jetson Nano with DeepStream when using an object detection model?

I’ve already installed the necessary dependencies for DeepStream and the Jetson Nano, but I’m not sure how to integrate the ONNX model and set up the video input from the CSI camera.

Any help or pointers would be greatly appreciated!

Thanks in advance!

What type of Jetson board are you working with?

Gst-nvinfer (see the Gst-nvinfer — DeepStream documentation) is the component which uses TensorRT to deploy ONNX models. Please refer to the /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test1 sample for how to construct a basic inferencing pipeline.
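Purely as an illustration (the official deepstream-test1 sample is a C application at the path above), a similar pipeline can be sketched from Python with the GStreamer bindings. The sample stream path and the config_infer_custom.txt name below are placeholders for your own media and nvinfer configuration:

```python
#!/usr/bin/env python3
# Sketch of a deepstream-test1 style pipeline driven from Python.
# The sample stream and config_infer_custom.txt are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# file source -> H.264 decode -> nvstreammux -> nvinfer (your ONNX model)
# -> OSD overlay -> on-screen display (Jetson EGL sink)
pipeline = Gst.parse_launch(
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! "
    "h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_custom.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *_: loop.quit())
bus.connect("message::error", lambda *_: loop.quit())

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```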

The DeepStream SDK FAQ (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) gives detailed instructions for the gst-nvinfer parameters.
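For illustration only, the snippet below writes a placeholder gst-nvinfer configuration for an ONNX detector. Every value in it (model and label paths, class count, preprocessing scale, the custom parser entries) is an assumption that you must adapt to your Custom Vision export; the property names themselves follow the Gst-nvinfer documentation.

```python
# Illustration only: write a placeholder gst-nvinfer config for an ONNX detector.
# All values below are assumptions to be adapted to your Custom Vision export.
from pathlib import Path

NVINFER_CONFIG = """\
[property]
gpu-id=0
# 1/255 scaling is only an example; match your model's preprocessing
net-scale-factor=0.0039215697906911373
onnx-file=model.onnx
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=3
# network-type: 0=detector
network-type=0
gie-unique-id=1
# Custom Vision outputs usually need a custom bbox parser (names hypothetical):
# parse-bbox-func-name=NvDsInferParseCustomVision
# custom-lib-path=libnvds_infercustomparser_customvision.so

[class-attrs-all]
pre-cluster-threshold=0.4
"""

Path("config_infer_custom.txt").write_text(NVINFER_CONFIG)
print("wrote config_infer_custom.txt")
```

Note that Azure Custom Vision ONNX exports usually do not match the default detector output layout, so a custom bounding-box parser library (the commented-out entries) is typically required.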

For Jetson, if your camera is supported by the Argus API based camera interface (Argus NvRaw Tool — NVIDIA Jetson Linux Developer Guide documentation), you can use "nvarguscamerasrc" (Accelerated GStreamer — NVIDIA Jetson Linux Developer Guide documentation) in the DeepStream pipeline. Otherwise, you can use the general V4L2 camera source v4l2src in the DeepStream pipeline and use Gst-nvvideoconvert (Gst-nvvideoconvert — DeepStream documentation) to convert the general buffer to the NVIDIA hardware buffer.
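As a rough sketch (element names per the docs linked above; resolution, frame rate, sensor-id and the /dev/video0 device node are assumptions), the CSI camera could be wired into the same pipeline like this:

```python
#!/usr/bin/env python3
# Sketch: CSI camera via nvarguscamerasrc feeding the same DeepStream pipeline.
# Resolution, frame rate, sensor-id and /dev/video0 are assumptions.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

CSI_SOURCE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1,format=NV12"
)
# For a general V4L2/USB camera the source branch could instead be:
# "v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720 ! "
# "videoconvert ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12"

pipeline = Gst.parse_launch(
    CSI_SOURCE + " ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 live-source=1 ! "
    "nvinfer config-file-path=config_infer_custom.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0"
)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```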

You need to try your ONNX model with the corresponding TensorRT version to check whether the model is supported by TensorRT or not.
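One way to do that check directly on the Nano is with the trtexec tool that ships with TensorRT, or from the TensorRT Python API along these lines (model.onnx is a placeholder for your Custom Vision export; the API shown is TensorRT 8.x):

```python
#!/usr/bin/env python3
# Quick compatibility check: parse the ONNX model and build a TensorRT engine
# on the device. "model.onnx" is a placeholder for your Custom Vision export.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX model is not supported by this TensorRT version")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB; keep modest on a Nano
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)

engine = builder.build_serialized_network(network, config)
if engine is None:
    raise SystemExit("TensorRT engine build failed")
with open("model.engine", "wb") as f:
    f.write(engine)
print("engine built successfully")
```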

If you only care about the model optimization, please raise a topic in the TensorRT forum. For DeepStream pipeline optimization, you may refer to Troubleshooting — DeepStream documentation.

It is better to read the DeepStream documentation (Welcome to the DeepStream Documentation — DeepStream 6.0.1 Release documentation) first.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.