Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0
• JetPack Version: 4.6.5
• TensorRT Version: 8.2
• Issue Type: questions
Hello NVIDIA Community,
I’m working on a project where I have trained a custom object detection model using Azure Custom Vision, and exported it in the ONNX format. Now, I would like to deploy this model on my Jetson Nano using DeepStream for real-time inference. Additionally, I need to use a CSI camera as the video input source.
Could anyone provide guidance on the following:
• How do I set up and run the custom ONNX object detection model on the Jetson Nano with DeepStream?
• How do I configure DeepStream to use the CSI camera as the video input source for real-time inference?
• Are there any specific steps I need to follow to ensure the ONNX model is compatible with DeepStream?
• Are there any additional tips for optimizing performance on the Jetson Nano when running an object detection model with DeepStream?
I’ve already installed the necessary dependencies for DeepStream and the Jetson Nano, but I’m not sure how to integrate the ONNX model and set up the video input from the CSI camera.
Any help or pointers would be greatly appreciated!
Gst-nvinfer (see the Gst-nvinfer page of the DeepStream documentation) is the component that uses TensorRT to deploy ONNX models. Please refer to the /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test1 sample for how to construct a basic inferencing pipeline.
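To make that more concrete, here is a minimal sketch of a Gst-nvinfer configuration for an ONNX detector. It is not a drop-in config for the Custom Vision export: the file names, engine name, class count, and parser settings are placeholders you will need to adapt to your model.

```
# config_infer_custom_vision.txt — hypothetical file name
[property]
gpu-id=0
# ONNX model exported from Azure Custom Vision
onnx-file=model.onnx
# nvinfer builds and caches a TensorRT engine here on first run
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 2 = FP16, usually the best fit for Jetson Nano
network-mode=2
# Set to the number of classes in your Custom Vision project
num-detected-classes=3
gie-unique-id=1
# Custom Vision models usually need a custom bounding-box parser because their
# output tensors do not match the default DeepStream detector parsers.
# The names below are placeholders for such a parser library.
# parse-bbox-func-name=NvDsInferParseCustomVision
# custom-lib-path=libnvds_infercustomparser_customvision.so
```

For the CSI camera input, a gst-launch-1.0 pipeline along the same lines as deepstream-test1 can use nvarguscamerasrc as the source; the resolution, framerate, and config file path below are assumptions to adjust for your sensor:

```
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1,format=NV12' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_custom_vision.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
```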
You need to try your ONNX model with the corresponding TensorRT version to check whether the model is supported by TensorRT.
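For example, a quick way to check is to run the model through trtexec directly on the Nano (the model file name is a placeholder):

```
# trtexec ships with TensorRT under JetPack
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --fp16 --saveEngine=model_fp16.engine
```

If trtexec builds the engine without errors, the model's operators are supported by that TensorRT version; any unsupported-layer errors will show up here before DeepStream is involved.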
If you only care about model optimization, please raise a topic in the TensorRT forum. For DeepStream pipeline optimization, you may refer to the Troubleshooting page of the DeepStream documentation.
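For the Nano specifically, two commonly used knobs in the nvinfer config are FP16 mode and the interval property; the values below are only illustrative and depend on your model and scene:

```
[property]
# 2 = FP16; the Nano's GPU does not support INT8, so FP16 is typically fastest
network-mode=2
# Skip inference on every other batch and reuse the previous results,
# roughly halving the inference load
interval=1
```

Lowering the nvstreammux width/height and setting sync=0 on the sink element can also help keep the pipeline running in real time.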
There has been no update from you for a period, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.