How to run a pretrained custom object detection model

What type of Jetson board are you working with?

Gst-nvinfer (Gst-nvinfer — DeepStream documentation) is the component that uses TensorRT to deploy ONNX models. Please refer to the /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test1 sample for how to construct a basic inferencing pipeline.
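As a rough illustration of what such a pipeline looks like, here is a minimal sketch in Python (the DeepStream Python bindings mirror the C sample). It assumes DeepStream is installed so the NVIDIA GStreamer plugins are on the plugin path; the sample stream path and `config_infer_primary.txt` are placeholders you would replace with your own file and nvinfer config:

```python
#!/usr/bin/env python3
# Minimal deepstream-test1-style pipeline, sketched with Gst.parse_launch.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# file -> parse -> HW decode -> batch (nvstreammux) -> TensorRT inference
# (nvinfer) -> convert -> on-screen display -> render.
# Use fakesink instead of nveglglessink on a headless device.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink "
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! "
    "h264parse ! nvv4l2decoder ! mux.sink_0"
)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda b, m: loop.quit())
bus.connect("message::error", lambda b, m: (print(m.parse_error()), loop.quit()))

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```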

The DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums covers the gst-nvinfer parameters in detail.
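For orientation, a gst-nvinfer configuration file for a custom ONNX detector typically looks like the sketch below. The file names, class count, and custom-parser entries are placeholders for your own model; the FAQ above explains each key in depth:

```
[property]
gpu-id=0
# Your exported model; gst-nvinfer invokes TensorRT to build an engine from it
onnx-file=model.onnx
# Cached engine from a previous run; rebuilt automatically if missing
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1
# Custom detectors usually also need a bounding-box parser
# (function and library names here are hypothetical):
# parse-bbox-func-name=NvDsInferParseCustomMyModel
# custom-lib-path=/path/to/libmycustomparser.so
```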

For Jetson, if your camera is supported by the Argus API based camera interface (Argus NvRaw Tool — NVIDIA Jetson Linux Developer Guide documentation), you can use "nvarguscamerasrc" (Accelerated GStreamer — NVIDIA Jetson Linux Developer Guide documentation) in the DeepStream pipeline. Otherwise, you can use the generic V4L2 camera source "v4l2src" in the DeepStream pipeline and use Gst-nvvideoconvert (Gst-nvvideoconvert — DeepStream documentation) to convert the general buffer to the NVIDIA hardware buffer.
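The two camera front-ends only differ in how frames reach nvstreammux; the inference tail is the same as in the file-based sketch above. This is a rough sketch, with the device node, resolution, and formats as placeholders for your camera:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Argus (CSI) camera: frames are produced directly in NVMM
# (NVIDIA hardware) memory, so no conversion is needed.
argus_src = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! mux.sink_0"
)

# Generic V4L2 (e.g. USB) camera: frames arrive in system memory, so
# nvvideoconvert copies them into an NVMM buffer before nvstreammux.
v4l2_src = (
    "v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0"
)

src = argus_src  # or v4l2_src, depending on the camera

pipeline = Gst.parse_launch(
    "nvstreammux name=mux live-source=1 batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink " + src
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```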

You need to try your ONNX model with the corresponding TensorRT version to check whether the model is supported by TensorRT.
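A quick check on the target device is trtexec, which ships with TensorRT (e.g. /usr/src/tensorrt/bin/trtexec --onnx=model.onnx). The same check can be scripted with the TensorRT Python API; this is a minimal sketch, assuming the TensorRT Python bindings are installed and `model.onnx` is your model file:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# On TensorRT 10.x (as shipped with DeepStream 7.x) networks are
# explicit-batch by default; on TensorRT 8.x pass
# 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) instead of 0.
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    ok = parser.parse(f.read())

if not ok:
    # Print every operator/layer the parser rejected.
    for i in range(parser.num_errors):
        print(parser.get_error(i))
else:
    print("ONNX model parsed; TensorRT should be able to build an engine.")
```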

If you only care about model optimization, please raise a topic in the TensorRT forum. For DeepStream pipeline optimization, you may refer to Troubleshooting — DeepStream documentation.

It is better to read the DeepStream documentation first: Welcome to the DeepStream Documentation — DeepStream 6.0.1 Release documentation.