Jetson TX2 NX combined with YOLOV7 and DeepStream

Hello, I’m kinda new to computer vision. I’ve started a personal project, described below.

I have a Jetson TX2 NX with a camera plugged into it. My YOLOv7 model is already trained and runs at a really decent FPS on my computer.

My goals are: 1) to perform object detection in real time with YOLOv7.
2) Let the operator watching the screen (on a computer) in real time pick a single one of the detected objects to track. It then becomes a single-object tracking task.

At first I thought about using YOLOv7 and OpenCV in Python, but after some research I found out that converting the YOLOv7 model to ONNX and then to TensorRT was the best option for fast inference. I also discovered the DeepStream SDK, which includes things like VPI, and found a repo called DeepStream-YOLO, but I’m not sure how to use it.
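For what it’s worth, the conversion path I ended up understanding looks roughly like this: export the trained .pt weights to ONNX with the export script from the yolov7 repo, then build the TensorRT engine on the Jetson itself with trtexec, since engines are tied to the GPU they are built on. A rough sketch, assuming default file names; the exact export flags may differ between yolov7 and TensorRT versions:

```shell
# Export the trained PyTorch weights to ONNX.
# export.py is the script shipped in the WongKinYiu/yolov7 repo.
python export.py --weights yolov7.pt --grid --simplify --img-size 640 640

# On the Jetson TX2 NX itself (TensorRT engines are not portable
# across GPUs), build an FP16 engine with trtexec from JetPack:
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov7.onnx \
    --saveEngine=yolov7_fp16.engine \
    --fp16
```

The resulting .engine file is what a DeepStream nvinfer config (or a standalone TensorRT runtime) would then load.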

Some new questions then came up: - Can I implement goal 2) without OpenCV, using only tools like VPI through the DeepStream SDK?

- Since I can code in both Python and C++, which is the better option for the real-time object tracking task?

- I know I can run inference with TensorRT on the Jetson TX2 NX, but to what extent can I also use the DeepStream SDK? Will I encounter any restrictions compared to using it on a regular desktop computer?

Thank you in advance for any help or information, as I’m just getting started in this field.


We’d prefer C++ APIs for better performance.

Please refer to Welcome to the DeepStream Documentation — DeepStream 6.1 Release documentation

From DeepStream APIs point of view, there is no difference.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.