TensorRT inferencing

Hello,

I am trying to do object detection using TensorRT, and I am a newbie at this. I have previously used OpenCV with TensorRT to achieve this, but I observed some frame loss as well as latency. Because of this, I want to use jetson-utils instead. Can anyone point me to the proper documentation for it?

Hi,

Reminder: can I expect a response as early as possible?

Regards,
Ranjitha

Hi @ranjitha, you can find documentation for jetson-utils here: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

The documentation I am looking for covers the integration of jetson-utils with TensorRT inferencing.

Reminder: please let me know whether my approach is right or wrong.

Hi @ranjitha, thank you for your patience. You can find the documentation here: https://github.com/dusty-nv/jetson-inference#api-reference

Note that I have only tested this on Jetson, and for officially supported low-latency video streaming I would recommend looking at DeepStream or the Jetson Multimedia API.
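For reference, the typical way jetson-utils (video capture and display) is combined with a TensorRT-backed network from jetson-inference looks roughly like the sketch below. This is a minimal example, assuming the jetson-inference Python bindings are installed on the Jetson and a CSI camera is attached; the model name, camera URI, and threshold are just placeholder values:

```python
# Minimal sketch: jetson-utils video I/O feeding a jetson-inference
# detectNet (which runs TensorRT under the hood). Runs only on a
# Jetson with the jetson-inference bindings installed.
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

# "ssd-mobilenet-v2", "csi://0", and the threshold are example values;
# substitute your own model and camera/stream URI.
net = detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = videoSource("csi://0")       # e.g. CSI camera; could be /dev/video0 or an RTSP URI
display = videoOutput("display://0")  # render to the local display

while display.IsStreaming():
    img = camera.Capture()
    if img is None:  # capture timeout, try again
        continue
    detections = net.Detect(img)      # TensorRT inference + overlay on the image
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))
```

Because jetson-utils keeps frames in GPU/CUDA memory end to end, this path avoids the CPU copies that typically cause the frame loss and latency seen with an OpenCV capture loop.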

Sorry if this question seems silly. Can we integrate a camera with the DeepStream SDK, and if so, how can I achieve that easily? I have also tried the DeepStream-Yolo sample application by following the procedure given here: DeepStream on NVIDIA Jetson - Ultralytics YOLO Docs. But I observed latency in the video; why would that be happening?

Hardware decoding incurs some latency, in my experience.

Doesn’t DeepStream Python support JetPack 5.1.2?