DeepStream OR TensorRT OR CUDNN

Hi guys,
We have a YOLOv5 network for car detection. Can we run this network with the 04_video_dec_trt sample in the Jetson Multimedia API?
Is it better to use TensorRT instead of DeepStream? How about cuDNN? Is it better to implement the network with cuDNN and TensorRT instead of DeepStream?
Is there any sample code that implements YOLOv5 with cuDNN or TensorRT on Jetson?
Thanks so much.

Hi,

DeepStream uses TensorRT as the inference backend.
If your input is a live stream, it's recommended to use the DeepStream SDK.

cuDNN is a layer-level API, so you would need to implement the network layer by layer yourself. It is simpler to convert the model with TensorRT.
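
For reference, converting a model with TensorRT usually just means parsing an ONNX export and serializing an engine. The sketch below is illustrative only: it assumes a yolov5s.onnx file (e.g. from the Ultralytics export script) and uses the TensorRT 8.x Python API (older releases set config.max_workspace_size instead of a memory-pool limit).

```python
# Illustrative sketch: build a TensorRT engine from a YOLOv5 ONNX export.
# File names are assumptions; adjust to your own model.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="yolov5s.onnx", engine_path="yolov5s.engine"):
    builder = trt.Builder(TRT_LOGGER)
    # YOLOv5 ONNX exports use an explicit batch dimension.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX file")

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GB workspace
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 is usually a big win on Jetson

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    build_engine()
```

The same conversion can also be done from the command line with trtexec (e.g. trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --fp16).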

There are some community tutorials for deploying YOLOv5 with DeepStream.
We also support YOLOv5 training in our TAO Toolkit:

Thanks.

Thanks for your reply. I know that DeepStream can be used to build and run a YOLOv5 TensorRT engine. However, I want to know if it is possible to use TensorRT directly and bypass DeepStream altogether. The motivation is to have lower-level control and possibly even better performance. I have the same question regarding cuDNN: is it possible to implement YOLOv5 in cuDNN? Any sample code is highly appreciated.
Thanks so much,

Hi,

DeepStream has optimized the whole pipeline, so performance is guaranteed.
If you want to handle the buffers on your own, below is a sample of a standalone TensorRT app:
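
The linked sample is not reproduced here; as a rough illustration only, standalone inference with a prebuilt engine could look like the sketch below. The engine file name and the random placeholder input are assumptions, the engine is assumed to have static shapes, and the binding-index calls follow the TensorRT 8.x Python API with pycuda for the buffers.

```python
# Illustrative sketch: deserialize a prebuilt YOLOv5 engine and run one inference,
# managing the host/device buffers manually. Decoding + NMS of the raw outputs are omitted.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("yolov5s.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pagelocked host buffers and device buffers for every binding.
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)  # assumes static shapes
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

stream = cuda.Stream()

# Copy the (preprocessed) input frame in; a random tensor stands in for a real image here.
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        host_bufs[i][:] = np.random.rand(host_bufs[i].size).astype(host_bufs[i].dtype)
        cuda.memcpy_htod_async(dev_bufs[i], host_bufs[i], stream)

context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)

# Copy the raw YOLOv5 predictions back to the host.
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()
```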

Implementing YOLO with cuDNN is much more complicated.
We don't have a sample for YOLOv5, but the YOLOv3 author has done this in the source below, which you can refer to:

Thanks.
