Is a YOLOX application a good use case for DeepStream?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): AGX Orin
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: latest in JetPack 5.0.2
• NVIDIA GPU Driver Version (valid for GPU only): latest in JetPack 5.0.2
• Issue Type (questions, new requirements, bugs): a question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)


We are developing an AI system that uses a camera on an AGX Orin as an edge device.
On the edge device, things work this way:

gstreamer pipeline: nvv4l2camerasrc device=/dev/video0 ! 'video/x-raw(memory:NVMM),format=UYVY,width=3840,height=2160,framerate=30/1' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1

Camera-captured images → GStreamer pipeline above → application code in user space.

The application code in user space does deep-learning perception on the BGR images the GStreamer appsink provides.

The deep-learning perception application is based on YOLOX. It recognizes and tracks traffic lights and pedestrians.

My guess is that our application is not doing its job in the best way. It is not aware of DeepStream at all. It uses the GPU, so it sends the BGR images back to kernel space so the GPU can do the required calculations. But the BGR images came from kernel space in the first place, so there are at least some unnecessary copies between kernel and user space.
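For reference, a DeepStream-style equivalent could keep the frames in NVMM device memory all the way from capture to inference, avoiding the appsink round-trip entirely. A rough sketch, assuming a DeepStream install on the Orin (the nvinfer config file and tracker library path are placeholders to adapt to your setup):

```shell
# Hypothetical sketch: frames stay in NVMM memory from capture through
# batching (nvstreammux), inference (nvinfer), tracking (nvtracker),
# and on-screen display (nvdsosd). config_infer.txt is a placeholder.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=1 width=3840 height=2160 ! \
  nvinfer config-file-path=config_infer.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
  nvdsosd ! nv3dsink \
  nvv4l2camerasrc device=/dev/video0 ! \
  'video/x-raw(memory:NVMM),format=UYVY,width=3840,height=2160,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0
```

Because every link negotiates `memory:NVMM` caps, buffers are passed between plugins by reference rather than copied through user space.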

I'd like to investigate whether DeepStream supports YOLOX and its models, and whether our team can use DeepStream instead of the current SW architecture.
I would also like some resources explaining this, so I can pass them along to our team.

If we decide to use DeepStream for our application, what do we have to do?
Do we need to implement a new GStreamer plugin?
Or can we just use NVIDIA's ready-made closed-source plugins?

Thanks in advance!

We already have DeepStream samples with our TAO YOLO models: NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream).

And there are also some third-party DeepStream samples with YOLO models: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums.


We already provide many samples with different YOLO models.


Yes. The Gst-nvinfer plugin can deploy many kinds of models with TensorRT and CUDA acceleration: Gst-nvinfer — DeepStream 6.2 Release documentation
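To give a concrete idea, a Gst-nvinfer configuration for a YOLOX ONNX export might look like the sketch below. The key names come from the Gst-nvinfer documentation; the file names, class count, and custom-parser symbols are placeholders, since YOLOX's outputs need a custom bounding-box parser library:

```ini
[property]
gpu-id=0
# ONNX model exported from YOLOX; TensorRT builds the engine on first run.
onnx-file=yolox_s.onnx
model-engine-file=yolox_s.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# e.g. traffic light and pedestrian classes (placeholder count)
num-detected-classes=2
# Custom output parser for YOLOX; names below are placeholders.
parse-bbox-func-name=NvDsInferParseCustomYolox
custom-lib-path=libnvdsinfer_custom_impl_yolox.so
```

This file is what the `config-file-path` property of the nvinfer element points at; no new GStreamer plugin is needed for this part, only the custom parser library.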

