Slow live video decoding performance

Hi,

I have tried the Jetson Nano and Jetson TX2, and both are very slow when decoding MJPEG video frames to NumPy (OpenCV) or CUDA (GStreamer). When we add the model's inference time to the processing time, we end up with 20-30 fps on the Nano and TX2 respectively. (Our cameras support 120 fps at the resolution we use, 720p.)
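
For reference, here is a minimal sketch of the kind of OpenCV + GStreamer capture path we have in mind. The device path, caps, and the `mjpeg=1` decoder property are assumptions and may differ per JetPack release (older releases use `nvjpegdec` instead of `nvv4l2decoder`):

```python
import cv2

# Hypothetical USB camera on /dev/video0 delivering MJPEG at 720p/120fps.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "image/jpeg, width=1280, height=720, framerate=120/1 ! "
    "nvv4l2decoder mjpeg=1 ! "                 # hardware JPEG decode on Jetson
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while True:
    ok, frame = cap.read()                     # frame arrives as a NumPy BGR array
    if not ok:
        break
    # ... run model inference on `frame` here ...
cap.release()
```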

We want to simulate an autonomous car, but the results we are getting are not fast enough to be usable. Is there any way to make this process faster in Python or other languages? And what is the situation on better boards such as Xavier?

Thank you for your support, have a good day.

Hi,
We suggest using the DeepStream SDK. Please take a look at
https://forums.developer.nvidia.com/t/announcing-developer-preview-for-deepstream-5-0/121619
Now the stable version is r32.3.1 + DeepStream SDK 4.0.2. You can install the packages through SDKManager and give it a try.

The latest release is r32.4.3 and we are going to release DeepStream SDK 5.0 GA with it soon.


Can I use the DeepStream SDK with my custom PyTorch models? Also, the SDK looks too complicated to me. I think NVIDIA's talented C/C++ programmers could make it as simple as cv2.VideoCapture; otherwise it cannot be used by a lot of people and the effort put into creating this powerful library may be wasted.

I have not been able to try it yet, and maybe it is simpler than it looks, but I would like to get this performance gain in an easy way, like the jetson-inference repo by Dustin. Can you please let me know if there is a tutorial for using DeepStream with custom models on Jetson via Python?
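
For context, the level of simplicity I am hoping for is roughly what the jetson-inference Python bindings expose today. This is only a sketch; the network name and camera URI below are placeholders:

```python
import jetson.inference
import jetson.utils

# Placeholder network and camera URI; swap in your own model and device.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("v4l2:///dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()          # frame captured with hardware decode, kept in CUDA memory
    detections = net.Detect(img)    # TensorRT-accelerated inference
    display.Render(img)
```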

Hi,
The DeepStream SDK is the optimal solution and we would suggest you give it a try. You can install it through SDKManager. The documentation is at

For running a PyTorch model in DeepStream 5.0, please take a look at

Python samples are in
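
In rough terms (a sketch, not the official sample), the usual path for a custom PyTorch model is to export it to ONNX and let DeepStream's nvinfer element build a TensorRT engine from that file. The model, input shape, and file names below are placeholders:

```python
import torch
import torchvision

# Placeholder model; replace with your own trained network.
model = torchvision.models.resnet18(pretrained=True).eval()

# Dummy input matching the resolution your pipeline will feed the network.
dummy = torch.randn(1, 3, 720, 1280)

# Export to ONNX; nvinfer can build a TensorRT engine from this file,
# referenced via the onnx-file property in its config.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
```

On the first run nvinfer serializes the generated TensorRT engine alongside the ONNX file, so subsequent launches skip the engine build.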