Video stutters on Xavier

When I run a CNN model, like Light-Head R-CNN for object detection, I also open a video file for display. The odd thing is that the video freezes for a while, then plays for a while. I've realized that the video freezes each time the network runs a forward pass.

Therefore, I guess the object detection network takes up too much GPU or CPU. But in fact the CPU is only at about 30% and the GPU doesn't reach 100%. The framework I use is PyTorch, which cannot cap its share of GPU usage.

I wonder where the problem is, whether there is a way to solve it, and whether there is any way to limit the GPU share. Any suggestions?

Thanks in advance!


Does the video also get stuck without the network running in the background?
Jetson has separate encoding/decoding hardware, so displaying a video shouldn't take much GPU.
Would you mind running that experiment first?


Without Light-Head R-CNN running, the video does not stutter. It seems that the network's forward pass takes up too much of the GPU, which affects the display of our video.

Is there any way to control the GPU, like setting an upper bound on its usage?


Limiting GPU usage is possible in TensorFlow but not in PyTorch.
However, PyTorch doesn't pre-allocate GPU memory/resources, so you can simply reduce the complexity of your task,
e.g. decrease the batch size.
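For reference, the TensorFlow-side limit mentioned above looks roughly like this (a sketch using the TF 1.x-style config API; the 0.3 fraction is an arbitrary example):

```python
import tensorflow as tf

# Cap this process at ~30% of the GPU memory (fraction is an arbitrary example).
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
sess = tf.compat.v1.Session(config=config)
```

Note that this caps GPU *memory*, not compute utilization, so by itself it would not prevent the kind of display contention described here.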

Another alternative is to set the CUDA stream priority.
I'm not sure if there is an API to control CUDA streams within PyTorch.

If there is, set the inference stream to a lower priority to unblock the display.
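For what it's worth, PyTorch does expose stream priorities through `torch.cuda.Stream(priority=...)`, where lower (more negative) numbers mean higher priority; on many GPUs only 0 (low, the default) and -1 (high) are available. Since 0 is already the lowest, one option is to keep inference on a default-priority stream and give display work a high-priority one. A minimal sketch, assuming a CUDA device is present (the tensor operations are just placeholders for real inference and rendering kernels):

```python
import torch

if torch.cuda.is_available():
    # Lower numbers mean higher priority; -1 is high, 0 (the default) is low.
    display_stream = torch.cuda.Stream(priority=-1)  # for display/rendering work
    infer_stream = torch.cuda.Stream(priority=0)     # for network inference

    x = torch.randn(1, 3, 224, 224, device="cuda")
    with torch.cuda.stream(infer_stream):
        y = x.relu()  # stands in for model(x)
    with torch.cuda.stream(display_stream):
        z = x + 1.0   # stands in for colorspace conversion / rendering kernels
    torch.cuda.synchronize()
```

Stream priority generally affects how *pending* kernels are scheduled; it does not preempt a long-running inference kernel that is already executing, which can leave some stutter with very large networks.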


Thanks for your reply!

The batch size is always 1. And I've noticed that when the network's forward pass is called, GPU utilization reaches nearly 100%, even if only for a short time. I think that's the key.

The second approach you mention is stream priority. As far as I understand, I should create a high-priority CUDA stream for displaying the video. By doing so, I can guarantee the resources needed for video display. Am I right?


An update on this question.

Recently, I've tried the DeepStream SDK. When running the YOLOv2 demo on Xavier, I find that the output video doesn't stutter, even though GPU utilization is almost 100%. It looks like the video display in the YOLOv2 demo is protected and isn't affected by GPU utilization.

I'm curious how the video display part of the YOLOv2 demo works. Is there any material on this topic? Thanks a lot.



You can get some information in our document:


Thanks for your reply.

I've recently dug deeper into DeepStream. After running the YOLOv3 demo, I observed that the video still stutters sometimes. It seems that Xavier is not well suited to displaying video while running a large neural network at the same time. Large networks like YOLOv3 and Faster R-CNN affect the rendering of the video display.

Are there any methods to solve this?



Sorry for the late update.

Do you use DeepStream for the display, or do you open the video with another application?
I ask because both inference and display use GPU resources.
If the GPU is occupied, a user-space application, which usually has lower priority, needs to wait for the resource.