Nvv4l2decoder memory usage

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): 1080Ti
• DeepStream Version: 6.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 450.51.06
• Issue Type (questions, new requirements, bugs): Question

I’m trying to decode several streams with GPU acceleration using rtspsrc location=<URL> latency=0 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! appsink. I’ve noticed that the first thread consumes around 300 MB of VRAM and each subsequent thread consumes around 140 MB. Is this normal behaviour? If so, why would the first instance consume so much more than the subsequent ones?
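For reference, here is a minimal sketch of how the multi-stream setup described above can be reproduced with the GStreamer Python bindings, so the per-stream VRAM growth can be watched in nvidia-smi as pipelines are added. RTSP_URL and NUM_STREAMS are placeholders, and max-buffers=1 drop=true on the appsink is added only so the sketch does not accumulate frames while nothing pulls samples; it is not part of the original pipeline.

```python
# Illustrative sketch only: launches N copies of the pipeline from this post.
# RTSP_URL and NUM_STREAMS are placeholders; max-buffers=1 drop=true keeps the
# appsink from queueing frames since this sketch never pulls samples from it.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

RTSP_URL = "rtsp://<URL>"   # replace with a real stream URL
NUM_STREAMS = 4             # number of parallel decode pipelines to launch

Gst.init(None)

PIPELINE_DESC = (
    "rtspsrc location={url} latency=0 ! queue ! rtph264depay ! h264parse "
    "! nvv4l2decoder ! nvvideoconvert ! appsink max-buffers=1 drop=true sync=false"
)

pipelines = []
for i in range(NUM_STREAMS):
    pipeline = Gst.parse_launch(PIPELINE_DESC.format(url=RTSP_URL))
    pipeline.set_state(Gst.State.PLAYING)
    pipelines.append(pipeline)
    print(f"started decode pipeline {i}")

# Keep decoding; compare VRAM reported by nvidia-smi as each pipeline comes up.
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    for pipeline in pipelines:
        pipeline.set_state(Gst.State.NULL)
```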

As far as I know, threads share the same address space.
How do you check that the first thread consumes around 300 MB of VRAM and each subsequent thread around 140 MB?

I’m only using nvidia-smi. Are there better tools for measuring VRAM usage? Also, is VRAM usage supposed to increase as FPS increases?
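One way to make the per-stream numbers easier to attribute with nvidia-smi alone is to sample the total used GPU memory before and after each pipeline is started and look at the deltas. A minimal sketch follows; the nvidia-smi query options used are standard, while the helper function name is just illustrative.

```python
# Minimal sketch: sample total used GPU memory via nvidia-smi and compare
# deltas as streams are added. gpu_memory_used_mib is an illustrative helper.
import subprocess

def gpu_memory_used_mib(gpu_index: int = 0) -> int:
    """Return used GPU memory in MiB as reported by nvidia-smi for one GPU."""
    out = subprocess.check_output([
        "nvidia-smi",
        f"--id={gpu_index}",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

# Example: record a baseline, start a pipeline, then sample again; the
# difference is the VRAM cost attributable to that stream.
baseline = gpu_memory_used_mib()
print(f"baseline GPU memory used: {baseline} MiB")
```

nvidia-smi can also report per-process usage, and NVML-based tools expose the same counters programmatically if finer-grained accounting is needed.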

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

What is the format and resolution of the video in your RTSP source? How do you measure the VRAM usage?
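If it helps to answer this, the source's codec, resolution, and framerate can be read programmatically. A small sketch using GstPbutils' Discoverer is below; RTSP_URL is a placeholder, not a value from this thread.

```python
# Illustrative sketch: probe an RTSP source's codec, resolution, and framerate
# with GstPbutils.Discoverer. RTSP_URL is a placeholder.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstPbutils", "1.0")
from gi.repository import Gst, GstPbutils

Gst.init(None)
RTSP_URL = "rtsp://<URL>"  # replace with the actual stream URL

discoverer = GstPbutils.Discoverer.new(10 * Gst.SECOND)
info = discoverer.discover_uri(RTSP_URL)
for stream in info.get_video_streams():
    print(stream.get_caps().to_string())
    print(f"{stream.get_width()}x{stream.get_height()} "
          f"@ {stream.get_framerate_num()}/{stream.get_framerate_denom()} fps")
```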
