Error: gst-resource-error-quark: Failed to allocate a buffer

Hardware: Jetson Xavier NX
JetPack: 4.6
DeepStream: 6.0

I connected two USB 2.0 cameras to the NX. They usually work fine, but sometimes the application crashes, with logs as below:

943 - Gst.MessageType.ERROR
Error: gst-resource-error-quark: Could not read from resource. (9): gstv4l2bufferpool.c(1040): gst_v4l2_buffer_pool_poll (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0:
poll error 1: Success (0)
2024-05-07 15:22:11.554 | DEBUG | server.pipeline:bus_call:943 - Gst.MessageType.ERROR
Error: gst-resource-error-quark: Failed to allocate a buffer (14): gstv4l2src.c(998): gst_v4l2src_create (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0
2024-05-07 15:22:11.556 | DEBUG | server.pipeline:bus_call:943 - Gst.MessageType.ERROR
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0:
streaming stopped, reason error (-5)
Meanwhile, I used Python OpenCV to play video from the cameras for long enough to verify that the camera hardware is OK. Could you give some advice on this?

By the way, for better support, please don’t move this topic to Xavier NX.

Which sample are you testing or referring to? What is the whole media pipeline? Please refer to this FAQ for how to connect a USB camera in DeepStream.

Thanks for the response.

1. I referred to several samples, including deepstream-imagedata-multistream, the USB camera sample, and the uribin sample.
2. Whole media pipeline: v4l2src-jpegparse-nvv4l2decoder-videoconvert-nvvideoconvert-capsfilter-nvstreammux-pgie-nvvideoconvert-nvdsosd-fakesink.
3. Yes, the issue happens sometimes; other times everything works fine.
4. By the way, I use a bash script in Docker to start the Python app when the computer boots.

  1. How long does the application run before crashing? To rule out a memory leak, could you use this FAQ to capture the HW & SW memory leak log? Please also use the Jetson Power GUI or jtop to monitor GPU and CPU utilization.
  2. To narrow down this issue: if you use a simplified pipeline, will the application still crash? For example, v4l2src-jpegparse-nvv4l2decoder-fakesink.
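On Jetson, jtop and tegrastats are what report GPU load; for a headless CPU-and-memory log over time, a minimal pure-Python sampler over /proc can be left running alongside the app. This is only an illustrative sketch (Linux-only, no GPU numbers), not a replacement for the FAQ's memory-leak script:

```python
import time

def read_cpu_times():
    # First line of /proc/stat: aggregate jiffies across all cores.
    with open("/proc/stat") as f:
        vals = [int(v) for v in f.readline().split()[1:]]
    return vals[3], sum(vals)  # (idle, total)

def mem_used_kib():
    # MemTotal - MemAvailable approximates memory actually in use.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])
    return info["MemTotal"] - info["MemAvailable"]

# Sample CPU busy percentage over a short window.
idle0, total0 = read_cpu_times()
time.sleep(0.5)
idle1, total1 = read_cpu_times()
busy_pct = 100.0 * (1.0 - (idle1 - idle0) / max(1, total1 - total0))
print("CPU busy: %.1f%%, mem used: %d KiB" % (busy_pct, mem_used_kib()))
```

Run it in a loop (e.g. once a minute, appending to a file) to see whether memory use grows steadily before each crash.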

1. From 7 AM to 10 AM, about 3 hours. Once the crash happens the first time, it happens again very soon, about 3 minutes after restarting. The machines are tested at the factory, which stops work at 6 PM, so I can't check hardware utilization until they start again at 7 AM tomorrow.
2. I don't think the simplified pipeline will crash, because it does nothing; anyway, I will check this tomorrow.

Thank you, again.

Now the app runs well. Per jtop: memory utilization 3 GB (of 8 GB total); one CPU core close to 100%, the other five at 60%–80% (six cores total); GPU utilization close to 100%.

python3 -p python3
cat: /sys/kernel/debug/nvmap/iovmm/clients: No such file or directory
PID: 9 09:51:01 Traceback (most recent call last):
  File "", line 158, in <module>
    if __name__ == '__main__': main()
  File "", line 154, in main
  File "", line 134, in NvMap
    sys.stdout.write("Total used hardware memory: " + total_iovmm.decode('utf-8') + "\t")
AttributeError: 'int' object has no attribute 'decode'
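The traceback shows the FAQ script calling `.decode()` on a value that arrives as an int rather than bytes. A minimal illustration of a defensive fix (the variable name and value here are hypothetical stand-ins for what the script parses):

```python
# Hypothetical value parsed from the nvmap memory log; in the failing
# run it is already an int, so .decode('utf-8') raises AttributeError.
total_iovmm = 848444

# Decode only when the value is actually bytes; otherwise stringify it.
if isinstance(total_iovmm, bytes):
    total_iovmm = total_iovmm.decode("utf-8")
line = "Total used hardware memory: " + str(total_iovmm) + "K"
print(line)  # -> Total used hardware memory: 848444K
```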

  1. CPU and GPU utilization are too high. Did you run other applications at the same time? Can you use “deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt” to reproduce this issue? Please refer to this topic for performance improvement.

  2. About the “vmm/clients: No such file or directory” error, please run with sudo. Here is my test on the same Jetson:

sudo python3 -p python3
PID: 6710   10:26:27    Total used hardware memory: 848444K     hardware memory: 475.8594 MiB           VmSize: 10697.8281 MiB  VmRSS: 1539.5352 MiB       RssFile: 466.0195 MiB   RssAnon: 1073.5156 MiB  lsof: 861


  1. I run just one application. I will try to reproduce the issue as you suggested and work on improving performance.

  2. I will try this to log memory tomorrow.

  3. Another clue: with the simplified pipeline, it crashed too, unfortunately. When I use the test1 sample to test the camera, it does not crash over a long run. I then changed my pipeline to match the one in the test1-usb sample; it still crashes, but clearly less often. The difference between my app's pipeline and the sample's is that I use two USB cameras while the sample uses only one. This testing consumed a lot of time, which is why I respond late; sorry for that.
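Since the crash recurs intermittently, one stopgap while the root cause is being isolated (not a fix for the underlying v4l2 error) is a watchdog that restarts the pipeline with backoff whenever the bus reports ERROR. A generic sketch, with `run_pipeline` as a hypothetical stand-in for whatever starts the GStreamer main loop and returns False on a bus error:

```python
import time

def run_with_restart(run_pipeline, max_restarts=5, backoff_s=2.0):
    """Call run_pipeline() until it exits cleanly; restart on failure.

    run_pipeline returns True on a clean EOS, False on a bus ERROR.
    Returns the number of restarts that were needed.
    """
    restarts = 0
    while not run_pipeline():
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError("pipeline failed %d times, giving up" % restarts)
        time.sleep(backoff_s * restarts)  # linear backoff between restarts
    return restarts

# Demo with a stand-in pipeline that fails twice, then succeeds.
attempts = {"n": 0}
def fake_pipeline():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(run_with_restart(fake_pipeline, backoff_s=0.01))  # -> 2
```

In a real app, `run_pipeline` would build the pipeline, run the GLib main loop, and return False when the bus_call handler sees Gst.MessageType.ERROR.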


Could you share the gst-launch pipeline? Why do you need to use videoconvert, which does not support hardware acceleration? Please refer to this pipeline in the FAQ above:

$ gst-launch-1.0 v4l2src device=/dev/video2 ! 'image/jpeg,  width=640, height=480, framerate=30/1' ! nvv4l2decoder ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0  nvstreammux name=mux width=1280 height=720 batch-size=1  ! fakesink

dstest_pgie_config.txt (1.3 KB) (12.3 KB)
I referred to the pipeline in the deepstream-test1-usbcam sample, in which videoconvert is used. I added a queue between streammux and pgie this morning; please check the attachments for details.

Now the app has run for about two hours (two computers, four USB cameras connected); fortunately no crash has happened yet. I will check your pipeline later.



In your pipeline, the width and height of streammux are different from those of the decoder output; is this right?
Also, there is no vosd in that pipeline, and I need to process detection results in the probe callback.

I used the decoder output directly instead of videoconvert, and added nvvideoconvert and vosd behind pgie; the crash still happened.

And the GPU utilization is almost the same.

Could you narrow down this issue? You can dump the media pipeline by this method.

streammux requires setting width and height. For performance, you can set them to the decoder’s output resolution, which can be read from the media pipeline dump.

The original DeepStream sample runs well. You can check whether the CPU/GPU utilization is high, and please simplify the code to find which new code causes the crash.


I got the dump files; please help check them too.

In the vosd probe callback, I save one image every two frames; maybe that is why the CPU and GPU utilization are so high. I will remove the relevant code and test.
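If the per-frame image save in the probe callback turns out to be the load source, a common mitigation is to hand frames to a background writer thread so the probe returns immediately instead of blocking on disk I/O. A minimal sketch under that assumption (the `on_frame` helper is hypothetical, and the raw-bytes write stands in for the actual image encode):

```python
import os
import queue
import tempfile
import threading

save_q = queue.Queue(maxsize=8)

def saver_worker():
    # Drain the queue and do the disk I/O off the streaming thread.
    while True:
        item = save_q.get()
        if item is None:
            save_q.task_done()
            break
        path, data = item
        with open(path, "wb") as f:
            f.write(data)
        save_q.task_done()

threading.Thread(target=saver_worker, daemon=True).start()

def on_frame(frame_idx, frame_bytes, out_dir):
    # Called from the probe callback; never block the pipeline on I/O.
    if frame_idx % 2 == 0:  # save every second frame, as in the app
        name = "frame_%06d.raw" % frame_idx
        try:
            save_q.put_nowait((os.path.join(out_dir, name), frame_bytes))
        except queue.Full:
            pass  # drop the snapshot rather than stall streaming

# Demo
out = tempfile.mkdtemp()
on_frame(0, b"\x00" * 16, out)
save_q.join()
print(sorted(os.listdir(out)))  # -> ['frame_000000.raw']
```

Bounding the queue and dropping on overflow keeps a slow disk from backing pressure up into the GStreamer pipeline.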

Thanks. (5.5 KB) (3.0 KB) (23.7 KB)