I connected two USB 2.0 cameras to a Jetson NX. Usually they work fine, but sometimes they crash; logs are below:
943 - Gst.MessageType.ERROR
Error: gst-resource-error-quark: Could not read from resource. (9): gstv4l2bufferpool.c(1040): gst_v4l2_buffer_pool_poll (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0:
poll error 1: Success (0)
2024-05-07 15:22:11.554 | DEBUG | server.pipeline:bus_call:943 - Gst.MessageType.ERROR
Error: gst-resource-error-quark: Failed to allocate a buffer (14): gstv4l2src.c(998): gst_v4l2src_create (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0
2024-05-07 15:22:11.556 | DEBUG | server.pipeline:bus_call:943 - Gst.MessageType.ERROR
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:source-bin-0:
streaming stopped, reason error (-5)
At the same time, I used Python OpenCV to play video from the cameras for long enough to verify that the camera hardware is OK. Could you give some advice on this?
By the way, for better support, please don't move this topic to Xavier NX.
Which sample are you testing or referring to? What is the whole media pipeline? Please refer to this FAQ for how to connect a USB camera in DeepStream.
1. Referring to many samples, including deepstream-imagedata-multistream, the USB camera sample, and the uribin sample.
2. Whole media pipeline: v4l2src-jpegparse-nvv4l2decoder-videoconvert-nvvideoconvert-capsfilter-streammux-pgie-nvvideoconvert-nvdsosd-fakesink.
3. Yes, the issue happens sometimes; other times they work fine.
4. By the way, I use a bash script in Docker to start the Python app when the computer boots.
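For reference, the pipeline in item 2 can be written as a single gst-launch-style description. This is only a sketch: the device path, caps, batch size, and nvinfer config file name are placeholder assumptions, and nvstreammux is declared first by name because it links through request pads (`mux.sink_0`).

```python
def build_pipeline_desc(device="/dev/video0", width=1280, height=720):
    """Assemble a gst-launch style description of the pipeline in item 2.

    All property values here are placeholders, not values from this thread.
    The string can be passed to gst-launch-1.0 (quote the caps in a shell)
    or to Gst.parse_launch() on the Jetson.
    """
    return (
        f"nvstreammux name=mux batch-size=1 width={width} height={height} "
        "! nvinfer config-file-path=dstest_pgie_config.txt "
        "! nvvideoconvert ! nvdsosd ! fakesink sync=false "
        f"v4l2src device={device} ! image/jpeg ! jpegparse ! nvv4l2decoder "
        "! videoconvert ! nvvideoconvert "
        "! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0"
    )

print(build_pipeline_desc())
```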
How long does the application run before it crashes? To rule out a memory leak, could you use this FAQ to capture the HW & SW memory leak log? Please also use the Jetson power GUI or jtop to monitor GPU and CPU utilization.
To narrow down this issue: does the application still crash with a simplified pipeline, for example v4l2src-jpegparse-nvv4l2decoder-fakesink?
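The simplified pipeline suggested here can be expressed the same way (a sketch; the device path and the image/jpeg caps between v4l2src and jpegparse are assumptions):

```python
def simplified_desc(device="/dev/video0"):
    # v4l2src -> jpegparse -> nvv4l2decoder -> fakesink, as suggested above;
    # run with: gst-launch-1.0 <description>   (or Gst.parse_launch)
    return (f"v4l2src device={device} ! image/jpeg ! jpegparse "
            "! nvv4l2decoder ! fakesink sync=false")

print(simplified_desc())
```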
1. From 7 AM to 10 AM, about 3 hours. Once the first crash happens, it recurs very soon, about 3 minutes after restarting. The machine is being tested at a factory that stops work at 6 PM, so I can't check hardware utilization until they start again at 7 AM tomorrow.
2. I suspect the simplified pipeline will not crash, because it does nothing; anyway, I will check this tomorrow.
Now the app runs well. Through jtop: memory utilization 3 GB (of 8 GB total), one CPU core close to 100%, the other five at 60%~80% (six cores total), GPU utilization close to 100%.
python3 nvmemstat.py -p python3
cat: /sys/kernel/debug/nvmap/iovmm/clients: No such file or directory
PID: 9 09:51:01 Traceback (most recent call last):
  File "nvmemstat.py", line 158, in <module>
    if __name__ == "__main__": main()
  File "nvmemstat.py", line 154, in main
    NvMap(program)
  File "nvmemstat.py", line 134, in NvMap
    sys.stdout.write("Total used hardware memory: " + total_iovmm.decode("utf-8") + "\t")
AttributeError: 'int' object has no attribute 'decode'
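The AttributeError looks like a side effect of the missing debugfs node: the script ends up holding an int where it expects bytes. A defensive sketch of the decode step (a hypothetical helper, not part of nvmemstat.py):

```python
def as_text(value):
    """Decode only when the value really is bytes; the traceback above shows
    total_iovmm being an int when /sys/kernel/debug is not readable."""
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return str(value)

print(as_text(b"123 MB"))
print(as_text(0))
```

The real fix for the missing node, as noted below, is running the script with sudo so /sys/kernel/debug is readable.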
CPU and GPU utilization are too high. Did you run other applications at the same time? Can you use "deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt" to reproduce this issue? Please refer to this topic for performance improvement.
About the "iovmm/clients: No such file or directory" error, please run with sudo. Here is my test on the same Jetson.
I run just one application. I will try to reproduce this issue as you suggested, and try to improve performance.
I will try this to log memory tomorrow.
Another clue: with the simplified pipeline it crashed too, damn. When I test a camera with the test1 sample it does not crash over a long period; when I change my app's pipeline to match the one in the test1-usb sample it still crashes, though noticeably less often. The difference between my app's pipeline and the sample's is that I use two USB cameras while the sample uses only one. This consumed a lot of time, which is why I'm responding late; sorry for that.
Could you share the gst-launch pipeline? Why do you need to use videoconvert, which does not support hardware acceleration? Please refer to the pipeline in the FAQ above.
dstest_pgie_config.txt (1.3 KB) pipeline.py.txt (12.3 KB)
I referred to the pipeline in the deepstream-test1-usbcam sample, in which videoconvert is used. I added a queue between streammux and pgie this morning; please check the attachment for details.
The app has now run for about two hours (two computers, four USB cameras connected); fortunately no crash has happened yet. I will check your pipeline later.
In your pipeline, the width and height of streammux are different from those of the decoder; is this right?
And there is no nvdsosd in the pipeline; I need to process detection results in the probe callback.
I used the decoder output instead of videoconvert, and added nvvideoconvert and nvdsosd after pgie; the crash still happened.
Could you narrow down this issue? You can dump the media pipeline by this method.
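Assuming the linked method is the standard GStreamer dot-dump mechanism, a minimal sketch of enabling it in a Python app:

```python
import os

# GST_DEBUG_DUMP_DOT_DIR must be set before Gst.init() is called; GStreamer
# then writes a .dot graph each time debug_bin_to_dot_file() is invoked.
dump_dir = "/tmp/gst-dumps"
os.makedirs(dump_dir, exist_ok=True)
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = dump_dir

# Later, once the pipeline has reached PLAYING (requires the gi/Gst bindings,
# so shown here as a comment):
#   Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "playing")
# Then render the graph with graphviz:
#   dot -Tpng /tmp/gst-dumps/playing.dot -o playing.png
print(os.environ["GST_DEBUG_DUMP_DOT_DIR"])
```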
streammux requires width and height to be set. For performance, you can set them to the decoder's output resolution, which can be obtained from the dumped media pipeline.
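A sketch of sizing nvstreammux to the decoder output (the resolution, source count, and frame rate here are placeholder assumptions; the property names are real nvstreammux properties):

```python
def streammux_props(dec_width, dec_height, num_sources=2):
    """Property values for nvstreammux matched to the decoder output."""
    return {
        "width": dec_width,             # match the decoder to avoid extra scaling
        "height": dec_height,
        "batch-size": num_sources,      # one buffer per camera in each batch
        "batched-push-timeout": 33000,  # microseconds; assumes ~30 fps sources
        "live-source": 1,               # USB cameras are live sources
    }

# Usage with a Gst element (on the Jetson):
#   for name, value in streammux_props(1280, 720).items():
#       streammux.set_property(name, value)
print(streammux_props(1280, 720))
```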
The original DeepStream sample runs well. Please check whether CPU/GPU utilization is high, and simplify your code to find which new code causes the crash.
In the nvdsosd probe callback, I save one image every two frames; maybe that is why CPU and GPU utilization are so high. I will remove that code and test.
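If the saving turns out to be the cause, one way to keep the probe cheap is to throttle the saves and move the disk write off the streaming thread. A sketch (function names, the save interval, and the queue size are illustrative, not from this thread; the cv2.imwrite call is shown only as a comment):

```python
import queue
import threading

# The probe only enqueues frames; a worker thread does the slow disk I/O,
# so the GStreamer streaming thread is never blocked by the write.
save_queue = queue.Queue(maxsize=8)

def saver_worker():
    while True:
        item = save_queue.get()
        if item is None:  # sentinel for shutdown
            break
        frame_number, image = item
        # cv2.imwrite(f"/tmp/frame_{frame_number}.jpg", image)  # slow I/O here
        save_queue.task_done()

threading.Thread(target=saver_worker, daemon=True).start()

def maybe_enqueue(frame_number, image, every_n=30):
    """Save every N-th frame instead of every 2nd; drop the frame if the
    queue is full rather than blocking the pipeline."""
    if frame_number % every_n != 0:
        return False
    try:
        save_queue.put_nowait((frame_number, image))
        return True
    except queue.Full:
        return False
```

Called from the probe with the frame number from the batch metadata, this bounds both the save rate and the memory held by pending frames.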