Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Nvidia Jetson Xavier NX
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing): Using deepstream-app sample application
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Hi,
We are currently developing a multi-source, multi-inference DeepStream pipeline that uses two CSI (IMX219) cameras and runs inference for both detection and segmentation.
Our top-level pipeline looks something like this:
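(Simplified textual sketch only; exact element properties and probes are omitted, and the converter/OSD/sink stages shown here are indicative, not exact:)

nvarguscamerasrc (sensor-id=0) + nvarguscamerasrc (sensor-id=1) -> nvstreammux -> nvinfer (detection) -> nvinfer (segmentation) -> nvvideoconvert -> nvdsosd -> sink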
We were previously on JetPack 4.6 and DeepStream 6.0, and the custom DeepStream pipeline worked fine with both cameras configured at 10 FPS.
Due to a change in requirements, we are now migrating to JetPack 5.1 with the latest DeepStream 6.2. With this setup we are facing an issue where the pipeline only works when the camera FPS is set to 30. Any FPS below that fails and throws the error below:
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:694 NvBufSurfaceFromFd Failed.
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:247 (propagating)
nvstreammux: Successfully handled EOS for source_id=0
To narrow down the issue and bypass our application code, we tried reproducing it with the sample deepstream-app using the source2_csi_usb_dec_infer_resnet_int8.txt configuration file. I updated the configuration file to use two CSI cameras at 1280x720 @ 10 FPS and a file sink.
You can find my configuration file here:
source2_csi_dec_infer_resnet_int8.txt (3.8 KB)
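For reference, the source and sink sections I modified look roughly like this (excerpt only; the full file is attached above, values other than the resolution/FPS follow the stock sample, and the output file name here is just illustrative):

[source0]
enable=1
# type=5 selects a CSI camera source on Jetson
type=5
camera-width=1280
camera-height=720
camera-fps-n=10
camera-fps-d=1
camera-csi-sensor-id=0

[source1]
enable=1
type=5
camera-width=1280
camera-height=720
camera-fps-n=10
camera-fps-d=1
camera-csi-sensor-id=1

[sink1]
enable=1
# type=3 writes the output to a file (H.264 in an MP4 container here)
type=3
container=1
codec=1
output-file=out.mp4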
After running deepstream-app we were able to reproduce the exact same issue with the sample app. It shows the error below:
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:694 NvBufSurfaceFromFd Failed.
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:247 (propagating)
H264: Profile = 66, Level = 0
NVMEDIA: Need to set EMC bandwidth : 125333
NVMEDIA: Need to set EMC bandwidth : 125333
NVMEDIA_ENC: bBlitMode is set to TRUE
0:00:08.063552005 110689 0xfffefc029300 WARN v4l2bufferpool gstv4l2bufferpool.c:1533:gst_v4l2_buffer_pool_dqbuf:<sink_sub_bin_encoder1:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:08.066936093 110689 0xfffefc029300 FIXME basesink gstbasesink.c:3246:gst_base_sink_default_event:<sink_sub_bin_sink1> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:08.067157727 110689 0xfffefc029300 WARN qtmux gstqtmux.c:2981:gst_qt_mux_start_file:<sink_sub_bin_mux1> Robust muxing requires reserved-moov-update-period to be set
nvstreammux: Successfully handled EOS for source_id=0
**PERF: 0.00 (0.00) 9.86 (9.74)
**PERF: 0.00 (0.00) 10.01 (9.91)
Note: Both our application with the custom DeepStream pipeline and the sample deepstream-app work when we set the FPS to 30.
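In case it helps with isolation, the next step we plan to try is a minimal capture-only pipeline along these lines (just a sketch, not yet verified), to check whether the 10 FPS failure comes from nvarguscamerasrc itself or from the DeepStream elements downstream:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=10/1' ! fakesink -v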
I am looking forward to any valuable input in this direction.
Thank you in advance!