Camera input and FPS issue with deepstream-app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson AGX Orin
• DeepStream Version : 6.2
• JetPack Version (valid for Jetson only) : 5.1
• TensorRT Version : 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only) : Jetson AGX Orin
• Issue Type( questions, new requirements, bugs) : Questions

I’m using a camera with the MJPG 1920x1080@120 format for inference, but the frame rate is only about 100. However, when I use two cameras with the 1920x1080@60 format, the frame rate can reach 120 (60 + 60).
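For context, camera sources in deepstream-app are configured through a [sourceN] group; a minimal sketch of the relevant keys for a 1080p@120 V4L2 camera (the device node index is an assumption, and whether the source bin actually negotiates MJPG at this rate depends on the camera and driver):

```
[source0]
enable=1
# Type 1 = CameraV4L2
type=1
camera-width=1920
camera-height=1080
# Request 120/1 fps from the camera
camera-fps-n=120
camera-fps-d=1
# Index 0 = /dev/video0 (assumed)
camera-v4l2-dev-node=0
```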

Also, when I use two cameras with the 1920x1080@120 format, a USB bandwidth saturation error occurs.

ERROR from src_elem: Failed to allocate required memory.
** INFO: <bus_callback:182>: usb bandwidth might be saturated

Which demo did you run? Could you attach more log information with GST_DEBUG=3?

The sample I use is deepstream-app. More information is in the attached file.
cant_start_2cam_with_120p.log (4.7 KB)

If you want to test the performance, you can use fakesink. Also, you should boost the clocks.
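As a sketch, the clocks can be maxed with `sudo jetson_clocks`, and the rendering sink can be switched to fakesink in the deepstream-app config so display overhead does not distort the measurement:

```
[sink0]
enable=1
# Type 1 = FakeSink (discards frames, no rendering cost)
type=1
sync=0
```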
About the error, could you attach a video dumped from your 120fps camera?

0:00:04.668064832  8680 0xaaaafd286860 WARN          v4l2bufferpool gstv4l2bufferpool.c:809:gst_v4l2_buffer_pool_start:<src_elem:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:04.674386578  8680 0xaaaaf137fb60 WARN          v4l2bufferpool gstv4l2bufferpool.c:809:gst_v4l2_buffer_pool_start:<src_elem:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:04.719229881  8680 0xaaaaf137fb60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:678:gst_v4l2_buffer_pool_streamon:<src_elem:pool:src> error with STREAMON 28 (No space left on device)
0:00:04.719266329  8680 0xaaaaf137fb60 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<src_elem:pool:src> start failed
0:00:04.719282649  8680 0xaaaaf137fb60 WARN                 v4l2src gstv4l2src.c:660:gst_v4l2src_decide_allocation:<src_elem> error: Failed to allocate required memory.

It looks like an error from the GStreamer native code in gstv4l2bufferpool.c. The STREAMON ioctl returns ENOSPC (“No space left on device”), which for V4L2 capture usually means the driver could not allocate the required resources rather than a full disk.

Could you provide instructions to dump video from the camera? Thanks.

Could you check if your camera provides a method to save recorded videos to files?

We do indeed use fakesink for testing.
Why do you need a video dumped from the 120 fps camera?
We have connected the camera to a PC over USB; it can provide a 120 fps MJPG stream.
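One possible way to dump the camera’s compressed stream to a file with stock GStreamer elements (device node, caps, and buffer count are assumptions; avimux is used here because it accepts MJPEG without re-encoding):

```shell
# Capture ~10 s (1200 frames at 120 fps) of MJPG from /dev/video0
gst-launch-1.0 -e v4l2src device=/dev/video0 num-buffers=1200 \
  ! "image/jpeg,width=1920,height=1080,framerate=120/1" \
  ! jpegparse ! avimux ! filesink location=cam_120fps.avi
```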

We want to use a file source to reproduce your problem in our environment, so if you can provide the video source from your camera, it will make our analysis easier. Could you attach your deepstream_app config file too?

Which format of dumped video do you need?

If you don’t want to make your source public, you can message it to me.
1. Either MP4 or TS format is OK.
2. Please attach your deepstream_app config file too.
3. From the log you attached before, it may be a memory issue on your device. Could you check the memory with jtop when you run the two 120 fps cameras?

Hi,
I have already messaged you the video and config file.
I have no AGX Orin device at hand right now.
However, I did not see out-of-memory in jtop when running with the two 120 fps cameras.

It works well with the two 120 fps videos, so it seems to be a USB bandwidth limitation. Because this is a hardware limitation, you can only solve it by reducing the resolution or fps of your cameras.

Hi,
On the Orin developer kit, the four type-A ports come from an embedded USB hub, so their bandwidth is shared. There are also two type-C ports. Please connect one camera to a type-A port and the other to either type-C port (through a type-C-to-A cable), and see if you can reach the target frame rate for both cameras in that setup.
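A quick way to confirm which ports share a hub is to list the USB topology from the board (requires the usbutils package):

```shell
# Devices listed under the same root hub share that hub's bandwidth
lsusb -t
```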

For the two-120-fps-cameras issue, we will try later.
I’d like to discuss the FPS issue of deepstream-app running with only one 120 fps camera.
We get about 100 FPS from the perf_measurement log.
Any comments or suggestions?

Let’s put aside the impact of RTSP first. Could you use the video you provided to test the fps?
Could you attach your config file too?

I have no AGX Orin device in my hand now.
Please check the config file attached.
config_deepstream_app_debug_1920.txt (3.8 KB)

I don’t have your model or your 120 fps camera, so I just ran /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/source2_1080p_dec_infer-resnet_demux_int8.txt, changed the config file’s source to your 120 fps video, and changed the sink to fakesink. We can get 400 fps on our Orin.
When you use two 60 fps videos as sources and set the nvstreammux batch-size to 2, nvstreammux batches 2 frames at once. Although you can see each path reach 60 fps, it is still 60 fps for the pipeline.
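As an illustration of the batching described above, the relevant deepstream-app fragment might look like this (the batched-push-timeout value is an assumption, chosen as roughly one 60 fps frame interval):

```
[streammux]
# Batch one frame from each of the two sources per push
batch-size=2
# ~1/60 s in microseconds: push a possibly incomplete batch after one frame interval
batched-push-timeout=16000
```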
About the performance:
1. You need to ensure that the camera can actually output 120 fps of video on your board.
2. You can remove the nvinfer plugin from your pipeline to check the fps.
3. You can also use trtexec --onnx=your_model.onnx to check the TensorRT performance first.
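For step 3, a sketch of the trtexec invocation on Jetson (the binary path is the stock JetPack location; your_model.onnx is the placeholder from above):

```shell
# Measure raw TensorRT throughput of the model, outside the DeepStream pipeline
/usr/src/tensorrt/bin/trtexec --onnx=your_model.onnx --fp16
```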

Hello,
Our application uses a USB cam at 1920x1080@120fps as input data, but in our tests, even with only one USB cam connected, the real frame rate cannot reach the camera’s rated 120 fps (it is ~100 fps).
We don’t know where the bottleneck is when the use case takes a real USB camera as input.
So we ran USB port read/write tests with U3/U2 flash disks; the results are not great, but they should afford the data throughput for 1920x1080@120fps.
=> We guessed 1920x1080@120fps should be OK, but it is not in fact.

| AI-Box | HP U3.1-256GB | SanDisk U3.2G1-64GB | SanDisk U3.1-64GB | SanDisk U2.0-16GB |
|---|---|---|---|---|
| Jetson Xavier NX | 880 / 328 | 800 / 360 | 1096 / 336 | 240 / 52 |
| Jetson AGX Orin | 960 / 400 | 113 / 110 | 1200 / 800 | 262 / 60 |

Data: Read / Write (Mbps)
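A rough back-of-the-envelope check of the required camera bandwidth (the ~10:1 MJPEG compression ratio is an assumption; real cameras vary):

```shell
# Uncompressed 1920x1080 YUY2 (2 bytes/pixel) at 120 fps, in Mbps
raw_mbps=$(( 1920 * 1080 * 2 * 120 * 8 / 1000000 ))
echo "uncompressed: ${raw_mbps} Mbps"
# With an assumed ~10:1 MJPEG compression ratio
mjpg_mbps=$(( raw_mbps / 10 ))
echo "MJPG (~10:1): ${mjpg_mbps} Mbps"
```

Under this assumption, one such stream (~398 Mbps) already approaches the 480 Mbps ceiling of a USB 2.0 link, and two of them can stress a shared hub, which is consistent with the saturation message above.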
Could you help point out which part is the problem? USB bandwidth? What is the max value? Camera combination? Is only one 2K USB camera supported, with two 2K cameras not supported?
Or is the FPS limited below 120 fps?
Thanks a lot; looking forward to your reply.

@yuweiw , please reply to this. Thanks.

Have you tried the methods above to verify whether model inference is the bottleneck?