Low camera frame rate

Hi

We set up a source with a 60 fps camera on DeepStream 5.0 with JetPack 4.4.
However, we get 30 fps most of the time after launching DeepStream.
The frame rate can reach 60 fps after DeepStream has been running for a few hours.

Set configuration:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0

source1_csi_60f.txt (3.7 KB)

The camera shows 60 fps when running the command below.
$ gst-launch-1.0 v4l2src device=/dev/video1 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

Thank you for any advice,

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi Kayccc,

Thank you for your support.

Here are my updates.
• Hardware Platform (Jetson / GPU) : Jetson AGX Xavier
• DeepStream Version : DeepStream 5.0
• JetPack Version (valid for Jetson only): JetPack 4.4
• TensorRT Version : TensorRT 7.1.3

• Issue Type( questions, new requirements, bugs) : low camera frame rate
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
a. Install DeepStream 5.0 via SDK Manager.
b. Modify source1_usb_dec_infer_resnet_int8.txt into source1_csi_60f.txt (see the first post).
c. Launch deepstream-app with a customized MIPI (CSI) camera (which supports 60 fps).
d. We get 30 fps most of the time after launching deepstream-app.
The frame rate can reach 60 fps after deepstream-app has been running for a few hours.

Thank you for any advice,

Hi,
Please try the command and see if it can achieve 60fps:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,framerate=60/1 ! videoconvert ! video/x-raw,format=NV12 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink -v

In the config file, the node is /dev/video0

camera-v4l2-dev-node=0

But it is /dev/video1 in the gst-launch-1.0 command. Probably it is a typo, but please make sure you open the same source.

Hi DaneLLL,

Sorry, it did not work.
The error message is below.

Did you intend to use nvoverlaysink?
The command works.
However, without an FPS message I cannot confirm it runs at 60 fps.

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw,width=1920,height=1080,format=(string)UYVY,framerate=60/1" ! nvvidconv ! "video/x-raw(memory:NVMM),width=640, height=320,format=(string)I420,framerate=60/1" ! nvvidconv ! nvoverlaysink sync=false -e

Error message:

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,framerate=60/1 ! nvvidconv ! video/x-raw,format=NV12 ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink -v
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0: sync = true
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)UYVY, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)UYVY, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/Gstnvvconv:nvvconv1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/Gstnvvconv:nvvconv1.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)UYVY, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)UYVY, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
Execution ended after 0:00:00.397101494
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
nvidia@nvidia-desktop:~$

Thank you,

Hi,
In DeepStream SDK, please use nvvideoconvert instead of nvvidconv. Are you able to run this pipeline:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,framerate=60/1 ! videoconvert ! video/x-raw,format=NV12 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink -v

Hi DaneLLL,

We tried your command but it still cannot reach 60 fps.

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,framerate=60/1 ! videoconvert ! video/x-raw,format=NV12 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink -v


/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 21, dropped: 1, fps: 40.34, drop rate: 1.92
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 42, dropped: 1, current: 40.18, average: 40.26
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 63, dropped: 1, current: 39.82, average: 40.11
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 83, dropped: 1, current: 39.97, average: 40.08
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 104, dropped: 1, current: 40.24, average: 40.11

David

Hi,
This is the pipeline linkage used in deepstream-app. If it cannot reach 60 fps here, deepstream-app may not achieve 60 fps either. Please execute sudo nvpmodel -m 0 and sudo jetson_clocks to run at maximum performance, and see if this helps.

If your source supports UYVY, you may try nvv4l2camerasrc and run the pipeline

$ gst-launch-1.0 nvv4l2camerasrc device=/dev/video1 bufapi-version=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1,format=UYVY' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink -v

Hi DaneLLL,

We are always running in MAXN mode with jetson_clocks enabled.

It can reach 60 fps with your command line, but I would like to run DeepStream with eight cameras.

It is very strange that when I run DeepStream with eight cameras for over 24 hours, the frame rate ramps up to 60 fps per channel (the original frame rate is ~30 fps). Why? Where should I check?

David
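For reference, an eight-camera variant of the config shown earlier would repeat the [sourceN] group per device and raise the streammux batch size, roughly along these lines (my sketch only; the device-node numbering and the timeout value are assumptions, not taken from this thread):

```ini
[source0]
enable=1
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0

; ...repeat as [source1]..[source7], with camera-v4l2-dev-node=1..7...

[streammux]
live-source=1
batch-size=8
; roughly one 60 fps frame interval (1000000/60 us) - an assumed value
batched-push-timeout=16000
```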

Hi,
The default setting of batched-push-timeout is for 30fps:

##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

Please set it to 20000 and give it a try.
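As a quick sanity check (my sketch, not from the original post): streammux may wait up to batched-push-timeout before pushing an incomplete batch, so the timeout should be on the order of one frame interval, i.e. 1000000/fps microseconds. That is why the 40000 us default matches 30 fps sources while a 60 fps source wants something closer to 16000-20000 us:

```shell
# Frame interval in microseconds for a given frame rate.
# batched-push-timeout should be on this order for the target rate;
# a much larger value can delay pushes of incomplete batches.
frame_interval_us() {
  echo $(( 1000000 / $1 ))
}

frame_interval_us 30   # 33333 us -> the 40000 default fits 30 fps
frame_interval_us 60   # 16666 us -> 20000 is a reasonable 60 fps value
```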

Hi DaneLLL,

I have modified it per your recommendation, but it did not help. Here is my config file.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1920
height=1080

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=1
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=20000

## Set muxer output width and height

width=1920
height=1080

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached

attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b8_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary.txt

[tracker]
enable=0

## For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively

tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[tests]
file-loop=0


Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

**PERF: FPS 0 (Avg)
**PERF: 30.50 (28.48)
**PERF: 31.77 (30.89)
**PERF: 31.91 (31.33)

David

Hi

We found DS5 gets a lower frame rate when running an OV5693 at 120 fps.
When the framerate is set to 120 in source1_csi_dec_infer_resnet_int8.txt,
DS5 only gets 60 fps.
Is this normal?

The log messages are attached.
msg_agx_jp44_ds5_60_only_p.txt (5.4 KB) msg_show_fps_only_120.txt (6.9 KB)

Thank you,

Hi,
Please apply the patch to deepstream-app and try again:

diff --git a/apps/deepstream/common/src/deepstream_source_bin.c b/apps/deepstream/common/src/deepstream_source_bin.c
index c8da5ef..21e91d2 100644
--- a/apps/deepstream/common/src/deepstream_source_bin.c
+++ b/apps/deepstream/common/src/deepstream_source_bin.c
@@ -81,7 +81,8 @@ create_camera_source_bin (NvDsSourceConfig * config, NvDsSrcBin * bin)
       break;
     case NV_DS_SOURCE_CAMERA_V4L2:
       bin->src_elem =
-          gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_V4L2, "src_elem");
+          gst_element_factory_make ("nvv4l2camerasrc", "src_elem");
+      g_object_set (G_OBJECT (bin->src_elem), "bufapi-version", TRUE, NULL);
       bin->cap_filter1 =
           gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter1");
       if (!bin->cap_filter1) {
@@ -137,6 +138,7 @@ create_camera_source_bin (NvDsSourceConfig * config, NvDsSrcBin * bin)
     gst_caps_set_features (caps, 0, feature);
     g_object_set (G_OBJECT (bin->cap_filter), "caps", caps, NULL);
 
+    gst_caps_set_features (caps1, 0, feature);
     g_object_set (G_OBJECT (bin->cap_filter1), "caps", caps1, NULL);
 
     nvvidconv2 = gst_element_factory_make (NVDS_ELEM_VIDEO_CONV, "nvvidconv2");

It uses nvv4l2camerasrc to eliminate the memory copy incurred by v4l2src, which should bring some performance improvement.

Hi DaneLLL,

I'm afraid the patch does not work.

Here are the error messages.

  1. Applied the whole patch:
    nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c source1_o2b.txt
    ** ERROR: <create_camera_source_bin:101>:

(deepstream-app:14189): GStreamer-CRITICAL **: 16:03:58.157: gst_caps_set_features: assertion 'IS_WRITABLE (caps)' failed
** ERROR: <create_camera_source_bin:178>: Failed to link 'src_elem' (video/x-raw(memory:NVMM), width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], format=(string){ UYVY }, interlace-mode=(string){ progressive, interlaced }, framerate=(fraction)[ 0/1, 2147483647/1 ]) and 'src_cap_filter1' (video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1)
** ERROR: <create_camera_source_bin:233>: create_camera_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed

  2. Applied only the first diff hunk:

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c source1_o2b.txt
** ERROR: <create_camera_source_bin:101>:

** ERROR: <create_camera_source_bin:178>: Failed to link 'src_elem' (video/x-raw(memory:NVMM), width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], format=(string){ UYVY }, interlace-mode=(string){ progressive, interlaced }, framerate=(fraction)[ 0/1, 2147483647/1 ]) and 'src_cap_filter1' (video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1)
** ERROR: <create_camera_source_bin:233>: create_camera_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:636: Failed to create pipeline
Quitting
App run failed

Thank you for any advice,

Hi,
We have verified the patch with an E-Con See3CAM CU135 and can launch it successfully. If your source supports UYVY, it should work fine. It is a bit strange that it fails.

Hi,
We just spotted a typo in the patch; the corrected line is:
+gst_caps_set_features (caps1, 0, feature);

Please try again.

Hi,
Converting UYVY to NV12 requires the VIC engine. With multiple sources, the VIC loading can be heavy. Please refer to the post on setting it to the maximum clock:

It should also bring some improvement.

Hi DaneLLL,

Thank you so much for your great support.
The patch works.

Thanks

Hi DaneLLL,

There is a side effect with your patch: the "DeepStream" window, which covers the "deepstream-app" window, is not transparent, making the result invisible when we set the sink type to EglSink in the config file.

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=2


It is normal if we set the sink type to Overlay. What do we need to modify?

-David

Hi,
We will try to reproduce the issue with EglSink.