Low camera frame rate

Hi DaneLLL,

I have modified it per your recommendation, but it did not help. Here is my config file.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1920
height=1080

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=1
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=20000

## Set muxer output width and height
width=1920
height=1080
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b8_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary.txt

[tracker]
enable=0

# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[tests]
file-loop=0


Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

**PERF: FPS 0 (Avg)
**PERF: 30.50 (28.48)
**PERF: 31.77 (30.89)
**PERF: 31.91 (31.33)

David

Hi

We found that DS5 gets a lower frame rate when running the OV5693 at 120 fps.
When the framerate is set to 120 in source1_csi_dec_infer_resnet_int8.txt,
DS5 only reaches 60 fps.
Is this normal?

Attached are the messages.
msg_agx_jp44_ds5_60_only_p.txt (5.4 KB) msg_show_fps_only_120.txt (6.9 KB)

Thank you,

Hi,
Please apply the patch to deepstream-app and try again:

diff --git a/apps/deepstream/common/src/deepstream_source_bin.c b/apps/deepstream/common/src/deepstream_source_bin.c
index c8da5ef..21e91d2 100644
--- a/apps/deepstream/common/src/deepstream_source_bin.c
+++ b/apps/deepstream/common/src/deepstream_source_bin.c
@@ -81,7 +81,8 @@ create_camera_source_bin (NvDsSourceConfig * config, NvDsSrcBin * bin)
       break;
     case NV_DS_SOURCE_CAMERA_V4L2:
       bin->src_elem =
-          gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_V4L2, "src_elem");
+          gst_element_factory_make ("nvv4l2camerasrc", "src_elem");
+      g_object_set (G_OBJECT (bin->src_elem), "bufapi-version", TRUE, NULL);
       bin->cap_filter1 =
           gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter1");
       if (!bin->cap_filter1) {
@@ -137,6 +138,7 @@ create_camera_source_bin (NvDsSourceConfig * config, NvDsSrcBin * bin)
     gst_caps_set_features (caps, 0, feature);
     g_object_set (G_OBJECT (bin->cap_filter), "caps", caps, NULL);
 
+    gst_caps_set_features (caps1, 0, feature);
     g_object_set (G_OBJECT (bin->cap_filter1), "caps", caps1, NULL);
 
     nvvidconv2 = gst_element_factory_make (NVDS_ELEM_VIDEO_CONV, "nvvidconv2");

The patch uses nvv4l2camerasrc to eliminate the memory copy incurred when using v4l2src. It should bring some performance improvement.
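For reference, here is a minimal standalone sketch of what the patched source path effectively builds (the camera node, resolution and framerate are assumptions, and this is not the deepstream-app code). It also illustrates why the second hunk matters: the caps going into src_cap_filter1 must carry the memory:NVMM feature, which is what gst_caps_set_features (caps1, 0, feature) adds.

/* nvv4l2_test.c - minimal sketch of the patched source path, not the actual
 * deepstream-app code. Build:
 *   gcc nvv4l2_test.c -o nvv4l2_test $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline    = gst_pipeline_new ("camera-test");
  GstElement *src         = gst_element_factory_make ("nvv4l2camerasrc", "src_elem");
  GstElement *cap_filter1 = gst_element_factory_make ("capsfilter", "src_cap_filter1");
  GstElement *sink        = gst_element_factory_make ("fakesink", "sink");

  /* bufapi-version=TRUE makes the source output buffers in the DeepStream
   * buffer API, as done in the patch. */
  g_object_set (G_OBJECT (src), "bufapi-version", TRUE, NULL);

  /* The caps must carry the memory:NVMM feature; without it the link to
   * nvv4l2camerasrc fails, since the element only outputs NVMM memory. */
  GstCaps *caps = gst_caps_from_string (
      "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1");
  g_object_set (G_OBJECT (cap_filter1), "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (pipeline), src, cap_filter1, sink, NULL);
  if (!gst_element_link_many (src, cap_filter1, sink, NULL)) {
    g_printerr ("Failed to link elements\n");
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_usleep (5 * G_USEC_PER_SEC);    /* capture for 5 seconds */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}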

Hi DaneLLL,

I'm afraid the patch does not work.

Here are the error messages.

  1. applied the whole patch
    nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c source1_o2b.txt
    ** ERROR: <create_camera_source_bin:101>:

(deepstream-app:14189): GStreamer-CRITICAL **: 16:03:58.157: gst_caps_set_features: assertion 'IS_WRITABLE (caps)' failed
** ERROR: <create_camera_source_bin:178>: Failed to link 'src_elem' (video/x-raw(memory:NVMM), width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], format=(string){ UYVY }, interlace-mode=(string){ progressive, interlaced }, framerate=(fraction)[ 0/1, 2147483647/1 ]) and 'src_cap_filter1' (video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1)
** ERROR: <create_camera_source_bin:233>: create_camera_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed

  2. applied the first diff hunk only

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c source1_o2b.txt
** ERROR: <create_camera_source_bin:101>:

** ERROR: <create_camera_source_bin:178>: Failed to link 'src_elem' (video/x-raw(memory:NVMM), width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], format=(string){ UYVY }, interlace-mode=(string){ progressive, interlaced }, framerate=(fraction)[ 0/1, 2147483647/1 ]) and 'src_cap_filter1' (video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1)
** ERROR: <create_camera_source_bin:233>: create_camera_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:636: Failed to create pipeline
Quitting
App run failed

Thank you for any advice,

Hi,
We have verified the patch with an E-Con See3CAM CU135 and can launch it successfully. If your source supports UYVY, it should work fine. It is a bit strange that it fails.

Hi,
We just spotted a typo in the patch and have fixed it:
+gst_caps_set_features (caps1, 0, feature);

Please try again.

Hi,
It requires the VIC engine for converting UYVY to NV12. With multiple sources, the load can be heavy. Please refer to the post on setting VIC to its max clock:


This should also bring some improvement.

Hi DaneLLL,

Thank you so much for your great support.
The patch works.

Thanks

Hi DaneLLL,

There is a side effect with your patch: the "DeepStream" window, which covers the "deepstream-app" window, is not transparent, making the result invisible when we set the sink type to EglSink in the config file.

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=2


It works normally if we set the sink type to Overlay. Where do we need to make a change?

-David

Hi,
We will try to reproduce the issue with EglSink.

Hi,
Could you try a gst-launch-1.0 command like the following:

$ gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 bufapi-version=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mx.sink_0 nvv4l2camerasrc device=/dev/video1 bufapi-version=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mx.sink_1 nvstreammux width=1920 height=1080 batch-size=2 live-source=1 name=mx ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt batch-size=2 ! nvvideoconvert ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=2 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0

The above pipeline has two sources. Please extend it to 8 sources and give it a try.

Hi DaneLLL,

It works with 8 sources but we still have some questions.

  1. How do we implement this in deepstream-app and report the frame rate per channel?
  2. We cannot switch to a single window when we click a specific tile on the display wall.

Thanks,
David

Hi,

Using nvv4l2camerasrc + nveglglessink in deepstream-app works for a single source, but with multiple sources it shows a fully black window, as you observed. We will need the internal teams' help for further investigation, which may take some time. For a quick solution, we suggest using gst_parse_launch() to launch the pipeline. You may refer to this sample of using the function:
What is maximum video encoding resolution in pixels?

To show a specific source, you can call g_object_set() to set the show-source property on nvmultistreamtiler:

  show-source         : ID of the source to be shown. If -1 all the sources will be tiled else only a single source will be scaled into the output buffer.
                        flags: readable, writable
                        Integer. Range: -1 - 2147483647 Default: -1
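
As a rough illustration, here is a minimal sketch that combines both suggestions: launching the pipeline from a string with gst_parse_launch() and switching the tiler to a single source with show-source. The pipeline string, the element name "tiler", and the single camera are assumptions for brevity; this is not the deepstream-app code.

/* show_source.c - minimal sketch, not the deepstream-app implementation.
 * Build: gcc show_source.c -o show_source $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *err = NULL;

  gst_init (&argc, &argv);

  /* Pipeline launched from a string; camera node and caps are assumptions. */
  GstElement *pipeline = gst_parse_launch (
      "nvv4l2camerasrc device=/dev/video0 bufapi-version=1 ! "
      "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1 ! "
      "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mx.sink_0 "
      "nvstreammux name=mx width=1920 height=1080 batch-size=1 live-source=1 ! "
      "nvmultistreamtiler name=tiler width=1920 height=1080 rows=1 columns=1 ! "
      "nvegltransform ! nveglglessink sync=0", &err);
  if (!pipeline) {
    g_printerr ("Failed to create pipeline: %s\n", err ? err->message : "unknown");
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Show only source 0; setting show-source back to -1 restores the tiled view. */
  GstElement *tiler = gst_bin_get_by_name (GST_BIN (pipeline), "tiler");
  g_object_set (G_OBJECT (tiler), "show-source", 0, NULL);
  gst_object_unref (tiler);

  g_usleep (10 * G_USEC_PER_SEC);   /* run for 10 seconds */

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

If you handle mouse clicks yourself, the same g_object_set() call can be wired to the click handler to mimic the tile-switching behaviour.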

Hi DaneLLL,

Is there any update on nvv4l2camerasrc + nveglglessink for multiple sources?

Or is there any sample of using gst_parse_launch() to launch a CSI stream?
When we use the launch string below with gst_parse_launch(), the mp4 file does not save any data.

<< "appsrc name=mysource ! "
<< "v4l2src device=/dev/video" << csi_num << " ! "
<< "video/x-raw,width="<< w <<",height="<< h <<"format=(string)UYVY ! "
<< "nvvidconv ! video/x-raw(memory:NVMM) ! omxh264enc ! h264parse ! qtmux ! "
<< "filesink location= c.mp4 ";

Thank you for any suggestion,

Hi,

It is tracked in our internal bug system; it is still under investigation and may take some time. On DS 5.0.0 or 5.0.1, you would need to use gst_parse_launch().

Please try the pipeline below and check whether you can see the video preview:

<< "nvv4l2camerasrc device=/dev/video" << csi_num << " ! "
<< "video/x-raw(memory:NVMM),width="<< w <<",height="<< h <<",format=(string)UYVY ! "
<< "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! "
<< "nvoverlaysink ";

Hi DaneLLL,

Is there any update on nvv4l2camerasrc + nveglglessink for multiple sources?

Thank you for providing the pipeline below; it works.

<< "nvv4l2camerasrc device=/dev/video" << csi_num << " ! "
<< "video/x-raw(memory:NVMM),width="<< w <<",height="<< h <<",format=(string)UYVY ! "
<< "nvvideoconvert ! video/x-raw(memory:NVMM)format=NV12 ! "
<< "nvoverlaysink ";


However, the "," before format might cause different result. Is it reasonable?

Thank you,

Having DeepStream support nvv4l2camerasrc + nveglglessink for multiple sources is a new feature to be implemented; the release schedule is under planning.

Hi,
It looks like a "," is missing in the pipeline. The correct string is ",format=(string)UYVY ! "

Hi

We just realized that the fully black display is the main problem with using nvv4l2camerasrc + nveglglessink in deepstream-app with multiple sources.

The black page also takes the event handling: clicking on it can still switch to a single window. However, we could not find a way to avoid the black page.

Attached is the log from running 8 channels, where the black page is shown, and from 2 channels, where it is not.

Thank you for any advice,
msg_ds5_show_8camera_error.txt (13.7 KB)

Hi,
The issue is under investigation. We suggest you use the gst-launch-1.0 command for now.

If you have to use deepstream-app, the working pipeline is v4l2src + nveglglessink. It involves a memory copy which may impact performance. We have already suggested running sudo nvpmodel -m 0, sudo jetson_clocks, and enabling VIC at its max clock. Even with that, per your test result it cannot achieve 8x 1080p60, so you may try 7 or 6 sources instead.