Monochrome sensor grey8 60fps performance issue

We are bringing up a monochrome sensor on a Jetson Nano.
How can we increase the performance for 1920x1204 GRAY8 @ 60 fps?
The sensor is streaming at 60 fps, but if we encode or display the output it does not run at 60 fps.

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)GRAY8,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink --verbose

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 33, dropped: 0, current: 64.48, average: 64.48
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 65, dropped: 0, current: 62.53, average: 63.51

./camera_v4l2_cuda -d /dev/video0 -s 1920x1204 -f GREY -r 60
pitch = 2048; it is using V4L2_MEMORY_MMAP rather than V4L2_MEMORY_DMABUF
Average FPS = 11.6913
It displays properly, but the frame rate is not 60 fps.
So even if I add support in nvv4l2camerasrc as well, the performance will not improve.

Observation: Raw2NvBuffer conversion takes ~45msec per frame

gst-launch-1.0 -v -e v4l2src device=/dev/video0 ! "video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1" ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1" ! nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=60fps_h264.mp4
Encoding and writing to a file also gives only ~24 fps.
Expected is 60 fps.

nvarguscamerasrc shows “No cameras available”.
How do we achieve 60 fps performance?

hello user167310,

that’s because nvarguscamerasrc does not support GRAY8.
also, camera_v4l2_cuda performs debayering and some memory copies, so it is expected not to reach the full frame rate.
the gst pipeline you’re using involves the nvvidconv video converter, which converts the format to I420.

so,
could you please try toggling the system configuration to performance mode?
you may use the nvpmodel GUI and switch to MaxN for testing.
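the same can also be done from the command line, for example (assuming mode 0 corresponds to MaxN, which is the default mapping on Nano):
$ sudo nvpmodel -m 0
$ sudo jetson_clocks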

you may also test with h265, for example, $ gst-launch-1.0 -e v4l2src num-buffers=300 ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1" ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=video0.mp4

Hi @JerryChang
Thank you for your reply.

root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# uname -a
Linux ubuntu 4.9.253-tegra #2 SMP PREEMPT Tue Jul 12 15:12:50 IST 2022 aarch64 aarch64 aarch64 GNU/Linux
root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# jetson_clocks --show
SOC family:tegra210  Machine:NVIDIA leopard Jetson Nano Developer Kit
Online CPUs: 0-3
cpu0: Online=1 Governor=schedutil MinFreq=1428000 MaxFreq=1428000 CurrentFreq=1428000 IdleStates: WFI=0 c7=0 
cpu1: Online=1 Governor=schedutil MinFreq=1428000 MaxFreq=1428000 CurrentFreq=1428000 IdleStates: WFI=0 c7=0 
cpu2: Online=1 Governor=schedutil MinFreq=1428000 MaxFreq=1428000 CurrentFreq=1428000 IdleStates: WFI=0 c7=0 
cpu3: Online=1 Governor=schedutil MinFreq=1428000 MaxFreq=1428000 CurrentFreq=1428000 IdleStates: WFI=0 c7=0 
GPU MinFreq=921600000 MaxFreq=921600000 CurrentFreq=921600000
EMC MinFreq=204000000 MaxFreq=1600000000 CurrentFreq=204000000 FreqOverride=1
Fan: PWM=80
NV Power Mode: MAXN
root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# dpkg-query --show nvidia-l4t-core
nvidia-l4t-core    32.7.2-20220420143418

gst-launch-1.0 -e v4l2src num-buffers=300 ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1" ! nvv4l2h265enc bitrate=8000000 maxperf-enable=true ! h265parse ! qtmux ! filesink location=video0.mp4

With the above command the frame rate is ~24 fps.

hello user167310,

may I also know what formats are reported by the v4l utility?
for example, $ v4l2-ctl -d /dev/video0 --list-formats-ext

root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# v4l2-ctl --device /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'GREY'
Name : 8-bit Greyscale
Size: Discrete 1920x1204
Interval: Discrete 0.017s (60.000 fps)

Hi JerryChang,
we measured the time taken for “Raw2NvBuffer” alone, and it is ~45 ms.

Hello,

Try changing your pixel format to RGGB using this v4l2-ctl:

v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=RGGB

Then try using nvarguscamerasrc but add saturation=0 to your pipeline:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=60/1" ! nvv4l2h265enc bitrate=8000000 maxperf-enable=true ! h265parse ! qtmux name=mux ! filesink location=video0.mp4 -e
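To confirm the format change actually took effect, you can read the current format back (a standard v4l2-ctl query, assuming the same /dev/video0 device):

v4l2-ctl -d /dev/video0 --get-fmt-video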

@frank.delacruz
Thank you for your input

root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# v4l2-ctl --set-fmt-video=width=1920,height=1204,pixelformat=RGGB

then ran the nvarguscamerasrc pipeline and checked the nvargus-daemon log:

root@ubuntu:/home/test/R32.7.2_4l_r8_60fp_images# journalctl -f -b -u nvargus-daemon
-- Logs begin at Sat 2022-07-16 10:30:06 IST. --
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: (NvOdmDevice) Error NotInitialized: hDev Table not initialized (in dvs/git/dirty/git-master_linux/camera-partner/imager/src/devices/V4L2SensorViCsi.cpp, function initialize(), line 97)
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: NvPclDriverInitializeData: Unable to initialize driver v4l2_sensor
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: NvPclInitializeDrivers: error: Failed to init camera sub module v4l2_sensor
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: NvPclStartPlatformDrivers: Failed to start module drivers
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: NvPclStateControllerOpen: Failed ImagerGUID 4. (error 0x3)
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: NvPclOpen: PCL Open Failed. Error: 0xf
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: SCF: Error BadParameter: Sensor could not be opened. (in src/services/capture/CaptureServiceDeviceSensor.cpp, function getSourceFromGuid(), line 593)
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: SCF: Error BadParameter:  (propagating from src/services/capture/CaptureService.cpp, function addSourceByGuid(), line 437)
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: SCF: Error BadParameter:  (propagating from src/api/CameraDriver.cpp, function addSourceByIndex(), line 305)
Jul 16 11:31:06 ubuntu nvargus-daemon[4868]: SCF: Error BadParameter:  (propagating from src/api/CameraDriver.cpp, function getSource(), line 471)

We are still getting “No cameras available”.

hello user167310,

the performance issue is due to the format conversion; the Nano series does not support gray formats through Argus.

Hello,

What camera are you using? Who is the manufacturer?

Also, could you try using the gstreamer pipeline I sent? I had exactly the same issue you are having, and I was able to use nvarguscamerasrc by changing the pixel format using v4l2-ctl. I changed the format from GREY to RGGB and was then able to use nvarguscamerasrc with format=NV12 in my caps.

@JerryChang

gst-launch-1.0 -v -e videotestsrc num-buffers=60 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1" ! nvv4l2h265enc bitrate=8000000 maxperf-enable=true ! queue ! h265parse ! qtmux ! filesink location=videotestsrc.mp4

It runs at 60 fps.
The time taken by Raw2NvBuffer is ~13 ms.

But when I run

gst-launch-1.0 -v -e v4l2src num-buffers=60 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1" ! nvv4l2h265enc bitrate=8000000 maxperf-enable=true ! queue ! h265parse ! qtmux ! filesink location=v4l2src.mp4

it runs at only 27 fps.
The time taken by Raw2NvBuffer is ~27 ms.

Both code flows are the same.
In both pipelines, all of the nvvidconv buffer configurations are identical:
inbuf_memtype = BUF_MEM_SW
outbuf_memtype = BUF_MEM_HW
Why is there such a big difference in the time taken?

Hi,
Please execute sudo nvpmodel -m 0 and sudo jetson_clocks, then check the frame rate of these commands:

gst-launch-1.0 -v -e videotestsrc num-buffers=600 is-live=1 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0
gst-launch-1.0 -v -e v4l2src num-buffers=600 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

@DaneLLL

Please find the results
# gst-launch-1.0 -v -e videotestsrc num-buffers=600 is-live=1 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 460, dropped: 0, current: 59.96, average: 60.19
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 491, dropped: 0, current: 60.04, average: 60.18
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 522, dropped: 0, current: 59.99, average: 60.17
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 552, dropped: 0, current: 59.98, average: 60.16
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 583, dropped: 0, current: 60.03, average: 60.15
# gst-launch-1.0 -v -e v4l2src num-buffers=600 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

result is

 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 315, dropped: 0, current: 40.29, average: 40.52
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 336, dropped: 0, current: 40.47, average: 40.52
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 357, dropped: 0, current: 40.49, average: 40.52
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 378, dropped: 0, current: 40.43, average: 40.51
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 399, dropped: 0, current: 40.49, average: 40.51
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 420, dropped: 0, current: 40.48, average: 40.51
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 441, dropped: 0, current: 40.49, average: 40.51
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 462, dropped: 0, current: 40.45, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 483, dropped: 0, current: 40.45, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 504, dropped: 0, current: 40.51, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 525, dropped: 0, current: 40.45, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 546, dropped: 0, current: 40.52, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 567, dropped: 0, current: 40.50, average: 40.50
 /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 588, dropped: 0, current: 40.48, average: 40.50 

v4l2src gives 40 fps
videotestsrc gives 60 fps

v4l2src_log_jul22_forum.txt (68.9 KB)
videotestsrc_log_jul22_forum.txt (67.9 KB)

Hi,
If videotestsrc can achieve 60fps, ideally v4l2src should achieve the same; it is a bit strange that the results differ. For GRAY8 input, Raw2NvBuffer() is required, and after executing $ sudo nvpmodel -m 0 and $ sudo jetson_clocks it runs at optimal throughput. There may not be room for improvement.

One more try is to run VIC at max clock. Please refer to:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL

@DaneLLL

Thank you for your suggestions

nvpmodel -m 0
jetson_clocks
echo on > /sys/devices/50000000.host1x/54340000.vic/power/control
echo userspace > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/governor
cat /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/available_frequencies
140800000 268800000 332800000 371200000 409600000 435200000 473600000 499200000 537600000 563200000 563200000 588800000 601600000 627200000
echo 627200000 > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/max_freq
echo 627200000 > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/userspace/set_freq
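To confirm the setting took effect, the current VIC frequency can be read back from the same devfreq node:
cat /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/cur_freq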

But there is still no improvement; v4l2src is at 40 fps.

Hi,
Please set io-mode=2 on v4l2src for a try. See if the sensor can run in this mode and whether there is an improvement.
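For example, based on the earlier test pipeline (io-mode=2 selects MMAP buffers in v4l2src):

gst-launch-1.0 -v -e v4l2src io-mode=2 num-buffers=600 ! queue ! 'video/x-raw,format=GRAY8,width=1920,height=1204,framerate=60/1' ! nvvidconv ! queue ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1204,framerate=60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0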

@DaneLLL
There is no improvement in performance if we set io-mode=2 on v4l2src.

We set the format to RGGB and ran the nvarguscamerasrc pipeline:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 ! "video/x-raw(memory:NVMM), width=1920, height=1204, format=NV12, framerate=60/1" ! nvv4l2h265enc bitrate=8000000 maxperf-enable=true ! h265parse ! qtmux name=mux ! filesink location=video0.mp4 -e

We are getting these errors:

nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...

We tried both discontinuous_clk settings, but the output is the same.
nvargus_logs.txt (1.8 KB)

discontinuous_clk = “yes”;
discontinuous_clk = “no”;

hello user167310,

isn’t your sensor actually grayscale? did you only modify the device tree to report the pixel format as RGGB?