An important bug in nvargus with tee/queue when capturing from multiple sensors?

Hello DaneLLL,
The result was the same, whether running two cameras in a single process or two single-camera pipelines in two processes.

I think nvarguscamerasrc is a great plugin that includes many features V4L2 doesn't have. However, this problem occurs when using multiple sensors, and the bug is easy to reproduce.

Running one camera in one process:
[screenshot]

Running two cameras in one process:
[screenshot]
Is VDDIN out of scope?

Hello DaneLLL,
I have a new finding. In bool StreamConsumer::threadExecute(GstNvArgusCameraSrc *src) (gstnvarguscamerasrc.cpp), I printed the frame counter and its timestamp. With only one camera it works well: a very steady 25 fps.
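The instrumentation looks roughly like this; a minimal sketch against the Argus EGLStream::IFrame interface that threadExecute() acquires (the helper name and surrounding details are illustrative, not the plugin's exact code):

#include <cstdio>
#include <cstdint>
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>

// Sketch: called from StreamConsumer::threadExecute() with the frame the
// loop has just acquired. A delta > 1 between counters means frames were
// dropped before the consumer picked them up.
static void logFrame(EGLStream::Frame *frame)
{
    EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
    if (!iFrame)
        return;
    static uint64_t prevNumber = 0;
    uint64_t number = iFrame->getNumber();   // monotonically increasing counter
    uint64_t time   = iFrame->getTime();     // capture timestamp
    printf("frame %llu (delta %llu) t=%llu\n",
           (unsigned long long)number,
           (unsigned long long)(number - prevNumber),
           (unsigned long long)time);
    prevNumber = number;
}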


But when using two cameras, the frame counter sometimes jumps by more than 25, and the elapsed time equals (frame counter interval) x 40 ms.


This means some frames are not picked up on time. Is there not enough internal memory for capture? Or does the consumer's delay block the producer for too long?

I ran some more detailed tests, printing the consumption time.
It is fast when capturing with a single camera:


But it often exceeds 40 ms when capturing with two cameras:

Hi,
We will try to reproduce the issue first. Will update.

Also, are you able to try JetPack 4.6 (r32.6.1)? Or do you have to stay on r32.4.4?

Hello DaneLLL,
We will stay on r32.4.4 this year. Can you give me a patch after you fix it?
More detailed bug location: a bug in the NvBufferTransform() call in gstnvarguscamerasrc.cpp.
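For context, NvBufferTransform() is the nvbuf_utils call that copies/converts one NVMM buffer into another on the VIC engine. A generic usage sketch (the parameters here are illustrative, not the plugin's exact settings):

#include "nvbuf_utils.h"

// Sketch: hardware-accelerated copy/convert between two NVMM buffers,
// identified by their dmabuf fds. Returns 0 on success.
int transform_copy(int src_dmabuf_fd, int dst_dmabuf_fd)
{
    NvBufferTransformParams params = {0};
    params.transform_flag   = NVBUFFER_TRANSFORM_FILTER;  // only the filter is set here
    params.transform_filter = NvBufferTransform_Filter_Smart;
    return NvBufferTransform(src_dmabuf_fd, dst_dmabuf_fd, &params);
}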

Hi,
Please run the command and check if you observe the issue:

gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! tee name=t ! \
  queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! tee name=t1 ! \
  queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t1. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t1. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t1. ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

We ran this on r32.6.1 / Xavier NX developer kit with two Raspberry Pi V2 cameras and see all sinks at 30 fps. Please give it a try.

Hello DaneLLL,
The command works well on JetPack 4.6 when run from a terminal. I want to run it from C code to further locate the problem.
Do you know how to set “video-sink=fakesink” in C code? I tried to set it to fakesink, but it still creates an autovideosink.
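For reference, a minimal C sketch of setting that property: create the fakesink yourself and hand it to fpsdisplaysink via g_object_set() before the pipeline starts (the videotestsrc pipeline around it is illustrative):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("test");
  GstElement *src      = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *fpssink  = gst_element_factory_make ("fpsdisplaysink", NULL);
  GstElement *fakesink = gst_element_factory_make ("fakesink", NULL);

  /* Assign the fakesink before the pipeline goes to PLAYING; if
   * "video-sink" is left unset, fpsdisplaysink creates an
   * autovideosink internally. */
  g_object_set (fpssink,
                "video-sink", fakesink,
                "text-overlay", FALSE,
                "sync", FALSE,
                NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, fpssink, NULL);
  gst_element_link (src, fpssink);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_usleep (5 * G_USEC_PER_SEC);   /* run for 5 seconds */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}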


Hello DaneLLL,
I have further located the bug. It has nothing to do with tee or queue; it is related to the nvvidconv plugin.
As shown in the figure below, it works well at 25 fps (4546 frames in 3 minutes) without nvvidconv:


When using two queues, each converting to 1920x1080 with nvvidconv, it drops to 23.6 fps (4250 frames in 3 minutes).

With the number of nvvidconv instances increased to 4, the framerate drops to 15.5 fps (2800 frames in 3 minutes).

Hello DaneLLL,
You can reproduce the bug with the following two commands in two terminals:
A:
gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=4000,height=3000' ! tee name=t ! \
  queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0
B:
gst-launch-1.0 -v nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),width=4000,height=3000' ! tee name=t ! \
  queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

Hi,
In your pipeline there is a memory copy in the nvvidconv plugin:

'video/x-raw(memory:NVMM),width=4000,height=3000' ! tee name=t ! queue ! nvvidconv ! 'video/x-raw,width=4000,height=3000'

It copies data from the NVMM buffer to a CPU buffer, so performance is dominated by the CPU cores. The resolution is above 4K, so it hits the limit of CPU capability. For an optimal solution we suggest keeping buffers in NVMM from source to sink. This eliminates the memory copy and gives optimal performance on Jetson platforms.
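For example, pipeline A can keep buffers in NVMM end to end (a sketch with two branches; the 1920x1080 scale just gives nvvidconv real work to do on the VIC engine):

gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=4000,height=3000' ! tee name=t ! \
  queue ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 \
  t. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0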

Hello DaneLLL,
Can NVIDIA provide patches to support resolutions above 4K? The 4000x3000 resolution is now commonly used in sensors (477/577/586 and so on).

Hi,
The issue is not about resolution; it hits the limit of CPU capability due to the memory copy. To get a CPU pointer to an NVMM buffer, we suggest using the NvBuffer APIs instead of copying the whole buffer. Please refer to this sample:
How to run RTP Camera in deepstream on Nano - #29 by DaneLLL

That way you can get the NvBuffer in appsink and call NvBufferMemMap() to get a CPU pointer.
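A minimal sketch of that approach, using the nvbuf_utils API from jetson_multimedia_api (the helper name is illustrative):

#include "nvbuf_utils.h"

// Sketch: map one plane of an NVMM dmabuf for CPU read access,
// without copying the whole buffer.
int read_plane(int dmabuf_fd, unsigned int plane)
{
    void *vaddr = NULL;
    if (NvBufferMemMap(dmabuf_fd, plane, NvBufferMem_Read, &vaddr) != 0)
        return -1;
    NvBufferMemSyncForCpu(dmabuf_fd, plane, &vaddr);  // make device writes CPU-visible
    /* ... read pixels through vaddr, honoring the plane pitch ... */
    NvBufferMemUnMap(dmabuf_fd, plane, &vaddr);
    return 0;
}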

Do you know how to copy the NVMM buffer to CPU memory using the dmabuf_fd?

I need the buffer to save a YUV file.

Hi,
Please refer to this code:

        char filename[256];

        // %03u fills in the frame number so each frame gets its own file.
        sprintf(filename, "output%03u.yuv", (unsigned)frameNumber);

        std::ofstream *outputFile = new std::ofstream(filename);
        // Dump each plane: Y, then UV (NV12) or U and V (YUV420).
        dump_dmabuf(dmabuf_fd, 0, outputFile);
        dump_dmabuf(dmabuf_fd, 1, outputFile);
        if (par.pixel_format == NvBufferColorFormat_YUV420)
        {
            dump_dmabuf(dmabuf_fd, 2, outputFile);
        }

        delete outputFile;

dump_dmabuf() is in

/usr/src/jetson_multimedia_api/samples/common/classes/NvUtils.cpp

Hi,
Thank you for your guidance. I tried it, but the saved YUV images are abnormal. There seems to be a problem with the pixels; the format after capture is NV12.


device1_.yuv (17.2 MB)

Hi,
The buffer is probably in block-linear layout. Please set this property of nvvidconv to false:

  bl-output           : Blocklinear output, applicable only for memory:NVMM NV12
                        format output buffer
                        flags: readable, writable
                        Boolean. Default: true

And try again.
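For example (a sketch; fakesink stands in for the appsink your application uses):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=4000,height=3000' ! \
  nvvidconv bl-output=false ! 'video/x-raw(memory:NVMM),format=NV12' ! fakesink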

Hi,
After setting “bl-output”, it reports an error:

PosixMemMap (48) failed
nvbuf_utils: NvBufferMemMap function failed… Exiting…
NvBufferMap failed

I noticed something else strange: the pitch is not equal to 4000. Does that have anything to do with it?
plane:0,width:4000,height:3000,pitch:4096,psize:12320768
plane:1,width:2000,height:1500,pitch:4096,psize:6160384
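(Side note on the pitch: 4096 is the hardware stride alignment of the buffer, so only the first bytes of each pitch-sized row are pixel data; any dump has to copy row by row, as in this sketch for one mapped pitch-linear plane, with all names illustrative:)

#include <cstddef>
#include <fstream>

// Sketch: write one mapped pitch-linear plane, copying only the useful
// bytes of each row and skipping the pitch padding. widthBytes is the
// row payload (for NV12's UV plane this is 2 x the chroma width).
static void dumpPlane(std::ofstream &out, const char *vaddr,
                      unsigned widthBytes, unsigned height, unsigned pitch)
{
    for (unsigned row = 0; row < height; ++row)
        out.write(vaddr + (size_t)row * pitch, widthBytes);
}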

Hello DaneLLL,
I can set “bl-output” successfully by commenting out the following code, and now I get normal YUV data.
[screenshot of the commented-out code]
But this creates a new problem: the framerate drops after setting nvvidconv “bl-output”. Is there any way to do the “bl-output” processing separately for the NVMM data received by appsink, rather than in nvvidconv?

A new topic has been created:
How to do "bl-output" for a single NVMM-buffer(NV12) which was been received by appsink?

Let’s continue the discussion in that topic.