How to do "bl-output" for a single NVMM buffer (NV12) that was received by appsink?

I can't get a valid NV12 buffer using the following code:
[image: code snippet attached as an image]


I do get valid data after setting nvvidconv's "bl-output" to false, but the framerate drops after setting it.
Is there a way to do "bl-output" for a single NVMM buffer (NV12) that was received by appsink?
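For reference, here is a minimal sketch of the kind of capture path in question, assuming an nvarguscamerasrc capture feeding appsink through nvvidconv with bl-output=false; the element names, caps, and callback body are illustrative assumptions, not the original code (which was attached as an image above):

    /* Minimal sketch (assumption): NVMM NV12 capture delivered to appsink.
     * With bl-output at its default, the NVMM surface stays block-linear and
     * cannot be read as plain pitch-linear NV12 in the callback. */
    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>

    static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data) {
      GstSample *sample = gst_app_sink_pull_sample(sink);
      if (!sample)
        return GST_FLOW_ERROR;

      GstBuffer *buf = gst_sample_get_buffer(sample);
      GstMapInfo map;
      if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
        /* With memory:NVMM caps, map.data refers to the hardware (NvBuffer)
         * surface rather than raw bytes; the surface is only pitch-linear NV12
         * when bl-output=false is set on the upstream nvvidconv, which is the
         * behaviour this question is about. */
        gst_buffer_unmap(buf, &map);
      }
      gst_sample_unref(sample);
      return GST_FLOW_OK;
    }

    int main(int argc, char **argv) {
      gst_init(&argc, &argv);
      GstElement *pipeline = gst_parse_launch(
          "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=4000,height=3000,format=NV12 ! "
          "nvvidconv bl-output=false ! video/x-raw(memory:NVMM),format=NV12 ! "
          "appsink name=sink emit-signals=true sync=false", NULL);
      GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
      g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), NULL);
      gst_element_set_state(pipeline, GST_STATE_PLAYING);
      GMainLoop *loop = g_main_loop_new(NULL, FALSE);
      g_main_loop_run(loop);
      return 0;
    }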

Hi,
Does it work for 3840x2160? The code looks fine. Please try a standard resolution.

Hello DaneLLL,
At 4000x3000 resolution, I can get a YUV file after setting "bl-output" to false, but the framerate drops after setting it (using NVMM). Setting this parameter seems to cause a CPU memory copy.

However, if I do not set the "bl-output" parameter, I cannot get normal YUV.

Is there a way to get normal NV12 data from NVMM without dropping the frame rate?

Hi,
This looks expected. For dumping YUV data, you would need to set bl-output=0 to get the data in pitch-linear layout.

There is a block-linear to pitch-linear conversion on the hardware converter. The resolution is high, so please run VIC at max clock:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL
And also run the CPU cores at max clock, since you write data to storage. If it still cannot achieve the performance requirement, it is very likely hitting a performance constraint. We suggest not writing every frame to storage. The optimal performance is to keep data in NVMM buffers from source to sink, without writing to another CPU buffer or to storage.

Even if I don't save data to storage in appsink, as soon as I set bl-output to "0", the frame rate goes down.


Can you test it with an IMX477/577 in 4000x3000 mode?

Hi,
From your description, the performance bottleneck looks to be in the hardware converter. Without setting bl-output, the plugin does not call NvBufferTransform() in nvvidconv and passes the buffer directly to the next element. If you see a performance drop, it means the NvBufferTransform() call caps the performance. For maximum throughput, disable DFS so that the VIC engine always runs at max clock.

The nvvidconv plugin is open source starting from JetPack 4.5. Although you use an earlier version, you may download the source code and take a look.
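For readers who only need selected frames in pitch-linear layout, here is a hedged sketch of doing the same block-linear to pitch-linear copy manually with the jetson_multimedia_api nvbuf_utils helpers. The function name, parameter choices, and error handling are assumptions based on the public nvbuf_utils.h header, not code from this thread, and the copy still runs on the same VIC engine discussed above:

    /* Sketch (assumption): convert one block-linear NVMM buffer (src_fd) into a
     * newly allocated pitch-linear NvBuffer so its planes can be read on the CPU.
     * This uses the same NvBufferTransform() path that nvvidconv takes when
     * bl-output=0, so it is subject to the same VIC throughput limits. */
    #include <string.h>
    #include <nvbuf_utils.h>

    int convert_to_pitch_linear(int src_fd, int width, int height, int *out_fd)
    {
        NvBufferCreateParams create_params;
        memset(&create_params, 0, sizeof(create_params));
        create_params.width = width;
        create_params.height = height;
        create_params.layout = NvBufferLayout_Pitch;        /* pitch-linear destination */
        create_params.colorFormat = NvBufferColorFormat_NV12;
        create_params.payloadType = NvBufferPayload_SurfArray;
        create_params.nvbuf_tag = NvBufferTag_NONE;

        if (NvBufferCreateEx(out_fd, &create_params) != 0)
            return -1;

        NvBufferTransformParams transform_params;
        memset(&transform_params, 0, sizeof(transform_params));
        transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
        transform_params.transform_filter = NvBufferTransform_Filter_Smart;

        /* Runs on the VIC hardware converter. */
        return NvBufferTransform(src_fd, *out_fd, &transform_params);
    }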

Thank you for your explanation.
Will this be resolved with a JetPack upgrade? Or is this the limit of the NX's hardware performance?

Hi,
This is a limitation of Xavier NX. On Xavier NX, the max frequency of VIC (the hardware converter) is 601.6 MHz. On Xavier, it is 1036.8 MHz.

Hello DaneLLL,
Can this be fixed on the Jetson AGX?

Hi,
We ran this command to compare performance between Xavier and Xavier NX:

$ gst-launch-1.0 -v videotestsrc num-buffers=1000 ! video/x-raw,width=1280,height=720,format=NV12 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=4000,height=3000' ! nvvidconv bl-output=0 ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

Xavier NX:

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 928, dropped: 0, current: 40.76, average: 39.53
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 949, dropped: 0, current: 40.00, average: 39.54
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 970, dropped: 0, current: 40.90, average: 39.57
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 991, dropped: 0, current: 40.91, average: 39.60

Xavier:

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 887, dropped: 0, current: 72.39, average: 72.90
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 924, dropped: 0, current: 72.32, average: 72.88
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 961, dropped: 0, current: 72.34, average: 72.85

So it looks like neither Xavier nor Xavier NX can achieve 4000x3000p25 to 4 appsinks.


Thank you for your support. I will try to find another approach.

Hello DaneLLL,
I have a new idea:
(1) In appsink's callback, instead of converting the NVMM buffer to CPU memory, I copy it to a new NVMM buffer.
(2) The NVMM buffer is then converted to YUV in a separate pipeline that does not involve capture.
Does this approach work? Is YUV conversion of the copied NVMM buffers still affected by NvBufferTransform()?

Hi,
Converting to another NVMM buffer also uses the hardware converter, so it does not reduce the load.

One possible enhancement is to modify nvarguscamerasrc. The code for allocating the NvBuffer is:

    input_params.width = self->width;
    input_params.height = self->height;
    input_params.layout = NvBufferLayout_BlockLinear;
    input_params.colorFormat = NvBufferColorFormat_NV12;
    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.nvbuf_tag = NvBufferTag_CAMERA;

Please modify NvBufferLayout_BlockLinear to NvBufferLayout_Pitch so that you don't need to link in the nvvidconv plugin to convert to a pitch-linear buffer.
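Assuming the allocation is changed to NvBufferLayout_Pitch as suggested, the appsink side could then read the NV12 planes directly. Below is a hedged sketch using ExtractFdFromNvBuffer(), NvBufferGetParams(), and NvBuffer2Raw() from nvbuf_utils.h; the helper function and its error handling are illustrative assumptions, not code from nvarguscamerasrc:

    /* Sketch (assumption): read NV12 planes from a pitch-linear NVMM buffer
     * delivered to appsink, without going through nvvidconv/NvBufferTransform(). */
    #include <gst/gst.h>
    #include <nvbuf_utils.h>

    static int dump_pitch_linear_nv12(GstBuffer *buf, unsigned char *plane_dst[])
    {
        GstMapInfo map;
        int dmabuf_fd = -1;
        NvBufferParams params;

        if (!gst_buffer_map(buf, &map, GST_MAP_READ))
            return -1;

        /* For memory:NVMM buffers the mapped data wraps an NvBuffer; extract its
         * dmabuf fd so the nvbuf_utils helpers can access the surface. */
        if (ExtractFdFromNvBuffer(map.data, &dmabuf_fd) != 0 ||
            NvBufferGetParams(dmabuf_fd, &params) != 0) {
            gst_buffer_unmap(buf, &map);
            return -1;
        }

        /* Copy each plane (Y, then interleaved UV for NV12) into caller-provided
         * CPU memory, sized per plane. This only yields readable NV12 because the
         * layout is NvBufferLayout_Pitch; a block-linear surface would come out
         * scrambled, which is the original symptom in this thread. */
        for (unsigned int i = 0; i < params.num_planes; i++)
            NvBuffer2Raw(dmabuf_fd, i, params.width[i], params.height[i], plane_dst[i]);

        gst_buffer_unmap(buf, &map);
        return 0;
    }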


Hello DaneLLL,
I solved the problem with your suggestion, but now a new problem has arisen.
nvjpegenc worked well when nvarguscamerasrc's layout was NvBufferLayout_BlockLinear, but it gets a segmentation fault after the layout is changed to NvBufferLayout_Pitch.
The test commands are as follows:

gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)4000, height=(int)3000, format=(string)NV12' ! nvjpegenc quality=100 idct-method=2 ! filesink location=test.jpg -e

Hi,
For clarity, please create a new topic for the JPEG encoding issue. Thanks.
