Image stride is not changing while using the multimedia application on Jetson TX2 NX

Hi all,

I am using a multimedia application for camera streaming. The camera format is as follows:
Width/Height : 1456/1088
Pixel Format : 'Y10 '
Field : None
Bytes per Line : 1472
I am capturing raw data using V4L2. The raw data is converted to NV12 and copied to an NvBuffer, and NvVideoEncoder encodes it to H.264. The NV12 conversion works properly, but the encoded data is not correct. The camera bytes per line is 1472, but the stride (plane.fmt.stride) still shows 1536. Can we change this stride value? How does the NvBuffer plane.fmt.stride know the camera stride?
I am getting an image as below.

Is it because of the wrong stride value?

Kindly support me in resolving this issue.
Thanks in advance

hello joseneethu75,

May I know which JetPack release version you're using?
You should be able to add the --set-ctrl flag and use the preferred_stride option to update the settings.
For example: $ v4l2-ctl -d /dev/video0 ... --set-ctrl preferred_stride=256 ...
thanks

Hi @JerryChang ,

Thank you for the response. I am using JetPack version 4.6.

hello joseneethu75,

The preferred_stride option should be available in JP-4.6; please try using that property to update the settings.

Hi @JerryChang ,

I changed the preferred stride, but plane.fmt.stride still shows 1536.

Hi @JerryChang ,

Could you please share your thoughts on how Jetson determines and sets the image stride? I tried the v4l2-ctl command as you suggested, but unfortunately the image stride does not change even though the camera bytes per line is 1472.
Kindly help me.

Thanks in advance

hello joseneethu75,

please refer to the VI driver and examine these two code sections for the stride settings,
for example,
$public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c

static void tegra_channel_update_format(...)
{
        ...
        chan->format.bytesperline = preferred_stride ?: bytesperline;
        dev_dbg(&chan->video->dev,
                        "%s: Resolution= %dx%d bytesperline=%d\n",
                        __func__, width, height, chan->format.bytesperline);
        ...
}

static void tegra_channel_fmt_align(...)
{
        ...
        min_bpl = bpl;
        max_bpl = rounddown(TEGRA_MAX_WIDTH, chan->stride_align);
        temp_bpl = roundup(*bytesperline, chan->stride_align);

        *bytesperline = clamp(temp_bpl, min_bpl, max_bpl);
        ...
}
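As a side note, the arithmetic those helpers perform can be sketched in plain C. The 64-byte value matches the VI driver alignment mentioned in this thread; the 256-byte value is an assumption inferred from the observed NvBuffer pitch of 1536, not something taken from the driver source:

```c
/* roundup() as used in tegra_channel_fmt_align(): round x up to the
 * next multiple of align (align must be non-zero) */
static unsigned int round_up_to(unsigned int x, unsigned int align)
{
    return ((x + align - 1) / align) * align;
}

/* For this sensor's 1456-byte packed row:
 *   round_up_to(1456, 64)  -> 1472  (VI driver stride, 64-byte alignment)
 *   round_up_to(1456, 256) -> 1536  (assumed NvBuffer pitch alignment)  */
```

This would explain why the two numbers differ: the V4L2 bytesperline and the NvBuffer pitch are aligned independently, to different boundaries.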

BTW,
you may also enable kernel dynamic debug logs for checking,
for example,
# cd /sys/kernel/debug/dynamic_debug/
# echo file channel.c +p > control
then, you should check the kernel logs ($ dmesg) to see the dev_dbg() messages.
thanks

Hi @JerryChang ,

I enabled the debug logs and noticed that while running the Jetson multimedia application, I get the print below.

video4linux video0: tegra_channel_update_format: Resolution= 1456x1088 bytesperline=1472

If I set the preferred stride to 256 and then run the application, I get the following kernel log:

video4linux video0: tegra_channel_update_format: Resolution= 1456x1088 bytesperline=256

But in both cases the NvBuffer plane property (plane.fmt.stride) shows 1536. How do the NvBuffer plane properties get updated?

hello joseneethu75,

could you please also share the pipeline you're using, for reference? thanks

Hi @JerryChang ,

Please see the pipeline I am using,

After each block I verified by writing the image data to a file. The image is proper (1472x1088) up to the "copying data to NvBuffer" block, but the image written in the callback handler function is not a proper image (I already shared the improper image). Also, as I mentioned previously, the NvBuffer plane format stride is 1536. I am not sure whether the stride is causing this issue. Kindly share your thoughts.

Hi,
For putting data into an NvBuffer, you need to consider data alignment. You can call Raw2NvBuffer() after converting Y10 to NV12.
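For reference, a minimal CPU-side sketch of such a Y10-to-NV12 conversion. This assumes the common case of one 16-bit container per Y10 pixel, and fills the chroma plane with the neutral value for a monochrome sensor; it is an illustration, not the exact conversion used in your application:

```c
#include <stdint.h>
#include <string.h>

/* Convert one packed Y10 frame (one 16-bit container per pixel) to NV12.
 * dst_y:  width x height luma plane
 * dst_uv: width x (height/2) interleaved chroma plane, set to neutral gray */
static void y10_to_nv12(const uint16_t *src, uint8_t *dst_y, uint8_t *dst_uv,
                        int width, int height)
{
    for (int i = 0; i < width * height; i++)
        dst_y[i] = (uint8_t)(src[i] >> 2);   /* drop the 2 LSBs: 10-bit -> 8-bit */

    memset(dst_uv, 128, (size_t)width * height / 2); /* neutral chroma */
}
```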

Hi @DaneLLL ,

Is there any sample code that shows copying raw data to an NvBuffer using Raw2NvBuffer()? I am using read_video_frame() from the sample code (samples/03_video_cuda_enc) to copy the raw data to the NvBuffer plane data in my application.

Hi,
For this use case, please refer to the samples:

  1. Capture frame data into a CUDA buffer:
     /usr/src/jetson_multimedia_api/samples/v4l2cuda

  2. Access NvBuffer through CUDA and do video encoding:
     /usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda
     TX2 Camera convert/encode using Multimedia API issue - #17 by DaneLLL

A possible solution is:

  1. Capture Y10 frame data into a CUDA buffer
  2. Allocate an NV12 NvBuffer and get its CUDA pointer
  3. Convert data from the Y10 CUDA buffer to the NV12 NvBuffer
  4. Send the NV12 NvBuffer to the encoder
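Whichever path you take, the copy into the NvBuffer must honor the destination pitch: a packed frame cannot go into a pitch-linear buffer with a single memcpy(), which would produce exactly the kind of skewed image shown earlier. A plain-C sketch of the per-row copy (in this thread the packed width is 1456 and the pitch is 1536; Raw2NvBuffer() is there to take care of this for you):

```c
#include <stdint.h>
#include <string.h>

/* Copy a packed plane (rows of `width` bytes back to back) into a
 * pitch-linear destination where each row starts `pitch` bytes apart. */
static void copy_packed_to_pitched(uint8_t *dst, size_t pitch,
                                   const uint8_t *src, size_t width,
                                   size_t height)
{
    for (size_t row = 0; row < height; row++)
        memcpy(dst + row * pitch, src + row * width, width);
}
```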

Hi @DaneLLL ,

Thank you for the response. I will check these sample codes. Meanwhile, could you please share your thoughts on how the NvBuffer plane format stride gets updated?

Hi,
Please check
Gstnvv4l2camerasrc with GRAY8 support - #16 by DaneLLL

NV12 has two planes, and you can call NvBufferGetParams() to get the pitch, width, and height of each plane.

Hi @DaneLLL ,

I checked NvBufferGetParams() to get the pitch, width, and height.
In my case width = 1456, height = 1088, and pitch = 1536. But in our device driver, #define RM_SURFACE_ALIGNMENT is set to 64 (in vi4_fops.c).

I referred to the sample codes. I have one doubt about calling Raw2NvBuffer(): once it is called, how can we feed the data to the encoder?

Hi,
For feeding an NvBuffer to the hardware encoder, please refer to the 01_video_encode sample, or this patch:
TX2 Camera convert/encode using Multimedia API issue - #17 by DaneLLL
