Encode from YUV444 10-bit MIPI/streaming input

I am trying to encode YUV444 10-bit MIPI input (using the Tegra API).

As you know, streaming input is generally in a packed format.

But Xavier does not support the YUV444 10-bit packed format.

So how do I encode YUV444 10-bit packed MIPI/streaming input?

An encoder usually exists to serve streaming.

Then how can I encode lossless 10-bit (like YUV444 10-bit) with Xavier?

Is there a method you recommend?

Hi,
The supported use-cases are demonstrated in the sample:

/usr/src/jetson_multimedia_api/samples/01_video_encode/

You can convert the YUV444 10-bit packed format to a supported format with a software converter, and then feed it into the hardware encoder.

I think you are referring to the NvTransform() method.

But I don’t know which NvBufSurfaceColorFormat to use for the input.

Can you explain what 10-bit packed corresponds to (in nvbufsurface.h or v4l2_nv_extensions.h)?

Hi,

No, the hardware converter does not support the YUV444 10-bit packed format. You would need to implement software code to convert YUV444 10-bit packed data to V4L2_PIX_FMT_NV24_10LE.
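A minimal sketch of such a software converter, under two assumptions that should be checked against the actual sensor and driver: the input uses MIPI CSI-2 RAW10-style packing (4 samples per 5 bytes, sample order Y, U, V per pixel), and the NV24 10-bit layout expects LSB-aligned values in 16-bit little-endian containers (a Y plane plus an interleaved UV plane). The function names are hypothetical, not part of any NVIDIA API:

```c
#include <stdint.h>
#include <stddef.h>

/* Unpack MIPI CSI-2 RAW10-style data: every 5 bytes hold 4 samples.
   Bytes 0..3 are the 8 MSBs of samples 0..3; byte 4 holds the 2 LSBs
   of each sample (sample 0 in bits 1:0, sample 1 in bits 3:2, ...).
   Assumed packing - verify against the sensor's datasheet. */
static void unpack_raw10(const uint8_t *src, uint16_t *dst, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i += 4) {
        const uint8_t *g = src + (i / 4) * 5;
        for (size_t k = 0; k < 4 && i + k < nsamples; ++k)
            dst[i + k] = (uint16_t)((g[k] << 2) | ((g[4] >> (2 * k)) & 0x3));
    }
}

/* Scatter interleaved Y,U,V samples into NV24-style semi-planar layout:
   a Y plane and an interleaved UV plane, one 16-bit word per sample.
   Bit alignment within the 16-bit container is an assumption - confirm
   whether the driver expects LSB- or MSB-aligned 10-bit values. */
static void yuv444_to_nv24_10(const uint16_t *yuv, uint16_t *y_plane,
                              uint16_t *uv_plane, size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i) {
        y_plane[i]          = yuv[3 * i + 0];
        uv_plane[2 * i + 0] = yuv[3 * i + 1]; /* U */
        uv_plane[2 * i + 1] = yuv[3 * i + 2]; /* V */
    }
}
```

The unpacked planes can then be copied into the encoder's output-plane buffers, respecting each plane's pitch.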

Thank you for your reply.

If this conversion is implemented in software, how can it keep up with 4K streaming?

And I’m curious about something more fundamental.

NVIDIA advertised YUV444 10-bit support, but isn’t the main use case for such a format streaming?

What is the intended purpose of the 444 10-bit support you implemented in the API?

Hi,
Please check section 2.7.2 in the module data sheet (available from NVIDIA Developer):

There is a statement:

Maximum throughput is half for YUV444 compared to YUV420

For maximum performance, the frame data has to be YUV420. Since YUV444 is double the size of YUV420, performance is expected to be lower. Moreover, the hardware engine does not support packed formats, so a software converter is required to convert the frame data to V4L2_PIX_FMT_NV24_10LE. This also reduces throughput.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.