TX2 tegra_multimedia_api encode/decode issue

Hi all:

I want to use tegra_multimedia_api to do H264 hardware encode/decode. Our target is "decode 4x 4K RTSP streams -> CV computing -> encode 4x 4K RTSP streams for output".

Toward this goal, we want to reserve the GPU for CV computing, so we would like to leverage the hardware codec for decoding and encoding.

But while testing the samples on the TX2, we found some spots in the result and don't know how to remove them. My questions are:

1. How do we use tegra_multimedia_api to enable hardware encode/decode for H264 with correct output?

2. From the docs, we know NVIDIA has gst-omx and tegra_multimedia_api. What is the difference, and which one performs better?

3. Could we achieve this goal on the TX2?

Hi,
We have listed the capability in the documentation.

If you run 4x 4Kp30, it exceeds the limit.

Hi DaneLLL:

To double-confirm: per the spec, I could run H264 decode 3840x2160@60fps plus H264 encode 3840x2160@30fps at the same time.
Is that correct?

2. "How do we use tegra_multimedia_api to enable hardware encode/decode for H264 with correct output?"

3. omx vs. tegra_multimedia_api: what is the difference, and which one performs better?

Hi,

Yes.

After flashing through sdkmanager, you will see samples in

/usr/src/tegra_multimedia_api

You may start with 00_video_decode and 01_video_encode

We support tegra_multimedia_api and gstreamer. They are two different software frameworks, and users can pick either one for application development. gstreamer is easy to use because there are existing plugins for various use cases; for example, if you need RTSP streaming, you can use rtspsrc and rtph264depay. tegra_multimedia_api is low level and may have slightly better performance, but you need to implement/integrate all functions yourself.
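For illustration, a receive-and-decode pipeline along those lines might look like the following. This is only a sketch: the rtsp:// URL is a placeholder, and omxh264dec/nvoverlaysink are the Jetson hardware decoder and display elements.

$ gst-launch-1.0 rtspsrc location=rtsp://<camera-ip>/stream ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink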

Hi DaneLLL,

Thank you for your support.

We tested the tegra_multimedia_api encode and decode functions and found some spots in the preview, and we don't know how to fix them.

Here are our test steps.

  1. Encode a YUV file to an H264 file with the command:
    nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ ./video_encode ~/tegra_multimedia_api/samples/01_video_encode/output_640_360_debug.yuv 640 360 H264 u1_yuv.h264 -hpt 1

  2. Decode the H264 file, which shows the unexpected spots:
    nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ ./video_decode H264 u1_yuv.h264

Are the encode/decode commands we used correct?
If not, how should they be used?

Please find the YUV file and H264 file attached.

Thank you for any advice,
output_640_360_debug.yuv.7z (857 KB)
u1_yuv.h264.7z (1.25 MB)

Hi,
My apologies for the late response. We will try to reproduce the issue with your attachments.

Hi DaneLLL,

Thank you for your support.
Please feel free to let us know if you need any further information to reproduce the issue.

Thank you,

Hi,
output_640_360_debug.yuv is in NV12 format. Please modify the format in 01_video_encode and try again.

    switch (ctx.profile)
    {
        case V4L2_MPEG_VIDEO_H265_PROFILE_MAIN10:
            ctx.raw_pixfmt = V4L2_PIX_FMT_P010M;
            break;
        case V4L2_MPEG_VIDEO_H265_PROFILE_MAIN:
        default:
            ctx.raw_pixfmt = V4L2_PIX_FMT_NV12M; /* changed to match the NV12 input */
    }
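
After editing, rebuilding the sample and re-running the encode command from your steps should pick up the change (assuming the sample's stock Makefile):

$ cd ~/tegra_multimedia_api/samples/01_video_encode
$ make
$ ./video_encode output_640_360_debug.yuv 640 360 H264 u1_yuv.h264 -hpt 1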

Hi DaneLLL,

Thank you for your great support.
I tried to build 01_video_encode from JetPack 4.3 and got the error below.
(01_video_encode in L4T R28.2.1 does not have raw_pixfmt.)

Linking: video_encode
/usr/bin/ld: cannot find -lv4l2
collect2: error: ld returned 1 exit status
Makefile:52: recipe for target 'video_encode' failed
make: *** [video_encode] Error 1

Thank you for any advice,

Hi,
Please install
$ sudo apt install libv4l-dev

Hi DaneLLL,

Thank you for your prompt support.
We changed the format to NV12 and got better results.

We are still checking with the customer about the change.

Thank you,

Hi DaneLLL:

I have a question about how to integrate the decode API into our app. The MMAPI sample video_decode_main.cpp reads all the data from a file and then decodes it at once, but I receive the H264 stream from our system and only want to decode it frame by frame. Do you have a good example for this case?

Hi @hobin0920
If the H264 stream uses reference frames=1, the decoder keeps one frame for decoding the next frames. It works like this:

Queue 1st frame in output plane
Queue 2nd frame in output plane
Receive 1st decoded frame in capture plane
Queue 3rd frame in output plane
Receive 2nd decoded frame in capture plane
...

So for receiving the first decoded frame, you have to queue two frames to the output plane.
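
As a rough sketch of that ordering with the plane API used in the samples (dec is the NvVideoDecoder from the sample context, and get_next_h264_frame() is a hypothetical stand-in for your own receive path):

// Queue the first two encoded frames on the decoder output plane.
for (uint32_t i = 0; i < 2; i++)
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));
    v4l2_buf.index = i;
    v4l2_buf.m.planes = planes;

    NvBuffer *buffer = dec->output_plane.getNthBuffer(i);
    get_next_h264_frame(buffer);                 // hypothetical: fill plane 0 with one encoded frame
    v4l2_buf.m.planes[0].bytesused = buffer->planes[0].bytesused;

    if (dec->output_plane.qBuffer(v4l2_buf, NULL) < 0)
        break;                                   // queueing failed
}
// Only after two frames are queued can the first decoded frame be
// dequeued from the capture plane, per the interleaving above.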

Hi DaneLLL:

Do you have an example showing how to use it? There are many parameters that need to be set, and I am still trying to figure out the process. If anything could help with this, please let me know.

Thank you.

Hi,
00_video_decode is the sample for demonstrating video decoding. Please take a look. For enabling low latency, please set --disable-dpb.
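
For example, with your earlier test file (assuming the sample's usage, options go between the format and the input file):

$ ./video_decode H264 --disable-dpb u1_yuv.h264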

Reference link:

Hi DaneLLL:

In the 00_video_decode example, the dec_capture_loop_fcn thread waits for the result of reading the file to update the resolution for the decoder. But my input is not a file: I get the H264 buffer from our machine, so I replaced the read-from-file buffer with my H264 frame buffer. However, many other parameters seem to depend on reading the file, besides resolution, crop, and so on.

Do you have any idea how to fill in these parameters without opening a file?

dec_capture_loop_fcn(void *arg)
{
    context_t *ctx = (context_t *) arg;
    NvVideoDecoder *dec = ctx->dec;
    struct v4l2_event ev;
    int ret;

    cout << "Starting decoder capture loop thread" << endl;
    // Need to wait for the first Resolution change event, so that
    // the decoder knows the stream resolution and can allocate appropriate
    // buffers when we call REQBUFS
    printf("line %d\r\n", __LINE__);
    do
    {
        printf("line %d\r\n", __LINE__);
        ret = dec->dqEvent(ev, 50000);
        printf("line %d\r\n", __LINE__);
        if (ret < 0)
        {
            if (errno == EAGAIN)
            {
                cerr <<
                    "Timed out waiting for first V4L2_EVENT_RESOLUTION_CHANGE"
                    << endl;
            }
            else
            {
                cerr << "Error in dequeueing decoder event" << endl;
            }
            abort(ctx);
            break;
        }
    }
    while ((ev.type != V4L2_EVENT_RESOLUTION_CHANGE) && !ctx->got_error);

Hi,
Reading the H264 stream from a file is done in read_decoder_input_nalu(). For your use case, you should modify the function to copy your data into the NvBuffer instead of reading from the file.
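
As a hedged sketch of that modification (fill_nalu_from_stream() is a hypothetical replacement for read_decoder_input_nalu(); nalu_data/nalu_size come from your own receive path, one complete Annex-B NAL unit per call):

#include <cstdint>
#include <cstring>   // memcpy
#include "NvBuffer.h"

// Copy one encoded NAL unit from memory into a decoder output-plane buffer.
static int
fill_nalu_from_stream(NvBuffer *buffer, const uint8_t *nalu_data, size_t nalu_size)
{
    // The encoded bitstream goes into plane 0 of the output-plane NvBuffer.
    if (nalu_size > buffer->planes[0].length)
        return -1;                               // NAL unit larger than the buffer

    memcpy(buffer->planes[0].data, nalu_data, nalu_size);
    buffer->planes[0].bytesused = nalu_size;
    return 0;
}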

Hi DaneLLL:

That's great. Now I want to know: if I input NALU data, do I need to feed the SPS and PPS to the decoder, or do I only need to input the IDR frames?

Continued in 121421.