OMX.Nvidia.mjpeg.decoder - frame rate issue when using MediaCodec

Hi,

I am using OMX.Nvidia.mjpeg.decoder to decode 720p MJPEG frames from the camera HAL. The camera delivers 30 FPS, but after converting the frames to YUV with the NV MJPEG decoder the frame rate drops drastically, to only 12 FPS. Can anyone help me with this?

Even for CIF (352x288) I get only 20 FPS, so it seems the MJPEG decoder itself is taking too long.

Note:

  1. I am using the MediaCodec API for the MJPEG decoder.
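
For reference, the frame-rate numbers above are simple wall-clock measurements around the conversion call. A minimal sketch of that kind of measurement (nv_mjpegconversion() is the routine shown in the snippet further down in this thread; the timing wrapper itself is only illustrative):

#include <time.h>
#include <stdio.h>

// Conversion routine from the snippet posted below in this thread.
extern int nv_mjpegconversion(unsigned char *in, unsigned char *out,
                              int width, int height, int frameSize);

static double nowSeconds() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

// Decodes numFrames frames through the HW path and prints the effective FPS.
// In the real app each iteration would consume a fresh camera frame.
void measureDecodeFps(unsigned char *jpegFrame, unsigned char *yuvOut,
                      int width, int height, int frameSize, int numFrames) {
    double start = nowSeconds();
    for (int n = 0; n < numFrames; ++n) {
        nv_mjpegconversion(jpegFrame, yuvOut, width, height, frameSize);
    }
    double elapsed = nowSeconds() - start;
    printf("decoded %d frames in %.2f s => %.1f FPS\n",
           numFrames, elapsed, numFrames / elapsed);
}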

Hi Gopinath,
Have you tried running at max performance? See Jetson/Performance - eLinux.org.

Hi DaneLLL,

Running at max performance gives very little improvement (13 FPS). Please see the output of the tegrastats command below.

E/TegraStats( 6775): RAM 628/1739MB (lfb 173x4MB) cpu [0%,0%,0%,0%]@1938 EMC 36%@792 AVP 5%@300 VDE 444 GR3D 0%@852 EDP limit 0
E/TegraStats( 6775): RAM 628/1739MB (lfb 173x4MB) cpu [57%,74%,65%,64%]@1861 EMC 35%@792 AVP 4%@300 VDE 444 GR3D 4%@852 EDP limit 0
E/TegraStats( 6775): RAM 627/1739MB (lfb 173x4MB) cpu [51%,59%,66%,58%]@1938 EMC 35%@792 AVP 4%@300 VDE 444 GR3D 2%@852 EDP limit 0
E/TegraStats( 6775): RAM 628/1739MB (lfb 173x4MB) cpu [51%,61%,64%,56%]@1938 EMC 35%@792 AVP 4%@204 VDE 444 GR3D 0%@852 EDP limit 0
E/TegraStats( 6775): RAM 628/1739MB (lfb 173x4MB) cpu [44%,64%,66%,65%]@1912 EMC 36%@792 AVP 4%@300 VDE 444 GR3D 0%@852 EDP limit 0
E/TegraStats( 6775): RAM 628/1739MB (lfb 173x4MB) cpu [54%,72%,52%,49%]@1912 EMC 35%@792 AVP 4%@300 VDE 444 GR3D 0%@852 EDP limit 0

When I checked with a software decoder (libjpeg-turbo), I was able to achieve 30 FPS for the same 720p MJPEG video.
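
For reference, the software path here means decoding each frame with libjpeg-turbo's TurboJPEG API. A minimal sketch of such a per-frame decode, assuming a libjpeg-turbo version that provides the YUV decompress entry points (the wrapper and buffer names are illustrative):

#include <turbojpeg.h>

// Decode one MJPEG frame to planar YUV in the source's own subsampling
// (typically 4:2:2 for camera MJPEG), avoiding an RGB round trip.
// Returns 0 on success, -1 on failure. Error handling is shortened.
int sw_decode_frame(const unsigned char *jpegBuf, unsigned long jpegSize,
                    unsigned char *yuvOut, unsigned long yuvOutSize) {
    tjhandle handle = tjInitDecompress();
    if (!handle) return -1;

    int width = 0, height = 0, subsamp = 0, colorspace = 0;
    if (tjDecompressHeader3(handle, jpegBuf, jpegSize,
                            &width, &height, &subsamp, &colorspace) != 0 ||
        tjBufSizeYUV2(width, 1 /* pad */, height, subsamp) > yuvOutSize) {
        tjDestroy(handle);
        return -1;
    }

    int ret = tjDecompressToYUV2(handle, jpegBuf, jpegSize, yuvOut,
                                 width, 1 /* pad */, height, TJFLAG_FASTDCT);
    tjDestroy(handle);
    return ret == 0 ? 0 : -1;
}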

Below is a code snippet showing how I feed frames to and retrieve them from the HW decoder.

int nv_mjpegconversion(unsigned char *InputBuffer, unsigned char *OutputBuffer,
                       int InWidth, int InHeight, int Framesize) {
    using namespace android;

    static int64_t kTimeout = 0;   // dequeue timeout in microseconds
    int framecount = 0;
    status_t err;
    size_t i = 0;                  // single track, so index 0 throughout

    bool sawInputEOS = false;
    uint8_t *InBuffer;
    uint8_t *OutBuffer;

    for (;;) {
        if (!sawInputEOS) {
            size_t trackIndex = 0;

            // Feed exactly one compressed MJPEG frame, then signal EOS on the next pass.
            if (framecount > 0) {
                sawInputEOS = true;
            } else {
                CodecState *state = &stateByTrack2.editValueFor(trackIndex);
                size_t index;
                err = state->mCodec->dequeueInputBuffer(&index, kTimeout);

                if (err == OK) {
                    framecount++;
                    const sp<ABuffer> &buffer = state->mInBuffers.itemAt(index);

                    InBuffer = buffer->data();
                    memcpy(InBuffer, InputBuffer, Framesize);

                    int64_t timeUs = 10; // presentation timestamp (microseconds) -- what should this be set to?

                    uint32_t bufferFlags = 0;
                    err = state->mCodec->queueInputBuffer(
                            index,
                            0 /* offset */,
                            Framesize,
                            timeUs,
                            bufferFlags);

                    CHECK_EQ(err, (status_t)OK);
                } else {
                    CHECK_EQ(err, -EAGAIN);
                }
            }
        } else {
            // Input EOS reached: queue an empty buffer with the EOS flag once.
            CodecState *state = &stateByTrack2.editValueAt(0);

            if (!state->mSignalledInputEOS) {
                size_t index;
                status_t err =
                    state->mCodec->dequeueInputBuffer(&index, kTimeout);

                if (err == OK) {
                    ALOGV("signalling input EOS on track %zu", i);

                    err = state->mCodec->queueInputBuffer(
                            index,
                            0 /* offset */,
                            0 /* size */,
                            0ll /* timeUs */,
                            MediaCodec::BUFFER_FLAG_EOS);

                    CHECK_EQ(err, (status_t)OK);

                    state->mSignalledInputEOS = true;
                } else {
                    CHECK_EQ(err, -EAGAIN);
                }
            }
        }
        // With a single track, output EOS on it means decoding is done.
        bool sawOutputEOSOnAllTracks = true;
        CodecState *state = &stateByTrack2.editValueAt(i);
        if (!state->mSawOutputEOS) {
            sawOutputEOSOnAllTracks = false;
        }

        if (sawOutputEOSOnAllTracks) {
            break;
        }

        state = &stateByTrack2.editValueAt(0);

        if (state->mSawOutputEOS) {
            continue;
        }

        size_t index;
        size_t offset;
        size_t size;
        int64_t presentationTimeUs;
        uint32_t flags;
        status_t err = state->mCodec->dequeueOutputBuffer(
                &index, &offset, &size, &presentationTimeUs, &flags,
                kTimeout);

        if (err == OK) {
            // Convert the decoded YUV 4:2:2 (YUV16) data to YUV420SP.
            if (size > 0) {
                const sp<ABuffer> &Outbuffer = state->mOutBuffers.itemAt(index);
                OutBuffer = Outbuffer->data();
                yuv422sp_2_yuv420(OutputBuffer, OutBuffer, InWidth, InHeight);
            }

            err = state->mCodec->releaseOutputBuffer(index);
            CHECK_EQ(err, (status_t)OK);

            if (flags & MediaCodec::BUFFER_FLAG_EOS) {
                state->mSawOutputEOS = true;
            }
        } else if (err == INFO_OUTPUT_BUFFERS_CHANGED) {
            CHECK_EQ((status_t)OK,
                     state->mCodec->getOutputBuffers(&state->mOutBuffers));
        } else if (err == INFO_FORMAT_CHANGED) {
            sp<AMessage> format;
            CHECK_EQ((status_t)OK, state->mCodec->getOutputFormat(&format));
        } else {
            CHECK_EQ(err, -EAGAIN);
        }
    }

    return 0;
}
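
The snippet above relies on a yuv422sp_2_yuv420() helper that is not shown. A plausible minimal implementation, assuming the decoder output is YUV422 semi-planar (NV16-style: a full Y plane followed by an interleaved UV plane at full vertical resolution) and the destination is YUV420SP with the same U/V order:

#include <string.h>

// Hypothetical sketch of the yuv422sp_2_yuv420() helper used above.
// src: width*height Y bytes + width*height interleaved UV bytes (4:2:2).
// dst: width*height Y bytes + width*height/2 interleaved UV bytes (4:2:0).
// Chroma is downsampled vertically by simply dropping every other UV row.
static void yuv422sp_2_yuv420(unsigned char *dst, const unsigned char *src,
                              int width, int height) {
    const unsigned char *srcUV = src + width * height;
    unsigned char *dstUV = dst + width * height;

    // The luma plane is identical in both layouts.
    memcpy(dst, src, width * height);

    // Keep one interleaved UV row out of every two.
    for (int row = 0; row < height / 2; ++row) {
        memcpy(dstUV + row * width, srcUV + (row * 2) * width, width);
    }
}

Averaging each pair of UV rows instead of dropping one would give slightly better chroma quality at a small extra CPU cost.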

Hi Gopinath,
If you run the same app and see different results between the NV MJPEG decoder and libjpeg-turbo, it may be a limitation of the NV MJPEG decoder. On TK1 there is no dedicated HW MJPEG decoder, so it probably offers limited performance. If libjpeg-turbo achieves the target performance, I think you can use it.

Starting from TX1, we have a HW decoder called NVJPG, which can give better performance.

Hi DaneLLL,

I don’t think it is a limitation of the NV MJPEG hardware, because we are able to achieve 30 FPS for 720p MJPEG-to-YUV decoding on Linux (L4T) on TK1 using nvjpegdec:

DISPLAY=:0 gst-launch-0.10 v4l2src device=/dev/video0 queue-size=5 always-copy=false ! "image/jpeg,width=(int)1280, height=(int)720, interlaced=(boolean)false, pixel-aspect-ratio=(fraction)1/1" ! nvjpegdec ! fpsdisplaysink sync=false

The problem with using libjpeg-turbo is that it causes much higher CPU usage, since the conversion is done in software.

Hi Gopinath,
That is possible, as the implementations for Android and L4T are different. Let me check and see if I can provide more detail.

Do you have a patch for adding 'video/mjpeg' to the Android frameworks? It does not look like it is supported by default.

Hi DaneLLL,

You can download the patch from the link below.

Dropbox - Video_MJPEG_PATCH.zip

Note:

  1. I have added support for the video/mjpeg codec only, not for the extractor, since the extractor is not needed for my case. A rough sketch of the kind of change involved is shown below.
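
For readers without the Dropbox link, a rough sketch of the framework-side change such a patch needs follows; the actual patch may differ, the file paths are from contemporary AOSP trees, and the constant name is illustrative:

// Declare/define the MJPEG MIME type so the framework can map "video/mjpeg"
// to a codec entry (e.g. in frameworks/av/include/media/stagefright/MediaDefs.h
// and frameworks/av/media/libstagefright/MediaDefs.cpp):
extern const char *MEDIA_MIMETYPE_VIDEO_MJPEG;
const char *MEDIA_MIMETYPE_VIDEO_MJPEG = "video/mjpeg";

// In addition, the device's media_codecs.xml needs an entry mapping the
// "video/mjpeg" type to the OMX.Nvidia.mjpeg.decoder component so that
// MediaCodec::CreateByType(looper, "video/mjpeg", false /* encoder */)
// resolves to it.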

Hi Gopinath,
With the attached test app I can see 720p decoding at ~70 fps. Please check and compare with your use case.
mjpeg_decode.zip (148 KB)

Hi DaneLLL,

Thank you so much for your support. After porting your sample application, I am able to achieve 30 FPS for 720p video.

I have one more question. When using your test JPEG image (vlcsnap-2017-02-06-12h46m28s863.jpg), I get output in YUV420SP (NV21) format.

But with our camera's MJPEG frames, I get a different output format (YV16 semi-planar, with pixel format YUV).

Is this because of the input MJPEG frames? Is it possible to configure the YUV output format? (In my case it would be better to receive NV12 as the output.)

Hi Gopinath,
It looks like the output format depends on the compressed bitstream (most likely the chroma subsampling of the source JPEG frames) and is not configurable.

The decoder reports the format change in the following code section:

} else if (err == INFO_FORMAT_CHANGED) {
    sp<AMessage> format;
    CHECK_EQ((status_t)OK, state->mCodec->getOutputFormat(&format));

    ALOGV("INFO_FORMAT_CHANGED: %s", format->debugString().c_str());

Hi DaneLLL,

Okay, thank you.