Clarification on TX2 Max HW Encode Resolution

Can somebody clarify the max resolution at 30 fps of the omxh264enc AVC encoder bundled with the TX2? I could probably switch to HEVC if it supports a higher resolution on the TX2, but right now I'm using AVC. I'm seeing conflicting resolutions within NVIDIA's own materials. Is it 4K x 4K or 4K x 2K?

4k x 2k
Tech specs:
https://www.nvidia.com/en-us/autonomous-machines/embedded-systems-dev-kits-modules/
“4K x 2K 60 Hz Encode”
https://devtalk.nvidia.com/default/topic/1003950/jetson-tx2/encoding-yuv420-received-from-csi/post/5139336/#5139336 (copy/pasted data sheet) (3840x2160):
https://a70ad2d16996820e6285-3c315462976343d903d5b3a03b69072d.ssl.cf2.rackcdn.com/c23d65a443df872eb4f76c0cc6941a84

4k x 4k
https://devtalk.nvidia.com/default/topic/1015571/?comment=5253645
“Max width is 4096. Max height is 4096
“video/x-raw,width=4096,height=4096,…”

$ gst-inspect-1.0 omxh264enc

video/x-h264
width: [ 16, 4096 ]
height: [ 16, 4096 ]

No 4k listed (???):
https://developer.download.nvidia.com/assets/embedded/downloads/secure/tx2/Jetson%20TX2%20Module%20Data%20Sheet/Jetson_TX2_TX2i_Module_DataSheet_v01.pdf?phihZZK4QOixc4zV5sUdrlFS04aS-F9dWQPmp9TN0cA9oBfelNuROqlshSiJ7DStINBZQUt5Tf5xA3mo_VPbR9gT1CGdxfGpAPa1Wl8Ib9i902FbZ84UMgh335wY2Te9ZYfradUDfdIZBZIV0TNt090a6zh-BoHNp5ay1pAMLl_ceT7jqcbqT6xU6ELdtvW-uJxSp-84XhtwIvXdaNltcOvo3d-JhMp5VhuqjBvQalHA

The TX2 data sheet, page 2, doesn't even list 4K as an encode resolution for either HEVC or AVC, but I know it does 4K x 2K for AVC at least at 1x.

Any clarification on this would be appreciated. Is it 4K in either height or width but limited to Level 5.2 (~9.4M luma samples per frame) or something?
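As a sanity check on that level math (assuming the standard H.264 level limits, where Level 5.2 caps a frame at 36,864 macroblocks of 16x16 luma samples, about 9.4M samples):

```shell
# Macroblocks per frame for the two candidate resolutions vs. the Level 5.2 cap (36864 MBs)
echo "4096x2160: $(( 4096 * 2160 / 256 )) MBs"   # 34560 - fits under Level 5.2
echo "3840x3840: $(( 3840 * 3840 / 256 )) MBs"   # 57600 - above the Level 5.2 cap
```

So 4K x 2K sits just inside Level 5.2, while 3840x3840 is well beyond it.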

Please refer to [Software Features]->[Video Encoders] in the documentation:
https://developer.nvidia.com/embedded/dlc/l4t-documentation-28-2-ga

Gotcha, thanks for the clarification.

Would you be able to say if there are any official plans to support 4K x 4K in the HW encoder? Be it in the TX2 or a later Jetson module if one comes out (not sure whether it's a HW limitation in the TX2 or not)?

The 4k x 2k limit is unfortunately causing us problems that are difficult to work around.

For 4Kx4K, do you mean 3840x3840? Also, what is the required frame rate?

Yeah, I think 4096 is what we’re using now - but for argument’s sake let’s just say we instead use 3840x3840.

30fps is the required frame rate for our application. But is it capable of doing 3840x3840 at a lower FPS?

Hi,
We have verified 3840x3840 encoding with the patch below applied to 01_video_encode:

diff --git a/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp b/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
index 3f048f4..cf014fe 100644
--- a/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
+++ b/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
@@ -131,7 +131,7 @@ void CloseCrc(Crc **phCrc)
 static int
 write_encoder_output_frame(ofstream * stream, NvBuffer * buffer)
 {
-    stream->write((char *) buffer->planes[0].data, buffer->planes[0].bytesused);
+    //stream->write((char *) buffer->planes[0].data, buffer->planes[0].bytesused);
     return 0;
 }
 
@@ -581,6 +581,9 @@ main(int argc, char *argv[])
     int error = 0;
     bool eos = false;
     unsigned int input_frames_queued_count = 0;
+    uint32_t size = 0;
+    int count = 0;
+    int encode_frames = 3000;
 
     set_defaults(&ctx);
 
@@ -642,6 +645,7 @@ main(int argc, char *argv[])
 
     ctx.enc = NvVideoEncoder::createVideoEncoder("enc0");
     TEST_ERROR(!ctx.enc, "Could not create encoder", cleanup);
+    ctx.enc->enableProfiling();
 
     // It is necessary that Capture Plane format be set before Output Plane
     // format.
@@ -1108,6 +1112,11 @@ main(int argc, char *argv[])
         }
         if (read_video_frame(ctx.in_file, *buffer) < 0)
         {
+            if (count < encode_frames) {
+                count++;
+                buffer->planes[0].bytesused = size;
+            } else {
+
             cerr << "Could not read complete frame from input file" << endl;
             v4l2_buf.m.planes[0].bytesused = 0;
             if(ctx.b_use_enc_cmd)
@@ -1122,6 +1131,15 @@ main(int argc, char *argv[])
                 v4l2_buf.m.planes[0].m.userptr = 0;
                 v4l2_buf.m.planes[0].bytesused = v4l2_buf.m.planes[1].bytesused = v4l2_buf.m.planes[2].bytesused = 0;
             }
+
+            }
+        } else {
+            if (buffer->planes[0].bytesused) {
+                size = buffer->planes[0].bytesused;
+            } else if (count < encode_frames) {
+                count++;
+                buffer->planes[0].bytesused = size;
+            }
         }
 
         if (ctx.input_metadata)
@@ -1295,6 +1313,7 @@ cleanup:
 
         CloseCrc(&ctx.pBitStreamCrc);
     }
+    ctx.enc->printProfilingStats(std::cout);
 
     if(ctx.output_memory_type == V4L2_MEMORY_DMABUF)
     {

The result looks good:

nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ gst-launch-1.0 videotestsrc pattern=1 num-buffers=10 ! 'video/x-raw,format=I420,width=3840,height=3840' ! filesink location= ~/4k.yuv
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:02.316441195
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ ./video_encode ~/4k.yuv 3840 3840 H264 4k.264 -hpt 1
Failed to query video capabilities: Inappropriate ioctl for device
NvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
NvH264MSEncSetCommonStreamAttribute: level not supported
875967048
842091865
NvH264MSEncSetCommonStreamAttribute: level not supported
NvH264MSEncSetCommonStreamAttribute: LevelIdc conformance violation
NvH264MSEncSetCommonStreamAttribute: LevelIdc conformance violation
NvH264MSEncSetCommonStreamAttribute: LevelIdc conformance violation
===== MSENC blits (mode: 1) into tiled surfaces =====
Could not read complete frame from input file
File read complete.
----------- Element = enc0 -----------
Total Profiling time = 60.9925
Average FPS = 49.3503
Total units processed = 3011
Average latency(usec) = 140874
Minimum latency(usec) = 39326
Maximum latency(usec) = 156267
-------------------------------------
App run was successful
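For reference, the reported average follows from the stats above (units processed over total profiling time), and the ~141 ms average latency does not contradict ~49 FPS, since the encoder pipelines multiple frames in flight:

```shell
# Throughput: 3011 frames over ~61 s of profiling time
awk 'BEGIN { printf "throughput: %.2f FPS\n", 3011 / 60.9925 }'
# If frames were handled strictly one at a time, ~141 ms latency would cap it at:
awk 'BEGIN { printf "serialized: %.2f FPS\n", 1e6 / 140874 }'
```

(The first figure lands near the reported 49.35; the small gap is presumably just how the profiling time was measured.)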

Oh great, we will try this immediately and let you know! Thanks!

So we mainly use GStreamer pipelines, not direct NV access like the simple encoder sample uses. (We have a direct NV path like the simple encoder, but we don't use it regularly, as we saw no advantage over normal GStreamer pipelines in code.) Would that be an issue? We're using the normal omxh264enc plugin for encoding. I can try running the simple encoder example with a longer test sequence, but here at least it seems 30 fps is not sustained over a longer duration. I would think there's no performance trade-off between the two approaches, but please correct me if I'm wrong.

I generated a 10 second YUV using this:

gst-launch-1.0 videotestsrc pattern=1 num-buffers=300 ! 'video/x-raw,format=I420,width=3840,height=3840' ! filesink location=4k.yuv

Then I encoded it using this:

gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw, format=(string)I420,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! filesink location=vid.MP4
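(The blocksize in that filesrc line is one raw I420 frame: a full-resolution Y plane plus quarter-size U and V planes, i.e. width x height x 3/2, so each buffer carries exactly one frame:)

```shell
# One I420 frame at 3840x3840: Y is w*h bytes, U and V are w*h/4 bytes each
echo $(( 3840 * 3840 * 3 / 2 ))   # 22118400, the blocksize used above
```

At 300 frames, the resulting 10-second 4k.yuv file is about 6.6 GB.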

Measured fps by adding to the above:

gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw, format=(string)I420,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! fpsdisplaysink video-sink="filesink location=vid.MP4" text-overlay=false -v

The results weren't very promising. Over 4 runs, it seems to start off very strong like yours did but then quickly falls off; I'm thinking maybe it's throttling, voltage or heat related? Over those 4 runs on a Jetson without our software running on it (to get a baseline) we got 21.33, 19.56, 25.86, and 29.57 FPS. I later got 2 'quick' runs with 1 s frame rates hitting as high as 70 FPS, denoted as "FastRun" in the full results below; their averages were 28.14 and 46.16 FPS. No changes were made to get these high rates, and they usually don't happen, only on occasion. Most of the time the 1 s FPS doesn't go above 30.

I have the full results compilation here: https://drive.google.com/file/d/1RWl-p535FdvdJIx_Y0KgU5iHohXsGWsc/view?usp=sharing
As you can see, it typically starts high at 30+ (or spikes there soon after starting) and then drops off into the teens. So it's definitely capable of 30 FPS (in short bursts, anyway), but it doesn't seem sustainable over a full clip.

I can't figure out why it's so erratic. I feel like it has to be hitting some limiter. The 1 s frame rates have varied anywhere from 12 to 73 FPS, and the 10 s average frame rates from 19 to 46 FPS. I don't get why there's such a discrepancy between runs.
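One thing worth noting about the averages: the overall FPS is a time-weighted mean, so it is dominated by the slow stretch, and small shifts in when the drop-off hits can swing the run average a lot. A rough illustration with hypothetical round numbers in the spirit of the logs (not measured values):

```shell
# A clip with 200 frames at a 65 FPS burst, then 100 frames at a 12 FPS steady
# state: overall FPS = total frames / total time, not the mean of 65 and 12
awk 'BEGIN { t = 200/65 + 100/12; printf "%.1f FPS overall\n", 300 / t }'
```

This lands in the mid-20s, so a run where the drop-off comes a few seconds later can easily average 40+ while an early drop-off averages around 20, without the encoder's two speeds themselves changing.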

In terms of performance, here is the output of:

sudo ~/jetson_clocks.sh --show

We're loading a custom clock config which simply sets the minimum clocks to the maximum values, to keep them pinned. I repeated the test in power mode 0 (nvpmodel -m 0) and the behavior seems essentially the same.

SOC family:tegra186  Machine:quill
Online CPUs: 0,3-5
CPU Cluster Switching: Disabled
cpu0: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu1: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=2035200
cpu2: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=2035200
cpu3: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu4: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu5: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
GPU MinFreq=1300500000 MaxFreq=1300500000 CurrentFreq=1300500000
EMC MinFreq=40800000 MaxFreq=1866000000 CurrentFreq=1866000000 FreqOverride=1
Fan: speed=80

Would be really great if I could figure out why so much variance between runs. Any ideas?

Thank you for your help so far!

Please refer to
https://devtalk.nvidia.com/default/topic/1032771/jetson-tx2/no-encoder-perfomance-improvement-before-after-jetson_clocks-sh/post/5255605/#5255605

I see, thank you. It would be really great if building this gets us a consistent FPS.

I'm having trouble getting the 28.1 OpenMAX encoder plugin to build.

I'm having the same EGL check issue as poster #1 here: https://devtalk.nvidia.com/default/topic/1023695/how-to-compile-gstomx-for-tx2-/

The tail of my ./configure output:

configure: using GStreamer Base Plugins in /usr/lib/aarch64-linux-gnu/gstreamer-1.0
checking for GST_EGL... configure: error: Package requirements (gstreamer-egl-1.0) were not met:

No package 'gstreamer-egl-1.0' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables GST_EGL_CFLAGS
and GST_EGL_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details

However, the link describing how he disabled the EGL check and got around the issue is broken, and it's not immediately obvious to me how to disable the check.

I saw you said in post #2 that gstomx depends on the EGL source, so I tried to build EGL and followed the instructions in the EGL README, which seem to be just configure, make, make install (optionally prefixed with ./autogen.sh). But the EGL make isn't building the gstreamer-egl-1.0 library that I understand I need. Or is building EGL not necessary, and the gstomx build is only supposed to reference the EGL source via a path? I'm a little confused about building gstomx here.

When I run ./autogen.sh for EGL followed by ./configure, it ends with the following, which makes me think it's not configured properly.

configure: *** Plug-ins without external dependencies that will be built:

configure: *** Plug-ins without external dependencies that will NOT be built:

configure: *** Plug-ins that have NOT been ported:

configure: *** Plug-ins with dependencies that will be built:

configure: *** Plug-ins with dependencies that will NOT be built:
    eglgles

EGL make output (builds almost nothing):

$ make
(CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/nvidia/omxh264_build/gstegl_src/gst-egl/missing autoheader)
rm -f stamp-h1
touch config.h.in
cd . && /bin/bash ./config.status config.h
config.status: creating config.h
config.status: config.h is unchanged
make  all-recursive
make[1]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl'
Making all in gst-libs
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs'
Making all in gst
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst'
Making all in egl
make[4]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst/egl'
  CC       libgstegl_1.0_la-egl.lo
  CCLD     libgstegl-1.0.la
make[4]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst/egl'
make[4]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst'
make[4]: Nothing to be done for 'all-am'.
make[4]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst'
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs/gst'
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/gst-libs'
Making all in ext
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/ext'
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/ext'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/ext'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/ext'
Making all in pkgconfig
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/pkgconfig'
  CP     gstreamer-egl-1.0.pc
  CP     gstreamer-egl-1.0-uninstalled.pc
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/pkgconfig'
Making all in m4
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/m4'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/m4'
Making all in common
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common'
Making all in m4
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common/m4'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common/m4'
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/common'
Making all in docs
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs'
Making all in libs
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs/libs'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs/libs'
make[3]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/docs'
Making all in po
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/po'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/po'
Making all in tools
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/tools'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl/tools'
make[2]: Entering directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl'
make[2]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl'
make[1]: Leaving directory '/home/nvidia/omxh264_build/gstegl_src/gst-egl'

Please build gstegl first, and then gstomx.

Yeah, I figured that's what was needed; that's where the problems above came from.

I’ll keep playing with it, hopefully will get it to build.

So I got the libgstomx library built; it just needed some headers and such from the libEGL source to build completely, so I pointed it at them and it was good to go.

So now the library is built and tested, and it works. The problem still persists, though, even with the patch in place; the behavior seems the same with the original plugin vs. the built one with the patch. Below are some samples, but it seems to suddenly fall off to 13 fps after being at 50+ for maybe 5 seconds. It's very odd behavior, and looks like thermal throttling or something of that nature. Any thoughts?

In the logs below, note the ‘current’ field of fpsdisplaysink0.

Original plugin, nvpmodel 0. Starts strong at 50-70 fps for 6 seconds, then suddenly drops to 12-13 fps.

$ gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw, format=(string)I420,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! fpsdisplaysink video-sink="filesink location=vid.MP4" text-overlay=false -v
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:sink: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
===== MSENC blits (mode: 1) into tiled surfaces =====
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1"
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:video_0: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:src: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0.GstPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 26, dropped: 0, current: 50.08, average: 50.08
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 60, dropped: 0, current: 67.94, average: 58.85
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 96, dropped: 0, current: 71.94, average: 63.16
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 132, dropped: 0, current: 70.20, average: 64.93
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 169, dropped: 0, current: 69.57, average: 65.90
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 202, dropped: 0, current: 63.26, average: 65.45
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 209, dropped: 0, current: 12.20, average: 57.11
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 215, dropped: 0, current: 11.75, average: 51.55
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 221, dropped: 0, current: 11.66, average: 47.17
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 227, dropped: 0, current: 11.76, average: 43.69
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 234, dropped: 0, current: 12.53, average: 40.66
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 240, dropped: 0, current: 11.68, average: 38.29
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 247, dropped: 0, current: 12.00, average: 36.05
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 253, dropped: 0, current: 11.47, average: 34.31
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 259, dropped: 0, current: 11.70, average: 32.84
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 265, dropped: 0, current: 11.60, average: 31.53
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 271, dropped: 0, current: 11.41, average: 30.35
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 278, dropped: 0, current: 12.12, average: 29.24
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 285, dropped: 0, current: 12.46, average: 28.30
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 291, dropped: 0, current: 11.91, average: 27.52
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 298, dropped: 0, current: 12.18, average: 26.73

Built plugin: sustained the high speed for the same ~6 seconds, then dropped off.

$ gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw, format=(string)I420,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! fpsdisplaysink video-sink="filesink location=vid.MP4" text-overlay=false -v
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:sink: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
===== MSENC blits (mode: 1) into tiled surfaces =====
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1"
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:video_0: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:src: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0.GstPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 23, dropped: 0, current: 45.55, average: 45.55
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 62, dropped: 0, current: 75.01, average: 60.50
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 100, dropped: 0, current: 75.57, average: 65.46
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 138, dropped: 0, current: 75.12, average: 67.86
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 176, dropped: 0, current: 74.52, average: 69.19
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 204, dropped: 0, current: 49.90, average: 65.71
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 211, dropped: 0, current: 12.55, average: 57.61
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 218, dropped: 0, current: 12.22, average: 51.48
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 224, dropped: 0, current: 11.94, average: 47.28
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 231, dropped: 0, current: 12.01, average: 43.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 237, dropped: 0, current: 11.60, average: 40.60
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 244, dropped: 0, current: 12.05, average: 38.01
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 251, dropped: 0, current: 12.30, average: 35.92
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 258, dropped: 0, current: 12.47, average: 34.18
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 265, dropped: 0, current: 12.51, average: 32.68
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 272, dropped: 0, current: 11.96, average: 31.29
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 279, dropped: 0, current: 12.08, average: 30.09
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 286, dropped: 0, current: 12.04, average: 29.02

This is a retry with the built plugin; it held out for a long time before dropping off, but this obviously isn't consistent enough for long recordings and the like:

$ gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw, format=(string)I420,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! fpsdisplaysink video-sink="filesink location=vid.MP4" text-overlay=false -v
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:sink: caps = "video/x-raw\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ format\=\(string\)I420\,\ framerate\=\(fraction\)30/1"
===== MSENC blits (mode: 1) into tiled surfaces =====
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1"
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:src: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:video_0: caps = "video/x-h264\,\ alignment\=\(string\)au\,\ stream-format\=\(string\)avc\,\ width\=\(int\)3840\,\ height\=\(int\)3840\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ framerate\=\(fraction\)30/1\,\ codec_data\=\(buffer\)014240150301000a6742403495a00f001e1901000468ce3c80"
/GstPipeline:pipeline0/GstQTMux:qtmux0.GstPad:src: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0.GstPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = "video/quicktime\,\ variant\=\(string\)apple"
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 26, dropped: 0, current: 51.34, average: 51.34
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 63, dropped: 0, current: 73.68, average: 62.47
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 100, dropped: 0, current: 71.40, average: 65.50
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 138, dropped: 0, current: 73.62, average: 67.55
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 176, dropped: 0, current: 74.29, average: 68.90
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 211, dropped: 0, current: 69.42, average: 68.99
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 249, dropped: 0, current: 73.35, average: 69.62
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 287, dropped: 0, current: 70.25, average: 69.70
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 294, dropped: 0, current: 12.13, average: 62.62
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 301, dropped: 0, current: 11.95, average: 57.00

Maybe there are some hard limits in the encoder for thermals or something? I’ll try building the simple encoder example with the patch integrated and see what I get there.

Hi greg2,
Did you replace the original libgstomx.so?

/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so

Also, please check via tegrastats whether the encoder engine always runs at max clocks.

I made a mistake: I had used the omxh264 source from where the original patch was posted, but that wasn’t the 28.1 source, so I switched to the 28.1 source to match my BSP. Unfortunately, same results: spikes at 50-70 FPS for the first few seconds, then it levels off at 12-13 FPS.

Anyway, yes, I replaced it, and I verified the replacement is functional and actually in use via (A) print statements and (B) gst-inspect (the source date shown in gst-inspect is Jul 2).

As for tegrastats, I ran ‘sudo ~/tegrastats’.
The MSENC field appears when encoding takes place. The value (frequency in MHz?) is 1164.
I noticed in other posts that when other people do this their value is 1113; I’m not sure whether this discrepancy has any significance.
As the FPS fluctuates (i.e., whether the current FPS hits 70 when it first starts or drops to 5 FPS for a second, as it does on very rare occasion), the value stays at exactly 1164, never anything different. So if that’s the max frequency, it’s pinned the whole time.
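For reference, the MSENC frequency can be pulled out of a tegrastats line with a quick script. The sample line below is a hand-written approximation of TX2 output, not a captured log; the exact field layout varies across L4T releases, so treat the format as an assumption:

```python
import re

# Hand-written sample resembling TX2 tegrastats output while encoding
# (field layout is an assumption; it differs across L4T releases).
sample = "RAM 3421/7851MB ... EMC 11%@1600 ... MSENC 1164 ... GR3D 0%@1300"

# Grab the MSENC frequency (MHz) if the field is present.
match = re.search(r"MSENC\s+(\d+)", sample)
msenc_mhz = int(match.group(1)) if match else None
print(msenc_mhz)  # 1164
```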

I verified using print statements that the max frequency flag is being set.

I’m going to build the simple encoder example and see if I can get better results with direct NV access using the patch you provided.

EDIT:
Every once in a while there’s a fast encode like the one shown below, but this is very much not the typical case. Usually it only starts at 50-70 FPS for the first few seconds, then immediately settles at 12-13 FPS for the duration of the encode, as I posted in post #13.

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 10, dropped: 0, current: 18.08, average: 18.08
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 19, dropped: 0, current: 17.69, average: 17.89
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 43, dropped: 0, current: 47.48, average: 27.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 80, dropped: 0, current: 71.32, average: 38.35
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 117, dropped: 0, current: 73.80, average: 45.22
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 154, dropped: 0, current: 73.68, average: 49.84
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 192, dropped: 0, current: 75.44, average: 53.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 230, dropped: 0, current: 74.75, average: 56.07
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 268, dropped: 0, current: 75.46, average: 58.19

I have an interesting finding today with the OpenMAX plugin. I got one of the really fast runs (it happened directly after setting nvpmodel to 0, but on consecutive runs it falls back to its normal behavior of 12-13 FPS), shown below:

last-message = rendered: 21, dropped: 0, current: 41.82, average: 41.82
last-message = rendered: 59, dropped: 0, current: 74.73, average: 58.38
last-message = rendered: 97, dropped: 0, current: 75.41, average: 64.05
last-message = rendered: 134, dropped: 0, current: 73.63, average: 66.44
last-message = rendered: 172, dropped: 0, current: 74.89, average: 68.14
last-message = rendered: 210, dropped: 0, current: 75.85, average: 69.41
last-message = rendered: 247, dropped: 0, current: 73.97, average: 70.06
last-message = rendered: 285, dropped: 0, current: 75.08, average: 70.69

Then back to normal…:

current: 52.80, average: 52.80
current: 72.69, average: 62.72
current: 73.97, average: 66.42
current: 75.34, average: 68.64
current: 74.50, average: 69.82
current: 17.92, average: 61.24
current: 11.99, average: 54.27
current: 12.12, average: 48.36
current: 12.54, average: 44.08
...

Except this time, when it was fast, I had passed the flag “MeasureEncoderLatency=true” to OMX. I’m a little confused by the results.

KPI: omx: frameNumber= 0 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 1 encoder= 62 ms pts= -1
KPI: omx: frameNumber= 2 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 3 encoder= 99 ms pts= -1
KPI: omx: frameNumber= 4 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 5 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 6 encoder= 57 ms pts= -1
KPI: omx: frameNumber= 7 encoder= 53 ms pts= -1
KPI: omx: frameNumber= 8 encoder= 50 ms pts= -1
KPI: omx: frameNumber= 9 encoder= 65 ms pts= -1
KPI: omx: frameNumber= 10 encoder= 58 ms pts= -1
KPI: omx: frameNumber= 11 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 12 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 13 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 14 encoder= 54 ms pts= -1
KPI: omx: frameNumber= 15 encoder= 57 ms pts= -1
KPI: omx: frameNumber= 16 encoder= 64 ms pts= -1
KPI: omx: frameNumber= 17 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 18 encoder= 54 ms pts= -1
KPI: omx: frameNumber= 19 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 20 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 21 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 22 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 23 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 24 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 25 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 26 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 27 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 28 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 29 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 30 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 31 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 32 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 33 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 34 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 35 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 36 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 37 encoder= 90 ms pts= -1
KPI: omx: frameNumber= 38 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 39 encoder= 63 ms pts= -1
KPI: omx: frameNumber= 40 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 41 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 42 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 43 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 44 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 45 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 46 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 47 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 48 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 49 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 50 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 51 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 52 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 53 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 54 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 55 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 56 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 57 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 58 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 59 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 60 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 61 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 62 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 63 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 64 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 65 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 66 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 67 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 68 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 69 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 70 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 71 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 72 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 73 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 74 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 75 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 76 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 77 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 78 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 79 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 80 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 81 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 82 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 83 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 84 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 85 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 86 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 87 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 88 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 89 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 90 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 91 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 92 encoder= 86 ms pts= -1
KPI: omx: frameNumber= 93 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 94 encoder= 89 ms pts= -1
KPI: omx: frameNumber= 95 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 96 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 97 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 98 encoder= 88 ms pts= -1
KPI: omx: frameNumber= 99 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 100 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 101 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 102 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 103 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 104 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 105 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 106 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 107 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 108 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 109 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 110 encoder= 62 ms pts= -1
KPI: omx: frameNumber= 111 encoder= 66 ms pts= -1
KPI: omx: frameNumber= 112 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 113 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 114 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 115 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 116 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 117 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 118 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 119 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 120 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 121 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 122 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 123 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 124 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 125 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 126 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 127 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 128 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 129 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 130 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 131 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 132 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 133 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 134 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 135 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 136 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 137 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 138 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 139 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 140 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 141 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 142 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 143 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 144 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 145 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 146 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 147 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 148 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 149 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 150 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 151 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 152 encoder= 88 ms pts= -1
KPI: omx: frameNumber= 153 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 154 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 155 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 156 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 157 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 158 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 159 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 160 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 161 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 162 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 163 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 164 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 165 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 166 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 167 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 168 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 169 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 170 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 171 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 172 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 173 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 174 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 175 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 176 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 177 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 178 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 179 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 180 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 181 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 182 encoder= 93 ms pts= -1
KPI: omx: frameNumber= 183 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 184 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 185 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 186 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 187 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 188 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 189 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 190 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 191 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 192 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 193 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 194 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 195 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 196 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 197 encoder= 70 ms pts= -1
KPI: omx: frameNumber= 198 encoder= 64 ms pts= -1
KPI: omx: frameNumber= 199 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 200 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 201 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 202 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 203 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 204 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 205 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 206 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 207 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 208 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 209 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 210 encoder= 93 ms pts= -1
KPI: omx: frameNumber= 211 encoder= 90 ms pts= -1
KPI: omx: frameNumber= 212 encoder= 91 ms pts= -1
KPI: omx: frameNumber= 213 encoder= 94 ms pts= -1
KPI: omx: frameNumber= 214 encoder= 99 ms pts= -1
KPI: omx: frameNumber= 215 encoder= 91 ms pts= -1
KPI: omx: frameNumber= 216 encoder= 90 ms pts= -1
KPI: omx: frameNumber= 217 encoder= 94 ms pts= -1
KPI: omx: frameNumber= 218 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 219 encoder= 89 ms pts= -1
KPI: omx: frameNumber= 220 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 221 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 222 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 223 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 224 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 225 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 226 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 227 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 228 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 229 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 230 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 231 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 232 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 233 encoder= 81 ms pts= -1
KPI: omx: frameNumber= 234 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 235 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 236 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 237 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 238 encoder= 86 ms pts= -1
KPI: omx: frameNumber= 239 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 240 encoder= 91 ms pts= -1
KPI: omx: frameNumber= 241 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 242 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 243 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 244 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 245 encoder= 84 ms pts= -1
KPI: omx: frameNumber= 246 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 247 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 248 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 249 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 250 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 251 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 252 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 253 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 254 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 255 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 256 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 257 encoder= 86 ms pts= -1
KPI: omx: frameNumber= 258 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 259 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 260 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 261 encoder= 89 ms pts= -1
KPI: omx: frameNumber= 262 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 263 encoder= 85 ms pts= -1
KPI: omx: frameNumber= 264 encoder= 91 ms pts= -1
KPI: omx: frameNumber= 265 encoder= 94 ms pts= -1
KPI: omx: frameNumber= 266 encoder= 87 ms pts= -1
KPI: omx: frameNumber= 267 encoder= 86 ms pts= -1
KPI: omx: frameNumber= 268 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 269 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 270 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 271 encoder= 92 ms pts= -1
KPI: omx: frameNumber= 272 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 273 encoder= 83 ms pts= -1
KPI: omx: frameNumber= 274 encoder= 79 ms pts= -1
KPI: omx: frameNumber= 275 encoder= 82 ms pts= -1
KPI: omx: frameNumber= 276 encoder= 80 ms pts= -1
KPI: omx: frameNumber= 277 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 278 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 279 encoder= 77 ms pts= -1
KPI: omx: frameNumber= 280 encoder= 72 ms pts= -1
KPI: omx: frameNumber= 281 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 282 encoder= 71 ms pts= -1
KPI: omx: frameNumber= 283 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 284 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 285 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 286 encoder= 76 ms pts= -1
KPI: omx: frameNumber= 287 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 288 encoder= 74 ms pts= -1
KPI: omx: frameNumber= 289 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 290 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 291 encoder= 75 ms pts= -1
KPI: omx: frameNumber= 292 encoder= 78 ms pts= -1
KPI: omx: frameNumber= 293 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 294 encoder= 73 ms pts= -1
KPI: omx: frameNumber= 295 encoder= 68 ms pts= -1
KPI: omx: frameNumber= 296 encoder= 69 ms pts= -1
KPI: omx: frameNumber= 297 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 298 encoder= 67 ms pts= -1
KPI: omx: frameNumber= 299 encoder= 69 ms pts= -1

Why is it reporting 60-70 ms of encoder latency per frame? That doesn’t seem possible given that 33 ms or less per frame is needed to achieve 30 fps; for the 60-70 FPS I was getting, this number should be ~14-16 ms. Am I misunderstanding something about the way the log is printed?
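One possible reading (my own interpretation, not confirmed by the logs): the KPI figure may be per-frame *latency* through a pipelined encoder rather than the serial per-frame cost. If several frames are in flight at once, latency can exceed the frame period without capping FPS. A quick sanity check under that assumption:

```python
# If the encoder is pipelined, observed throughput and per-frame latency
# are independent: frames_in_flight ~= latency * fps.
latency_s = 0.070   # ~70 ms per frame from the KPI log
fps = 70.0          # observed throughput during a fast run

frames_in_flight = latency_s * fps
print(round(frames_in_flight, 1))  # ~4.9 frames concurrently in the encoder

# Conversely, a strictly serial encoder at 70 ms/frame could only reach:
serial_fps = 1.0 / latency_s
print(round(serial_fps, 1))  # ~14.3 fps, matching the ~14-16 ms intuition above
```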

Also, for sanity’s sake, does nvpmodel have any effect on HW encoder performance? I feel like I get those fast encode spikes more frequently when I set nvpmodel to 0, but it may be my imagination.

Below is the result with videotestsrc:

nvidia@tegra-ubuntu:~$ gst-launch-1.0 videotestsrc num-buffers=2000 ! 'video/x-raw, format=(string)NV12,width=240,height=240' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=3840,format=NV12' ! omxh264enc profile=high bitrate=30000000 ! 'video/x-h264,level=(string)5.2' ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
NvH264MSEncSetCommonStreamAttribute: LevelIdc conformance violation
NvH264MSEncSetCommonStreamAttribute: LevelIdc conformance violation
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:28.886546873

2000 / 28.88 ~= 69fps

Please configure profile=high,level=5.2 and try again.

@DaneLLL - is videotestsrc a real-world example of the picture flowing from an actual image sensor? Because 4K x 4K @ 69 fps is well over the 4K x 2K @ 60 spec from the TX2 datasheet. We also observe that the queue around the encoder is only 6 frames, which doesn’t allow much headroom in buffering when some frames take longer.

Good results so far!
Changing level & profile had no effect.

When I run your command I get about the same, ~28.08 seconds (~71 FPS). When I add fpsdisplaysink to verify the real-time FPS (the only change to the pipeline), it apparently limits the pipeline to 30 FPS: real-time FPS is 30, and the run takes 1:06.66 (~30.00 FPS). That was surprising.

So that’s definitely an improvement. Not sure why fpsdisplaysink limits the frame rate, but anyway…

I played with the pipeline, and when I set (memory:NVMM) in the caps I got an error (below).
So I re-created the raw file using NV12 as the format, as you had, instead of I420, but the result was the same YUV file.
Pipeline:

gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw(memory:NVMM),format=(string)NV12,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! filesink location=vid.MP4

Yielded this Error:

NvMMLiteOpen : Block : BlockType = 4 
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
NvMSEncCheckInputSurface:327: Only Blocklinear and Pitch surface format allowed
VENC: VideoEncInputProcessing: 4367:  VideoEncFeedImage failed. Input buffer 0 sent
VENC: NvMMLiteVideoEncDoWork: 4989: BlockSide error 0x4
Event_BlockError from 0BlockAvcEnc : Error code - 4

I’m not sure why it doesn’t like the NV12/I420 input buffer from file, given that your example sets videotestsrc output as an NV12 buffer. Do you know what’s going on? I don’t see why the input is different… NV12 should be NV12.

As a hack (though it should be redundant), I added nvvidconv and added the NVMM feature to the caps:

gst-launch-1.0 filesrc location=4k.yuv blocksize=22118400 ! 'video/x-raw,format=(string)NV12,width=3840,height=3840' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)NV12,width=(int)3840,height=(int)3840,framerate=(fraction)30/1' ! omxh264enc ! qtmux ! filesink location=vid.MP4

The duration is consistently 5.76 s, or ~52.08 FPS (300-frame file).

So it seems clear to me that ‘(memory:NVMM)’ is what makes the difference. How do I avoid using nvvidconv?

EDIT:
WEIRD side note: if I change ‘NV12’ in the above to ‘I420’, the encode consistently takes 7.75 s (38.70 FPS) instead of 5.76 s. The input file is exactly the same; I’m not sure why there’s a ~25% decrease in performance. I’m guessing it’s something related to memory alignment.
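For what it’s worth, the blocksize=22118400 in the filesrc pipeline is exactly one 3840x3840 4:2:0 frame. NV12 and I420 carry the same number of bytes per frame (they differ only in chroma plane layout), which is consistent with the identical file contents noted above; the speed difference would then come from how the planes are laid out, not how much data moves. A quick check:

```python
def yuv420_frame_bytes(width, height):
    """Bytes per 4:2:0 frame: a full-resolution luma plane plus two
    quarter-resolution chroma planes. NV12 (interleaved UV) and I420
    (separate planar U and V) have the same total size."""
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)
    return luma + chroma

size = yuv420_frame_bytes(3840, 3840)
print(size)  # 22118400, matching the blocksize used with filesrc
```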

Hi danieel, Greg2,
For the gstreamer implementation, the best case is to have nvcamerasrc as the input source. It is for Bayer sensors and outputs video/x-raw(memory:NVMM). If that is not your source, we suggest using the MMAPIs, which have an NvBuffer implementation.