HW Accelerated Video Encoding on Jetson TX2

Hi,
I’m a total beginner with GStreamer and Jetson TX2 HW encoding. I’m trying to build an application around a GStreamer video stream. My pipeline is divided into two branches. The first takes video input from an HDMI camera connected to the TX2, encodes the raw video to H264/H265, and sends the stream to my target computer over UDP. The second feeds another copy of the stream to appsink so that other programs can use the frame data for various background processes.
Here is my pipeline:
Host(TX2):
gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! tee name=te ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420,width=1920,height=1080' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)I420' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! appsink te. ! queue ! 'video/x-raw(memory:NVMM),format=(string)I420' ! omxh264enc ! video/x-h264,width=1920,height=1080,stream-format=byte-stream,bitrate=5000 ! rtph264pay ! udpsink host=192.168.1.10 port=8009

Target(Windows 10):
gst-launch-1.0 udpsrc port=8009 ! application/x-rtp, clock-rate=90000, payload=96, framerate=30/1 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

This pipeline runs for a while and then dies on its own after about 30 seconds on average. My goal is to get video from the HDMI camera, encode it to H264/H265 in hardware to reduce CPU load, and finally send the encoded video via UDP. I’m aiming for 30 frames/s at around 2-3 Mbps bitrate.
How am I supposed to know whether the video stream is actually being encoded in HW? Is it by using nvvidconv and memory:NVMM in my pipeline? I came across some blog posts saying HW encoding is only enabled for a CSI camera (i.e. nvcamerasrc, not v4l2src). Thanks

Hi,
Running appsink in gst-launch-1.0 looks to have a memory-leak issue. Please refer to
https://devtalk.nvidia.com/default/topic/1049483/jetson-tx2/i-ve-set-the-maximum-buffer-for-appsink-but-memory-consumption-keeps-growing/post/5327311/#5327311

Or you can run with fakesink:

gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! tee name=te ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420,width=1920,height=1080' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)NV12' ! fakesink te. ! queue ! 'video/x-raw(memory:NVMM),format=(string)I420' ! omxh264enc ! video/x-h264,width=1920,height=1080,stream-format=byte-stream,bitrate=5000 ! rtph264pay ! udpsink host=192.168.1.10 port=8009

Also, nvvidconv supports video/x-raw,format=NV12 on its source pad, so you can remove videoconvert from the pipeline.

The omxh264enc plugin uses the HW encoding engine, so your pipeline is definitely using HW H264 encoding.

Thanks a lot for your prompt response. I’ve tried the pipeline you recommended, and it works even better now. Last time you recommended fakesink instead of appsink, but I’m not sure how to grab samples using fakesink. I can grab samples using gst_app_sink_pull_sample() and gst_app_sink_pull_preroll(), but how can I get the same kind of samples from fakesink? Since I still couldn’t figure out how to use fakesink, I’m managing with appsink despite its high memory usage.
I ran a couple of tests on the pipeline and found that the UDP stream plays almost flawlessly on the target computer, but there is a massive amount of latency in the other branch, which goes out via appsink. I’m using the appsink samples in tracking software. I used GST-SHARK (https://github.com/RidgeRun/gst-shark) to measure latency and found a slight irregularity in the appsink stream. I’m not sure whether this is the right place to ask, but I’ve found this forum very effective. Is there a better way to measure latency between two consecutive elements and pads? Excuse me for being a bit greedy, but I’m not yet satisfied with the stream received on the other end, my tracking software.

My Pipeline:

GST_DEBUG="GST_TRACER:7" GST_TRACERS="interlatency" gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! tee name=te ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420, width=1920, height=1080' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)NV12' ! appsink te. ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420' ! omxh264enc bitrate=3000000 preset-level=0 control-rate=2 insert-sps-pps=true ! video/x-h264, width=1920, height=1080, stream-format=byte-stream, bitrate=5000 ! rtph264pay ! udpsink host=192.168.1.10 port=8002

Latency Data Found Using gst-shark:

0:00:00.066932864  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x42b4b0 (cpuusage)
0:00:00.067052480  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x42b570 (graphic)
0:00:00.067095936  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x42b630 (proctime)
0:00:00.067133376  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x42b6f0 (interlatency)
0:00:00.067172672  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x589010 (scheduletime)
0:00:00.067208096  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x5890d0 (framerate)
0:00:00.067343136  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x589190 (queuelevel)
0:00:00.067387680  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x589250 (bitrate)
0:00:00.067446560  7312       0x587300 DEBUG             GST_TRACER gsttracer.c:163:gst_tracer_register:<registry0> update existing feature 0x589310 (buffer)
0:00:00.067941248  7312       0x587300 TRACE             GST_TRACER gsttracerrecord.c:110:gst_tracer_record_build_format: interlatency.class, from_pad=(structure)"scope\,\ type\=\(GType\)NULL\,\ related-to\=\(GstTracerValueScope\)GST_TRACER_VALUE_SCOPE_PAD\;", to_pad=(structure)"scope\,\ type\=\(GType\)NULL\,\ related-to\=\(GstTracerValueScope\)GST_TRACER_VALUE_SCOPE_PAD\;", time=(structure)"scope\,\ type\=\(GType\)NULL\,\ related-to\=\(GstTracerValueScope\)GST_TRACER_VALUE_SCOPE_PROCESS\;";
0:00:00.068086720  7312       0x587300 DEBUG             GST_TRACER gsttracerrecord.c:124:gst_tracer_record_build_format: new format string: interlatency, from_pad=(string)%s, to_pad=(string)%s, time=(string)%s;
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Framerate set to : 60 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4 
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
0:00:00.303831136  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv0_src, time=(string)0:00:00.020850016;
0:00:00.303974656  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_0, time=(string)0:00:00.021033568;
0:00:00.304070752  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_1, time=(string)0:00:00.021131296;
0:00:00.304106624  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue0_src, time=(string)0:00:00.021156224;
0:00:00.304169152  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue1_src, time=(string)0:00:00.021220384;
0:00:00.304212736  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter0_src, time=(string)0:00:00.021272416;
0:00:00.304248256  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter2_src, time=(string)0:00:00.021308032;
===== MSENC blits (mode: 1) into tiled surfaces =====
0:00:00.316723712  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)omxh264enc-omxh264enc0_src, time=(string)0:00:00.033766400;
0:00:00.316837824  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter3_src, time=(string)0:00:00.033899328;
0:00:00.317548864  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.034606944;
0:00:00.317624736  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.034606944;
0:00:00.322925760  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv0_src, time=(string)0:00:00.018679520;
0:00:00.323037856  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_0, time=(string)0:00:00.018822240;
0:00:00.323104160  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_1, time=(string)0:00:00.018892224;
0:00:00.323203488  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue1_src, time=(string)0:00:00.018977024;
0:00:00.323283648  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter2_src, time=(string)0:00:00.019068576;
0:00:00.333010848  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv1_src, time=(string)0:00:00.050055008;
0:00:00.333111744  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter1_src, time=(string)0:00:00.050174944;
0:00:00.333151968  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)appsink0_sink, time=(string)0:00:00.050174944;

0:00:00.336747072  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.053785888;
0:00:00.336835968  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.053785888;
0:00:00.337098912  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue0_src, time=(string)0:00:00.032875424;
0:00:00.337184256  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter0_src, time=(string)0:00:00.032973632;
0:00:00.337194496  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.054259136;
0:00:00.337232960  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.054259136;
0:00:00.337726464  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv0_src, time=(string)0:00:00.014478528;
0:00:00.337792448  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_0, time=(string)0:00:00.014560064;
0:00:00.337847840  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_1, time=(string)0:00:00.014615552;
0:00:00.337912160  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue1_src, time=(string)0:00:00.014675296;
0:00:00.337969536  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter2_src, time=(string)0:00:00.014737536;
0:00:00.344455680  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)omxh264enc-omxh264enc0_src, time=(string)0:00:00.040223296;
0:00:00.344610176  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter3_src, time=(string)0:00:00.040395552;
0:00:00.344760544  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.040544064;
0:00:00.344812352  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.040544064;
0:00:00.347629504  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)omxh264enc-omxh264enc0_src, time=(string)0:00:00.024389088;
0:00:00.347691872  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter3_src, time=(string)0:00:00.024461984;
0:00:00.347800704  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.024569760;
0:00:00.347832224  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.024569760;
0:00:00.354766720  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv0_src, time=(string)0:00:00.011946368;
0:00:00.354835296  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_0, time=(string)0:00:00.012036128;
0:00:00.354916608  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_1, time=(string)0:00:00.012117824;
0:00:00.355004864  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue1_src, time=(string)0:00:00.012201216;
0:00:00.355045696  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter2_src, time=(string)0:00:00.012247648;
0:00:00.356760128  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv1_src, time=(string)0:00:00.052535424;
0:00:00.356819968  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter1_src, time=(string)0:00:00.052611392;
0:00:00.356844352  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)appsink0_sink, time=(string)0:00:00.052611392;

0:00:00.356926304  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue0_src, time=(string)0:00:00.033698144;
0:00:00.356959232  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter0_src, time=(string)0:00:00.033732576;
0:00:00.358536576  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)omxh264enc-omxh264enc0_src, time=(string)0:00:00.015730432;
0:00:00.358580256  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter3_src, time=(string)0:00:00.015782752;
0:00:00.358636448  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.015839552;
0:00:00.358655808  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.015839552;
0:00:00.370633056  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv0_src, time=(string)0:00:00.007932224;
0:00:00.370691040  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_0, time=(string)0:00:00.008007776;
0:00:00.370723648  7312       0x615050 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)te_src_1, time=(string)0:00:00.008041600;
0:00:00.370777472  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue1_src, time=(string)0:00:00.008091968;
0:00:00.370812896  7312       0x6152d0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter2_src, time=(string)0:00:00.008129728;
0:00:00.371763648  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv1_src, time=(string)0:00:00.048524736;
0:00:00.371816736  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter1_src, time=(string)0:00:00.048588256;
0:00:00.371840896  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)appsink0_sink, time=(string)0:00:00.048588256;

0:00:00.371926624  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)queue0_src, time=(string)0:00:00.029126464;
0:00:00.371967424  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter0_src, time=(string)0:00:00.029169376;
0:00:00.374407744  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)omxh264enc-omxh264enc0_src, time=(string)0:00:00.011713184;
0:00:00.374463968  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)capsfilter3_src, time=(string)0:00:00.011780288;
0:00:00.374522720  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)rtph264pay0_src, time=(string)0:00:00.011839840;
0:00:00.374546720  7312   0x7f880030a0 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)udpsink0_sink, time=(string)0:00:00.011839840;
0:00:00.386538976  7312       0x615190 TRACE             GST_TRACER :0:: interlatency, from_pad=(string)v4l2src0_src, to_pad=(string)nvvconv1_src, time=(string)0:00:00.043723040;

Hi,
You may refer to this RTSP case and adapt it to yours:
https://devtalk.nvidia.com/default/topic/1043770/jetson-tx2/problems-minimizing-latency-and-maximizing-quality-for-rtsp-and-mpeg-ts-/post/5295828/#5295828

We have seen latency close to the configured value when running this case.

Yeah, currently I’m also measuring latency with the glass-to-glass method.

There are a couple of things that are still not clear to me. When memory:NVMM is added next to video/x-raw in the code, fakesink/appsink fails to grab a full-size frame; it delivers frames at smaller sizes, resulting in abnormal tracking. But if memory:NVMM is not added to the caps of the second video converter (nvvidconv), fakesink/appsink can grab full-size frames. In the same way, if fakesink/appsink is replaced by autovideosink in the pipeline, it fails to play video on the TX2 display, but once memory:NVMM is added next to video/x-raw it plays flawlessly. How can I use appsink/fakesink properly so that I get all the advantages of the TX2 hardware? Is there any sample code/demonstration I can adapt?

This is the pipeline implemented in code:

gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! tee name=te ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420,width=1920,height=1080' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)NV12' ! fakesink te. ! queue ! 'video/x-raw(memory:NVMM),format=(string)I420' ! omxh264enc ! video/x-h264,width=1920,height=1080,stream-format=byte-stream,bitrate=5000 ! rtph264pay ! udpsink host=192.168.1.10 port=8009

Here is a portion of the code where “memory:NVMM” is used:

if ((source = gst_element_factory_make("v4l2src", "source")) == NULL) {
#ifdef DEBUG_BUILD
	LOG4CXX_DEBUG(logger, "[" << id << "] CSIDynamicPipeline::init() failed. Error with gst_element_factory_make('v4l2src')");
#endif
	return DONE;
}

#ifdef fakesink
if ((sourceCaps = gst_caps_from_string("video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=25/1")) == NULL) {
#ifdef DEBUG_BUILD
	LOG4CXX_DEBUG(logger, "[" << id << "] CSIDynamicPipeline::init() failed. Error with gst_caps_from_string(sourceCaps)");
#endif
	return DONE;
}
#else
if ((sourceCaps = gst_caps_from_string("video/x-raw, width=(int)1280, height=(int)720, framerate=25/1")) == NULL) {
#ifdef DEBUG_BUILD
	LOG4CXX_DEBUG(logger, "[" << id << "] CSIDynamicPipeline::init() failed. Error with gst_caps_from_string(sourceCaps)");
#endif
	return DONE;
}
#endif

fakesink is working, but slower than appsink. This is how I grabbed frames from the stream:

GstPad *sinkpad = gst_element_get_static_pad(fakesink, "sink");
gst_pad_add_probe(sinkpad, GST_PAD_PROBE_TYPE_BUFFER, cb_have_data, NULL, NULL);
gst_object_unref(sinkpad);

What am I doing wrong here? Is there a better way to implement this? Thanks

Hi,
For accessing NvBuffer [video/x-raw(memory:NVMM)] in appsink, you may refer to this post:
https://devtalk.nvidia.com/default/topic/1037450/jetson-tx2/use-gstreamer-or-tegra_multimedia_api-to-decode-video-would-be-more-efficient-and-increase-throughput-/post/5270860/#5270860

Apologies!
A line of code that had been added for test purposes was introducing 500 ms of extra latency; commenting it out solved the excessive latency.
I’m still struggling with fakesink and appsink. I’ll give the above link a try.

Hi,

There are some irregularities I’ve found recently while receiving the GStreamer video from the Jetson TX2 in QGroundControl (https://github.com/mavlink/qgroundcontrol) on a different computer.
For H264 encoded with omxh264enc there is a snow effect in the video, which is totally unexpected, while severe distortion is observed in H265 video encoded by omxh265enc. Latency was my primary issue, but now video quality is something I can’t ignore. When the same encoders are used but videoconvert replaces nvvidconv, there are no big issues other than latency; the quality seems much better than what is rendered via nvvidconv. Are there extra properties or caps that need to be added to get rid of this poor video quality? To check the pipeline I play the video in the GStreamer built-in player and in VLC, but when properties like control-rate, bitrate, iframeinterval, vbv-size, profile, etc. are set on the encoder (omxh264enc/omxh265enc), the video can’t be played in VLC. Is there another way to play the stream in a video player? I can’t upload pictures/video here; an attached picture/video could explain the problem better. How do I get rid of this video quality problem? Thanks again.

Here is the pipeline:

gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! tee name=te ! queue ! 'video/x-raw(memory:NVMM), format=(string)I420,width=1920,height=1080' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)NV12' ! fakesink te. ! queue ! 'video/x-raw(memory:NVMM),format=(string)I420' ! omxh265enc control-rate=2 bitrate=2500000 ! video/x-h265,width=1920,height=1080,stream-format=byte-stream,bitrate=5000 ! rtph265pay ! udpsink host=192.168.1.10 port=8009

Hi,
Bitrate and quality are a tradeoff. For 1080p30 we have observed that even at the worst quality (all frames quantized at 50), the bitrate is ~1.5 Mbps. See the “Constant Bitrate help” thread on the Jetson TX1 forum.

It is 2.5 Mbps in your case, which seems too low.

Thanks.
It doesn’t always have to be 2.5 Mbps; I’m using 2.5 Mbps just to reduce the packet size, and it can be increased to 4 Mbps. So you’re saying that if the bitrate is increased there won’t be distorted video frames? Starting from 2.5 Mbps I’ve tried up to 35 Mbps. For H264 there are mosaic and snow effects in the video; for H265 there is distorted video and sometimes extra latency. The distortion is observed especially when the camera sees greenery. What would be an ideal way to stream video over UDP and RTP? I can stream with both protocols, but not at a very good/expected quality. What’s your opinion on this? Looking forward.

One suggestion: make sure that whatever you are using to display the video is
working properly. We were using our own TX2-based decoder that had a buffer
overflow with high-bitrate video, and we thought it was an encoder problem.
The symptoms sound somewhat similar to what you are describing.

Often we would capture a stream into a file, then split the file out into
individual JPEG images and look at them frame by frame to verify video
quality. We also have a capture program that verifies TS/RTP counters to
make sure no packets are lost due to network problems.