Encoding Time on TX2 NX

Hi,

I’m running some tests on the TX2 NX to measure time to encode raw video files, as well as power consumption and resource utilisation.

For example, I would run the following pipeline:

gst-launch-1.0 filesrc location=original_cut.raw ! 'video/x-raw, width=(int)1440, height=(int)1080, framerate=(fraction)60/1' ! rawvideoparse width=1440 height=1080 format=gray8 framerate=60 ! nvvidconv ! nvv4l2h265enc MeasureEncoderLatency=true control-rate=1 qp-range=0,22:0,22:0,22 quant-i-frames=0 quant-p-frames=0 ! h265parse ! qtmux ! filesink location=/dev/null

The total time indicated by GStreamer was:

“Execution ended after 0:00:03.267317578”

However, the log file created by the “MeasureEncoderLatency” option gives per-frame encoding times that average out to 17.5 ms.

The video file is 7 seconds long and contains 420 frames. 420 frames x 17.5 ms is 7.35 seconds, which is much longer than the 3.26 seconds indicated by GStreamer at the end of execution.

What is happening here? Which encoding time is correct?

Hi,
A file source is not a live source, so there is no interval between frames; the frames are pushed to the encoder continuously, as fast as they can be read. It is like this command:

gst-launch-1.0 videotestsrc num-buffers=420 ! 'video/x-raw, width=(int)1440, height=(int)1080, framerate=(fraction)60/1,format=GRAY8' ! nvvidconv ! nvv4l2h265enc MeasureEncoderLatency=true control-rate=1 qp-range=0,22:0,22:0,22 quant-i-frames=0 quant-p-frames=0 ! h265parse ! qtmux ! filesink location=/dev/null

For a live source, there is an interval between frames: at 60 fps, the source generates a frame every 16.66 ms. It is like this command:

gst-launch-1.0 videotestsrc is-live=1 num-buffers=420 ! 'video/x-raw, width=(int)1440, height=(int)1080, framerate=(fraction)60/1,format=GRAY8' ! nvvidconv ! nvv4l2h265enc MeasureEncoderLatency=true control-rate=1 qp-range=0,22:0,22:0,22 quant-i-frames=0 quant-p-frames=0 ! h265parse ! qtmux ! filesink location=/dev/null
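As a rough sanity check, with is-live=1 the source paces its buffers at the negotiated framerate, so the wall-clock run time should be approximately

420 frames x 16.66 ms/frame ≈ 7.0 s

which matches the duration of the original 7-second clip. Without is-live, frames are pushed as fast as the encoder accepts them, which is why the file-source run finishes in about 3.3 seconds.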

Hi @DaneLLL,

You did not answer my question. Once GStreamer finishes encoding, I get two different computation times. One line tells me:

“Execution ended after 0:00:03.267317578”

AND I also get a log file (which I have attached). If I add up all of the encoding times in the log file, I get a computation time of 9.526 seconds.
gst_v4l2_enc_latency_9882.log (24.4 KB)
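For reference, the sum can be reproduced with a one-liner along these lines (the field index here is an assumption; it depends on the exact layout of the latency log, so adjust it to point at the column holding the per-frame latency in ms):

awk '{ sum += $4 } END { printf "total = %.3f ms over %d lines\n", sum, NR }' gst_v4l2_enc_latency_9882.log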

So my question is, which encoding time is the correct one?

Hi,

The “Execution ended” value is the accurate total encoding time. The per-frame values from latency profiling include extra delay caused by reference frames. When encoding to an IDR P P P P… structure, the encoder has to keep one reference frame: the 1st frame is sent to the encoder and held, and only after the 2nd frame is sent is the 1st frame returned, with the 2nd frame held in its place. For a more complicated structure such as I P B B P B B P…, the latency value can vary over a wider range.
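To put rough numbers on this using the figures from the earlier post: the file-source run finishes 420 frames in about 3.27 s, i.e. roughly 3.27 s / 420 ≈ 7.8 ms of wall-clock time per frame. Because one frame is always held as a reference, about two frames are in flight at any moment, so each frame's measured latency spans roughly two of those periods, which is in the same ballpark as the ~17.5 ms per-frame average from the log. The per-frame latencies therefore overlap in wall-clock time, and summing them over-counts the total; that is why the sum is much larger than the “Execution ended” time.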

For the total encoding time, please check the “Execution ended” print, or use fpsdisplaysink to show the runtime framerate.
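For example, the pipeline from the first post could be adapted along these lines (a sketch; the encoder properties are omitted, fakesink is used as the video-sink so the encoded stream is simply discarded, and -v makes GStreamer print the fps messages):

gst-launch-1.0 -v filesrc location=original_cut.raw ! rawvideoparse width=1440 height=1080 format=gray8 framerate=60 ! nvvidconv ! nvv4l2h265enc ! h265parse ! fpsdisplaysink text-overlay=false video-sink=fakesink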

Hi @DaneLLL,

Thank you for the answer. I still don’t fully understand the latency times given inside the log file (from my previous post in this thread). I understand it shows latency per frame, but the sum of the latencies in the log file does not match the total execution time.

Does the hardware encoder encode frames in series (one after the other) or in parallel (i.e. multiple frames encoded simultaneously)?

Hi,
For each source it encodes in series. For more information, please check the implementation of the MeasureEncoderLatency property. The gst-v4l2 plugin is open source; you may download it and check the implementation. The source code is in
https://developer.nvidia.com/embedded/l4t/r32_release_v6.1/sources/t186/public_sources.tbz2
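For example, after downloading public_sources.tbz2 from the link above, something like the following extracts the gst-v4l2 sources (the Linux_for_Tegra/source/public layout and the gst-nvvideo4linux2_src.tbz2 package name are assumptions based on typical R32 releases and may differ):

tar -xjf public_sources.tbz2
cd Linux_for_Tegra/source/public
tar -xjf gst-nvvideo4linux2_src.tbz2
grep -rn "MeasureEncoderLatency" .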