Xavier AGX: Video encoding crash

We have a very reproducible issue when encoding multiple MP4 videos. This reproduces on Xavier AGX, on the latest and previous versions of JetPack.
Our sample application creates small videos by successively creating encoders and freeing them. Our real-life use case is to split long videos into manageable sizes. After video 165, we get these errors:

Starting NVMME enc0 video encoding to 166-test-vid.mp4
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
875967048
842091865
H264: Profile = 77, Level = 0 
NVMME enc0 beginEncoding succeeded for 166-test-vid.mp4
NvMMLiteNvMediaCreate:3702: NvMediaDeviceCreate failed 
NVMEDIA_ENC: 3996: Failed to create NvMedia encoder 
VENC: NvMMLiteVideoEncDoWork: 4283: BlockSide error 0x4
NvVideoEnc: BlockError 
NvVideoEncTransferCaptureBufferToBlock: DoWork failed line# 631 
NvVideoEncTransferOutputBufferToBlock: DoWork failed line# 667 
NvVideoEnc: NvVideoEncTransferOutputBufferToBlock TransferBufferToBlock failed Line=678
NvVideoEncTransferCaptureBufferToBlock: DoWork failed line# 631 
NvVideoEncTransferOutputBufferToBlock: DoWork failed line# 667 
NvVideoEnc: NvVideoEncTransferOutputBufferToBlock TransferBufferToBlock failed Line=678
NvVideoEncTransferOutputBufferToBlock: DoWork failed line# 667 
NvVideoEnc: NvVideoEncTransferOutputBufferToBlock TransferBufferToBlock failed Line=678
[NvMediaDeviceCreate:95] Unable to create TVMR device
[ERROR] (/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:178) <enc0> Output Plane:Error while DQing buffer: Invalid argument
[ERROR] (/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:178) <enc0> Capture Plane:Error while DQing buffer: Invalid argument
NVMME enc0 failed to dequeue buffer from encoder capture plane
NVMME enc0 deInit()
NVMME enc0 endEncoding() for 166-test-vid.mp4

Attached are the complete code and Makefile; they should build and run on a Jetson with JetPack.
vidrepro.tar.gz (32.7 KB)

Note: Our encoder class relies on the files provided by JetPack in /usr/src/jetson_multimedia_api/samples/common/classes .

Thank you for your help!

Hi,
Regarding “the latest and previous versions of JetPack”, do you mean JP4.4.1 and JP4.4?

We see this on the current JetPack version (4.4), and we also saw it on 4.3.

Hi lawrence8pqxb,

We tried to build your code with JP-4.4.1 on Xavier, but got the errors below:

/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:125: undefined reference to `jpeg_finish_decompress'
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:131: undefined reference to `jpeg_read_raw_data'
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:134: undefined reference to `jpeg_finish_decompress'
out/NvJpegDecoder.o: In function `NvJPEGDecoder::decodeToBuffer(NvBuffer**, unsigned char*, unsigned long, unsigned int*, unsigned int*, unsigned int*)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:171: undefined reference to `jpeg_mem_src'
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:174: undefined reference to `jpeg_read_header'
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:211: undefined reference to `jpeg_start_decompress'
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:234: undefined reference to `jpeg_finish_decompress'
out/NvJpegDecoder.o: In function `NvJPEGDecoder::decodeIndirect(NvBuffer*, unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:291: undefined reference to `jpeg_read_raw_data'
out/NvJpegDecoder.o: In function `NvJPEGDecoder::decodeDirect(NvBuffer*, unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvJpegDecoder.cpp:406: undefined reference to `jpeg_read_raw_data'
out/NvVideoEncoder.o: In function `NvVideoEncoder::setEncoderCommand(int, int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvVideoEncoder.cpp:202: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::dqBuffer(v4l2_buffer&, NvBuffer**, NvBuffer**, unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:126: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::qBuffer(v4l2_buffer&, NvBuffer*)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:252: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::mapOutputBuffers(v4l2_buffer&, int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:282: undefined reference to `NvBufferGetParams'
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:294: undefined reference to `NvBufferMemMap'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::unmapOutputBuffers(int, int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:330: undefined reference to `NvBufferMemUnMap'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::getFormat(v4l2_format&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:357: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::setFormat(v4l2_format&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:368: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::getCrop(v4l2_crop&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:393: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::setSelection(unsigned int, unsigned int, v4l2_rect&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:419: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::reqbufs(v4l2_memory, unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:495: undefined reference to `v4l2_ioctl'
out/NvV4l2ElementPlane.o:/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:542: more undefined references to `v4l2_ioctl' follow
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::startDQThread(void*)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:863: undefined reference to `pthread_create'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::stopDQThread()':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:879: undefined reference to `pthread_join'
out/NvV4l2ElementPlane.o: In function `NvV4l2ElementPlane::waitForDQThread(unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2ElementPlane.cpp:914: undefined reference to `pthread_join'
out/NvV4l2Element.o: In function `NvV4l2Element::NvV4l2Element(char const*, char const*, int, int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:60: undefined reference to `v4l2_open'
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:72: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o: In function `NvV4l2Element::~NvV4l2Element()':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:94: undefined reference to `v4l2_close'
out/NvV4l2Element.o: In function `NvV4l2Element::dqEvent(v4l2_event&, unsigned int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:106: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o: In function `NvV4l2Element::setControl(unsigned int, int)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:141: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o: In function `NvV4l2Element::getControl(unsigned int, int&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:163: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o: In function `NvV4l2Element::setExtControls(v4l2_ext_controls&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:182: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o: In function `NvV4l2Element::getExtControls(v4l2_ext_controls&)':
/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:200: undefined reference to `v4l2_ioctl'
out/NvV4l2Element.o:/usr/src/jetson_multimedia_api/samples/common/classes/NvV4l2Element.cpp:225: more undefined references to `v4l2_ioctl' follow
collect2: error: ld returned 1 exit status
Makefile:39: recipe for target 'out/vid' failed
make: *** [out/vid] Error 1

I put your code at this path: /usr/src/jetson_multimedia_api/samples/vidrepro and ran “sudo make”, but got errors. Please check your code. Thanks!

I have a new tar; the Makefile has been updated and now works (tested on the host, unclear how it worked for me earlier). vidrepro_fix.tar.gz (32.7 KB)

Hi lawrence8pqxb,

The new tar builds successfully and we can reproduce your issue.
We will investigate the issue and update you.

Hi,
Your use case is similar to running these commands:

01_video_encode$ gst-launch-1.0 videotestsrc num-buffers=100 ! video/x-raw,width=1920,height=1080 ! filesink location=~/a.yuv
01_video_encode$ ./video_encode /home/nvidia/a.yuv 1920 1080 H264 /home/nvidia/a.h264 -mem_type_oplane 1 -s 900 --max-perf

We do not observe any issue when running the default sample. It looks to be an issue in your code; the EOS handling looks different. Please refer to the default sample and check further.
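
For reference, a minimal sketch of how the default sample signals EOS, assuming the NvVideoEncoder/NvV4l2ElementPlane classes from jetson_multimedia_api (see 01_video_encode/video_encode_main.cpp for the exact handling): an output-plane buffer is queued with zero bytes used, and the capture-plane DQ thread exits once it dequeues the resulting 0-size buffer.

#include "NvVideoEncoder.h"
#include <cstring>

// Sketch only: assumes frames have already been queued, so an output-plane
// buffer can be dequeued here.
bool signalEos(NvVideoEncoder* enc) {
  v4l2_buffer v4l2Buffer;
  v4l2_plane planes[MAX_PLANES];
  memset(&v4l2Buffer, 0, sizeof(v4l2Buffer));
  memset(planes, 0, sizeof(planes));
  v4l2Buffer.m.planes = planes;

  // Get a free output-plane buffer and mark every plane as empty.
  NvBuffer* buffer = nullptr;
  if (enc->output_plane.dqBuffer(v4l2Buffer, &buffer, nullptr, 10) != 0) {
    return false;
  }
  for (uint32_t i = 0; i < buffer->n_planes; i++) {
    buffer->planes[i].bytesused = 0;
    v4l2Buffer.m.planes[i].bytesused = 0;
  }

  // Queueing a 0-byte buffer is the EOS marker.
  if (enc->output_plane.qBuffer(v4l2Buffer, nullptr) != 0) {
    return false;
  }

  // The capture-plane callback then sees a 0-size buffer and the DQ thread
  // exits; wait for it before tearing anything down.
  return enc->capture_plane.waitForDQThread(2000) == 0;
}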

For debugging, you can check for the memory leak by executing:

# cat /sys/kernel/debug/nvmap/iovmm/clients

Hi,
We have checked the code and it looks like you don’t create/destroy the encoder in the loop. In every iteration, please do:

// Instantiate a video encoder
encoder_ = NvVideoEncoder::createVideoEncoder(name_.c_str());
encoder_->capture_plane.waitForDQThread(MAX_WAIT_MS);
delete encoder_;
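
For completeness, a rough sketch of one create/encode/destroy cycle, using only calls that already appear in this thread. Buffer counts, the callback body, and error handling are placeholders; adapt them to your own encoder class.

#include "NvVideoEncoder.h"

// Capture-plane callback: consume one encoded buffer, re-queue it, and stop
// the DQ thread when the 0-byte EOS buffer arrives.
static bool onCaptureDq(struct v4l2_buffer* v4l2Buf, NvBuffer* buffer,
                        NvBuffer* /*shared*/, void* arg) {
  NvVideoEncoder* enc = static_cast<NvVideoEncoder*>(arg);
  if (buffer->planes[0].bytesused == 0) {
    return false;  // EOS: stop the DQ thread
  }
  // ... write buffer->planes[0].data to the current MP4 here ...
  return enc->capture_plane.qBuffer(*v4l2Buf, nullptr) == 0;
}

bool encodeOneFile(const char* name, uint32_t w, uint32_t h) {
  NvVideoEncoder* enc = NvVideoEncoder::createVideoEncoder(name);
  if (enc == nullptr) {
    return false;
  }

  // Set the capture-plane (bitstream) format before the output-plane format,
  // as in the sample.
  enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, w, h, 2 * 1024 * 1024);
  enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, w, h);
  enc->output_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
  enc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
  enc->output_plane.setStreamStatus(true);
  enc->capture_plane.setStreamStatus(true);
  enc->capture_plane.setDQThreadCallback(onCaptureDq);
  enc->capture_plane.startDQThread(enc);

  // ... queue the empty capture-plane buffers, feed frames on the output
  //     plane, then queue a 0-byte output buffer to signal EOS ...

  enc->capture_plane.waitForDQThread(2000);  // drain before destroying
  delete enc;                                // frees the NvMedia encoder
  return true;
}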

Hi Dane,

As you say, if we delete the encoder completely, the memory leak goes away and we can create and delete many videos.
The issue with doing that is that deleting the encoder is a slow process: in our tests it takes between 8 and 12 ms to delete the encoder (but only 1 to 2 ms to recreate it).

Would you be able to check whether there is a way for us to reuse the encoder and buffers, so that we only stop the current video and start a new one? For real-time operation on the robot this time matters.

Thank you,

Hi,
It is more like a feature request. We have to check and evaluate this, and see if it can be supported in a future release. On the latest r32.4.3 or r32.4.4, we would suggest initializing/destroying NvVideoEncoder. This flow is verified and tested by the SQA team. It is stable, although there is some latency.

Hi,

Please share more information about the use case. We would like to know why you have to initialize/de-initialize the encoder, and why you cannot keep the encoder always live and split into different files when seeing key frames (IDR frames).

Hi Dane,

You mentioned: “why not keep encoder always live and split into different files”. That would be ideal, but I do not know how to do that.
Our use case is a robot, powered by the NVIDIA Jetson, with several cameras (4K, high frame rate) meant to operate and record for hours at a time. Video splitting is done to provide protection in case part of the disk gets corrupted or the program crashes, and to help upload and manage files of reduced size.

The code we have tries to implement your suggestion, yet this is exactly what gives us the memory leak. Here is what we do when we want to split videos:

v4l2_buffer v4l2Buffer{};
std::array<v4l2_plane, MAX_PLANES> planes{};

v4l2Buffer.m.planes = planes.data();
v4l2Buffer.m.planes[0].bytesused = 0;

if (frameIndex_ < encoder_->output_plane.getNumBuffers()) {
  v4l2Buffer.index = uint32_t(frameIndex_);
} else {
  NvBuffer* buffer;
  int ret = encoder_->output_plane.dqBuffer(v4l2Buffer, &buffer, nullptr, NUM_RETRIES);
  if (0 != ret) {
    vlog_error(VCAT_VID, "NVMME %s error %d dequeueing EOS frame buffer %zu", name_.c_str(), ret,
               frameIndex_);
    return false;
  }
}

int ret = encoder_->output_plane.qBuffer(v4l2Buffer, nullptr);
if (0 != ret) {
  vlog_error(VCAT_VID, "NVMME %s error %d queueing EOS frame buffer %zu", name_.c_str(), ret,
             frameIndex_);
  return false;
}

// Wait for encoding to finish then destroy output/capture streams
CALL_OR_FALSE(encoder_->capture_plane.waitForDQThread(MAX_WAIT_MS));
encoder_->output_plane.deinitPlane();
encoder_->capture_plane.deinitPlane();

That code attempts to drain the queue of buffers and clean up. And this is how we start the next video:

  const auto w = uint32_t(width_);
  const auto h = uint32_t(height_);
  const auto fps = uint32_t(fps_);

  // Configure the video encoder
  CALL_OR_FALSE(encoder_->setCapturePlaneFormat(V4L2_PIX_FMT_H264, w, h, MAX_OUT_BYTES));
  CALL_OR_FALSE(encoder_->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, w, h));
  CALL_OR_FALSE(encoder_->setMaxPerfMode(1));
  CALL_OR_FALSE(encoder_->setHWPresetType(GetPresetType(profile_)));
  CALL_OR_FALSE(encoder_->setProfile(V4L2_MPEG_VIDEO_H264_PROFILE_MAIN));
  CALL_OR_FALSE(encoder_->setIFrameInterval(uint32_t(fps_)));
  CALL_OR_FALSE(encoder_->setFrameRate(uint32_t(fps_), 1));
  if (profile_ == Profile::LOSSLESS) {
    CALL_OR_FALSE(encoder_->setConstantQp(0));
  } else {
    CALL_OR_FALSE(encoder_->setBitrate(GetH264Bitrate(w, h, fps, profile_)));
  }
  CALL_OR_FALSE(encoder_->output_plane.setupPlane(V4L2_MEMORY_MMAP, INPUT_BUFFERS, true, false));
  CALL_OR_FALSE(encoder_->capture_plane.setupPlane(V4L2_MEMORY_MMAP, OUTPUT_BUFFERS, true, false));
  encoder_->capture_plane.setDQThreadCallback(EncoderCapturePlaneDequeued);
  CALL_OR_FALSE(encoder_->output_plane.setStreamStatus(true));
  CALL_OR_FALSE(encoder_->capture_plane.setStreamStatus(true));
  encoder_->capture_plane.startDQThread(this);

  // Enqueue all the empty capture plane buffers
  for (uint32_t i = 0; i < encoder_->capture_plane.getNumBuffers(); i++) {
    v4l2_buffer v4l2Buffer{};
    std::array<v4l2_plane, MAX_PLANES> planes{};
    v4l2Buffer.index = i;
    v4l2Buffer.m.planes = planes.data();
    CALL_OR_FALSE(encoder_->capture_plane.qBuffer(v4l2Buffer, nullptr));
  }

Are there changes we should make, or something we should do only once at initialization?

Hi,
The H.264 stream can be split at SPS/PPS/IDR. Please refer to running 01_video_encode:

  1. Apply the patch
diff --git a/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp b/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
index d68d4ec..947e48f 100644
--- a/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
+++ b/multimedia_api/ll_samples/samples/01_video_encode/video_encode_main.cpp
@@ -224,6 +224,15 @@ encoder_capture_plane_dq_callback(struct v4l2_buffer *v4l2_buf, NvBuffer * buffe
         if ( (ctx->enableGDR) && (ctx->GDR_out_file_path) && (num_encoded_frames >= ctx->gdr_out_frame_number+1))
             write_encoder_output_frame(ctx->gdr_out_file, buffer);
 
+#define NAL_UNIT_SPS 7
+uint8_t *data_ptr;
+data_ptr = (uint8_t *)buffer->planes[0].data;
+if (*data_ptr == 0 && *(data_ptr+1) == 0 && *(data_ptr+2) == 0 && *(data_ptr+3) == 1) {
+    if ((*(data_ptr+4) & 0x1F) == NAL_UNIT_SPS)
+        printf("[%d]th: x%02x %02x %02x %02x %02x is SPS/PPS/IDR \n",
+            num_encoded_frames, *data_ptr, *(data_ptr+1), *(data_ptr+2), *(data_ptr+3), *(data_ptr+4));
+}
+
     num_encoded_frames++;
 
     if (ctx->report_metadata)

  2. Rebuild the sample and run
nvidia@nvidia:/usr/src/jetson_multimedia_api/samples/01_video_encode$ ./video_encode /home/nvidia/a.yuv 320 240 H264 /home/nvidia/a.264 -idri 30 --insert-spspps-idr
Creating Encoder in blocking mode
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
875967048
842091865
H264: Profile = 66, Level = 51
[1]th: x00 00 00 01 67 is SPS/PPS/IDR
[31]th: x00 00 00 01 67 is SPS/PPS/IDR
[61]th: x00 00 00 01 67 is SPS/PPS/IDR
[91]th: x00 00 00 01 67 is SPS/PPS/IDR
[121]th: x00 00 00 01 67 is SPS/PPS/IDR
[151]th: x00 00 00 01 67 is SPS/PPS/IDR
[181]th: x00 00 00 01 67 is SPS/PPS/IDR
[211]th: x00 00 00 01 67 is SPS/PPS/IDR
[241]th: x00 00 00 01 67 is SPS/PPS/IDR
[271]th: x00 00 00 01 67 is SPS/PPS/IDR
Could not read complete frame from input file
File read complete.
Got 0 size buffer in capture
App run was successful

It sets the IDR interval to 30 and enables inserting SPS/PPS at IDR frames.

You can apply the same approach: when you see SPS/PPS/IDR, terminate the current MP4 and start a new MP4. This yields valid MP4 files.

The IDR interval can be adjusted per your use case. With an IDR interval of 30 on a 30 fps source, you get a split point every second.
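
As an application-side sketch of that rotation: the NAL check below mirrors the patch above, and closeMp4()/openNextMp4()/writeToMp4() are hypothetical placeholders for your own muxing code.

#include <cstddef>
#include <cstdint>

// Hypothetical muxer hooks; replace with your own MP4 handling.
void closeMp4();
void openNextMp4();
void writeToMp4(const uint8_t* data, size_t size);

// True if the encoded buffer starts with an Annex-B start code followed by
// an SPS NAL unit (NAL type 7), i.e. the beginning of an SPS/PPS/IDR group.
static bool startsWithSps(const uint8_t* data, size_t size) {
  if (size < 5) {
    return false;
  }
  const bool startCode =
      data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1;
  return startCode && (data[4] & 0x1F) == 7;
}

// Call this from the capture-plane callback with the encoded buffer
// (buffer->planes[0].data / bytesused).
void onEncodedBuffer(const uint8_t* data, size_t size) {
  if (startsWithSps(data, size)) {
    closeMp4();     // finalize the current file
    openNextMp4();  // start the next one
  }
  writeToMp4(data, size);
}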
