tegra multimedia samples not working properly

I am using Jetson TX2 platform with L4T 32.2.

I am using JetPack 3.3 (JetPack-L4T-3.3-linux-x64_b39.run), since this is the latest JetPack package available for Jetson TX2. However, this package is specified for L4T 28.2.

I followed the instructions provided in the Jetson Linux API Reference: Main Page | NVIDIA Docs to install the required libraries using JetPack.

I could build a few of the applications, but others fail to build due to missing-header errors.

I tried executing one of the built applications, “10_camera_recording”, but it always throws the following error.

Error Message:
Set governor to performance before enabling profiler
Error generated. main.cpp, execute:492 No cameras available

What is the reason for this failure?
Can anyone please help me resolve this issue?


Did you plug in the camera board? Here is the default board.

We also have camera boards from partners.

Hi DaneLLL,

Thanks for your response.

We are using a custom camera board to connect our camera modules to the TX2.

I have verified that the camera modules stream properly with our V4L2-based application.

When I try the sample tegra multimedia applications, I get the error mentioned in the first post.
I am using the JetPack that is compatible with L4T 28.2. Is that a problem?


10_camera_recording is for Bayer sensors going through hardware ISP engine. If your camera is a YUV sensor, please run 12_camera_v4l2_cuda.

Hi DaneLLL,

Thanks for your response.

Yes, our camera sensor is a YUV sensor, and I am able to stream from the camera module with 12_camera_v4l2_cuda.

Now we are trying to encode the camera frames to H.264 and save them to a file. Are there any working samples available for reference?

Can you please share your thoughts on how to proceed with this?


There are two posts for your reference:
tegra_multimedia_API: dq buffer from encoder output_plane can not completed - Jetson TX2 - NVIDIA Developer Forums
CLOSED. Gst encoding pipeline with frame processing using CUDA and libargus - Jetson TX1 - NVIDIA Developer Forums

Hi DaneLLL,

I have used the reference application found in the above-mentioned post to check basic streaming, using the following command:
./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f UYVY

But we face the following error when executing the application.

[ERROR] (NvV4l2ElementPlane.cpp:178) Capture Plane:Error while DQing buffer: Broken pipe
Segmentation fault (core dumped)

We are still debugging the issue. Have you encountered such an issue?


The cpp is based on r28.2.1. Please refer to the code diff and adapt it to your 12_camera_v4l2_cuda.
add_nvvideoencoder.zip (2.77 KB)

Hi DaneLLL,

On debugging the issues, I found that,

  1. The fd retrieved as follows in the start_capture function is always zero:
    ctx->enc->output_plane.dqBuffer(enc_buf, &buffer, NULL, 10);
    fd = enc_buf.m.planes[0].m.fd;

    This leads to the following error,
    [ERROR] (NvV4l2ElementPlane.cpp:178) Output Plane:Error while DQing buffer: Broken pipe
    [ERROR] (NvV4l2ElementPlane.cpp:257) Output Plane:Error while Qing buffer: Device or resource busy
    nvbuf_utils:dmabuf_fd 0 mapped entry NOT found
    Segmentation fault (core dumped)

    However, changing the fd as follows eliminates only the segmentation fault:
    fd = outplane_fd[0];
    Is it correct to assign the fd this way?

Also, I couldn't find much difference between the 28.2.1 and 32.2.1 sample sources. Could you explain which changes between the two revisions we need to concentrate on?


The call sequence of NvVideoEncoder is different. Please check the deviation in 01_video_encode.
On r32.2, it is required to call

if (ctx.output_memory_type == V4L2_MEMORY_DMABUF)
{
    v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    v4l2_buf.memory = V4L2_MEMORY_DMABUF;
    ret = ctx.enc->output_plane.mapOutputBuffers(v4l2_buf, ctx.output_plane_fd[i]);

    if (ret < 0)
    {
        cerr << "Error while mapping buffer at output plane" << endl;
        goto cleanup;
    }
}

and it is not necessary to set the fd manually.

Attached is a patch for r32.2, FYR.
r32_2_add_nvvideoencoder.zip (2.85 KB)

Hi DaneLLL,

Thanks for the patch. We are now able to properly stream, encode, and record video from a single camera module.

Is it possible to extend this support to multiple camera modules?

We can see a sample application (13_multi_camera) based on Argus. Is there any reference available for a V4L2-based application?


You may refer to 10_camera_recording and integrate it into 13_multi_camera.