LibArgus EGLStream to nvivafilter

Is there an example showing LibArgus EGLStream as the source for nvivafilter? We tried adding nvivafilter to the gstVideoEncode example, but the gst_pipeline only processes four frames before it generates a segmentation fault.

Please refer to the sample below:

It runs the pipeline [appsrc ! h264parse ! qtmux ! filesink]. In appsrc, the data path is Argus -> NvVideoEncoder. You may access the video buffers via the NvBuffer APIs.

Thank you for your quick reply. I applied the patch listed in the referenced example, but it does not include the nvivafilter plug-in.

I need the nvivafilter plug-in so that I can add OpenCV and CUDA processing into the GStreamer pipeline. The pipeline works when nvcamerasrc is linked to nvivafilter, but does not work with LibArgus and nveglstreamsrc.

We need LibArgus because there is too much latency with nvcamerasrc, and I need EGLStream events to control the camera. Is it possible to link nveglstreamsrc to nvivafilter? I’ve tried adding a queue in between, and also the nvvidconv plug-in, without success. Thank you for your help.

You do not need nvivafilter. You can call the NvBuffer APIs to get an EGLImage from the buffer's dmabuf fd, process it, and destroy it when done:

// Create an EGLImage from the dmabuf fd of plane 0
ctx.eglimg = NvEGLImageFromFd(ctx.eglDisplay, buffer->planes[0].fd);
// ... run your CUDA/OpenCV processing on the EGLImage here ...
NvDestroyEGLImage(ctx.eglDisplay, ctx.eglimg);
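For reference, the HandleEGLImage() helper in the samples essentially maps the EGLImage into CUDA so a kernel (or a cv::cuda::GpuMat) can touch the pixels without a CPU copy. A minimal sketch, assuming the CUDA driver API EGL interop headers are available (error checking omitted; the function name process_egl_image is illustrative, not from the samples):

```cpp
#include <cuda.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Hedged sketch: map an EGLImage into CUDA device memory in place.
static void process_egl_image(EGLImageKHR egl_image)
{
    CUgraphicsResource resource = NULL;
    CUeglFrame egl_frame;

    // Register the EGLImage with CUDA
    cuGraphicsEGLRegisterImage(&resource, egl_image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    // Get a CUeglFrame describing the planes (pointers, pitch, format)
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);
    cuCtxSynchronize();

    // For a pitch-linear frame, egl_frame.frame.pPitch[0] points to plane 0
    // in device memory; this is where you would launch your CUDA kernel or
    // wrap the plane in a cv::cuda::GpuMat.

    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
}
```

This is the same pattern the tegra_multimedia_api samples use; no buffer ever moves to the CPU.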

I’m new to LibArgus, so where is the function HandleEGLImage described, and what does it do? I’ve found several examples that use it, but it is not clear to me how it would call my CUDA/OpenCV code.

Another question: the video from the camera is Block Linear and I420. I think we need to convert this to Pitch Linear and RGB so it can be used as a GpuMat. Is that correct?

We would like to use the hardware video converter with zero buffer copies. The examples that use the video converter are in V4L2. So it seems like the video pipeline would be as follows:

LibArgus => EGLStream (NVMM?) => V4L2 buffer => video converter => V4L2 buffer => GpuMat => HandleEGLImage?

We are using the video pipeline for a quadcopter camera, so low latency and zero-copy buffers are really important. Is it possible to keep the video buffers in NVMM or GPU memory? What is the difference between NVMM and GPU memory? Is it possible to avoid copying image buffers between CPU and GPU memory?

Thank you, Rick H.

Please install the tegra_multimedia_api samples via JetPack. The CUDA code is at


You can start with the link in comment #2:

The sample runs in zero copy.
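Regarding the Block Linear I420 question above: the hardware video converter (VIC) can be driven through the NvBuffer APIs to produce a Pitch Linear RGBA buffer that CUDA/OpenCV can consume, with no CPU copy. A hedged sketch, assuming nvbuf_utils.h from the tegra_multimedia_api package (the helper name convert_to_rgba is illustrative):

```cpp
#include "nvbuf_utils.h"  // NvBufferCreateEx, NvBufferTransform

// Hedged sketch: convert a Block Linear capture buffer (src_fd) into a new
// Pitch Linear RGBA buffer (*dst_fd) using the hardware converter.
static int convert_to_rgba(int src_fd, int width, int height, int *dst_fd)
{
    NvBufferCreateParams create_params = {0};
    create_params.width       = width;
    create_params.height      = height;
    create_params.layout      = NvBufferLayout_Pitch;
    create_params.colorFormat = NvBufferColorFormat_ABGR32;
    create_params.payloadType = NvBufferPayload_SurfArray;
    create_params.nvbuf_tag   = NvBufferTag_VIDEO_CONVERT;

    if (NvBufferCreateEx(dst_fd, &create_params) != 0)
        return -1;

    NvBufferTransformParams transform_params = {0};
    transform_params.transform_flag   = NVBUFFER_TRANSFORM_FILTER;
    transform_params.transform_filter = NvBufferTransform_Filter_Smart;

    // Hardware-accelerated layout + color-space conversion, zero CPU copy
    return NvBufferTransform(src_fd, *dst_fd, &transform_params);
}
```

The resulting dmabuf fd can then be turned into an EGLImage with NvEGLImageFromFd() and mapped into CUDA as shown earlier in the thread.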

Attached is a sample demonstrating tegra_multimedia_api + OpenCV GpuMat.

1 Do not install OpenCV 3.3.1 via JetPack; it is installed by default. Please un-check OpenCV 3.3.1.
2 JetPack will not install the sample package if you un-check OpenCV, so please download it from
3 Get the script
4 Execute the script

$ mkdir OpenCV
$ ./ OpenCV

5 Apply the patch and rebuild 09_camera_jpeg_capture
6 Run

$ export DISPLAY=:0
09_camera_jpeg_capture$ ./camera_jpeg_capture --disable-jpg --cap-time 10

Please try the steps above.

On r32.2.1, you can add ‘-D OPENCV_GENERATE_PKGCONFIG=YES’ to the script and build OpenCV:

The patch is still valid with minor changes:

+CVLDFLAGS:=`pkg-config --libs opencv4`
+CVCCFLAGS:=`pkg-config --cflags opencv4`

Hi Dane,

Per your information, after I downloaded the OpenCV 4.1.1 source package and built it using the scripts provided, the OpenCV 4 C++ program now works!

I think the original built-in OpenCV 4.1.1 from JetPack 4.3 should work as well, except that the Makefile needs to use the opencv4 pkg-config name rather than opencv:

+CVLDFLAGS:=`pkg-config --libs opencv4`
+CVCCFLAGS:=`pkg-config --cflags opencv4`

Thanks a lot