LibArgus EGLStream to nvivafilter

Is there an example showing LibArgus EGLStream as the source for nvivafilter? We tried adding nvivafilter to the gstVideoEncode example, but the GStreamer pipeline only processes four frames before it generates a segmentation fault.

Hi,
Please refer to below sample:
CLOSED. Gst encoding pipeline with frame processing using CUDA and libargus - Jetson TX1 - NVIDIA Developer Forums

It runs [appsrc ! h264parse ! qtmux ! filesink]. Inside the appsrc, the flow is Argus → NvVideoEncoder, and you can access the video buffers via the NvBuffer APIs.

Thank you for your quick reply. I’ve applied the patch listed in the referenced example, but it does not include the nvivafilter plug-in.

I need the nvivafilter plug-in so that I can add OpenCV and CUDA processing into the GStreamer pipeline. The pipeline works when nvcamerasrc is linked to nvivafilter, but does not work with LibArgus and nveglstreamsrc.

We need LibArgus because there is too much latency with nvcamerasrc, and I need EGLStream events to control the camera. Is it possible to link nveglstreamsrc to nvivafilter? I’ve tried adding a queue in between, and also the nvvidconv plug-in, without success. Thank you for your help.

Hi,
You do not need nvivafilter. You can call the NvBuffer APIs to get an EGLImage:

// Create an EGLImage from the buffer's dmabuf fd (no buffer copy).
ctx.eglimg = NvEGLImageFromFd(ctx.eglDisplay, buffer->planes[0].fd);
// Map the EGLImage into CUDA and run the processing on it.
HandleEGLImage(&ctx.eglimg);
// Release the EGLImage when processing is done.
NvDestroyEGLImage(ctx.eglDisplay, ctx.eglimg);

DaneLLL,
I’m new to LibArgus, so where is the function HandleEGLImage described, and what does it do? I’ve found several examples that use it, but it is not clear to me how it would call my CUDA/OpenCV code.

Another question: the video from the camera is Block Linear and I420. I think we need to convert this to Pitch Linear and RGB so it can be used as a GpuMat. Is that correct?

We would like to use the hardware video converter with zero buffer copies. The examples that use the video converter are V4L2-based. So it seems like the video pipeline would be as follows:

LibArgus => EGLStream (NVMM?) => V4L2 buffer => video converter => V4L2 buffer => GpuMat => HandleEGLImage?

We are using the video pipeline for a quadcopter camera, so low latency and zero-copy buffers are really important. Is it possible to keep the video buffers in NVMM or GPU memory? What is the difference between NVMM and GPU memory? Is it possible to avoid copying image buffers between CPU and GPU memory?

Thank you, Rick H.

Hi,
Please install the tegra_multimedia_api samples via JetPack. The CUDA code is at

tegra_multimedia_api/samples/common/algorithm/cuda

You can start with the link in comment #2:
https://devtalk.nvidia.com/default/topic/1028387/jetson-tx1/closed-gst-encoding-pipeline-with-frame-processing-using-cuda-and-libargus/post/5256753/#5256753

The sample runs with zero buffer copies.
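For orientation, the core of HandleEGLImage() in samples/common/algorithm/cuda/NvCudaProc.cpp is mapping the EGLImage into CUDA so your kernel sees the frame without a copy. Below is a trimmed sketch of that flow (it mirrors the sample rather than reproducing it verbatim; error checks are omitted and a current CUDA context is assumed):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>

static void handle_egl_image(EGLImageKHR image)
{
    CUgraphicsResource resource = NULL;
    CUeglFrame eglFrame;

    // Register the EGLImage with CUDA and fetch the mapped frame (zero-copy).
    cuGraphicsEGLRegisterImage(&resource, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);

    if (eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH)
    {
        // eglFrame.frame.pPitch[0] is a device pointer to plane 0;
        // launch your own CUDA/OpenCV processing on it here.
    }

    // Wait for the GPU work to finish before unmapping.
    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
}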

Attached is a sample demonstrating tegra_multimedia_api + OpenCV GpuMat.

1. Do not install OpenCV 3.3.1 via JetPack; it is installed by default, so please un-check OpenCV 3.3.1.
2. JetPack will not install the sample package if you un-check OpenCV, so please download it from
https://developer.nvidia.com/embedded/dlc/multimedia-api-r2821
3. Get the script https://github.com/AastaNV/JEP/blob/master/script/install_opencv3.4.0_TX2.sh
4. Execute the script:

$ mkdir OpenCV
$ ./install_opencv3.4.0_TX2.sh OpenCV

5. Apply the patch and rebuild 09_camera_jpeg_capture.
6. Run:

$ export DISPLAY=:0
09_camera_jpeg_capture$ ./camera_jpeg_capture --disable-jpg --cap-time 10

Please try the steps above.

r28_2_1-multimedia-api-hook-cv3-gpuMat.zip (2.11 KB)

Hi,
On r32.2.1, you can add ‘-D OPENCV_GENERATE_PKGCONFIG=YES’ to the script and build OpenCV:
https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.1.1_Jetson.sh

The patch is still valid with minor changes:

+CVLDFLAGS:=`pkg-config --libs opencv4`
+CVCCFLAGS:=`pkg-config --cflags opencv4`
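
After the OpenCV build finishes, it may be worth confirming that pkg-config resolves the new package name before rebuilding the samples, e.g.:

$ pkg-config --modversion opencv4
$ pkg-config --cflags --libs opencv4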

Hi Dane,

Following your information, I downloaded the OpenCV 4.1.1 source package and built it using the provided script, and now the OpenCV 4 C++ program works!

I think the original built-in OpenCV 4.1.1 from JetPack 4.3 should work as well, except that it needs to use the opencv4 pkg-config name rather than opencv:

+CVLDFLAGS:=`pkg-config --libs opencv4`
+CVCCFLAGS:=`pkg-config --cflags opencv4`

Thanks a lot
Jimmy

Hello,
I’m looking to follow up on this thread. In the proposed solution provided in the diff, the cuEglFrame is of type CU_EGL_FRAME_TYPE_PITCH.
This allows using cv::cuda::GpuMat(height, width, CV_8UC4, eglFrame.frame.pPitch[0]).

Is there a way to convert a cuEglFrame of type CU_EGL_FRAME_TYPE_ARRAY to CU_EGL_FRAME_TYPE_PITCH so that one can wrap it with cv::cuda::GpuMat()?

I’m using the syncSensor example, which presents the eglFrame as CU_EGL_FRAME_TYPE_ARRAY. When creating the eglFrame, can you request what type it should be?

Hi @jwrl
Please refer to these posts:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL

You can call createNvBuffer() to create an RGBA NvBuffer and map it to cv::cuda::GpuMat, as in the sketch below.
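
A rough sketch of that route, assuming an Argus consumer thread where iFrame, the stream resolution (streamResolution), and an EGL display (egl_display) are already available; those names are illustrative and error checks are omitted:

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include <nvbuf_utils.h>
#include <cudaEGL.h>
#include <opencv2/core/cuda.hpp>

// Create a pitch-linear RGBA NvBuffer from the Argus image so the
// mapped frame comes back as CU_EGL_FRAME_TYPE_PITCH.
EGLStream::NV::IImageNativeBuffer *iNativeBuffer =
    Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(iFrame->getImage());
int fd = iNativeBuffer->createNvBuffer(streamResolution,
                                       NvBufferColorFormat_ABGR32,
                                       NvBufferLayout_Pitch);

// Map the dmabuf into CUDA via EGL (zero-copy).
EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, fd);
CUgraphicsResource resource = NULL;
CUeglFrame eglFrame;
cuGraphicsEGLRegisterImage(&resource, egl_image,
                           CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);

// Wrap the mapped plane as a GpuMat; no pixel data is copied.
cv::cuda::GpuMat d_mat(eglFrame.height, eglFrame.width, CV_8UC4,
                       eglFrame.frame.pPitch[0], eglFrame.pitch);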

Reference patch and steps for r32.5.1:

  1. Run the script to re-install OpenCV:
    https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh
  2. Apply the patch and rebuild 00_video_decode:
diff --git a/multimedia_api/ll_samples/samples/00_video_decode/Makefile b/multimedia_api/ll_samples/samples/00_video_decode/Makefile
index 06239b1..f5dc104 100644
--- a/multimedia_api/ll_samples/samples/00_video_decode/Makefile
+++ b/multimedia_api/ll_samples/samples/00_video_decode/Makefile
@@ -39,18 +39,30 @@ SRCS := \
 
 OBJS := $(SRCS:.cpp=.o)
 
+OBJS += \
+	$(ALGO_CUDA_DIR)/NvAnalysis.o \
+	$(ALGO_CUDA_DIR)/NvCudaProc.o
+
+CVLDFLAGS:=`pkg-config --libs opencv4`
+
 all: $(APP)
 
 $(CLASS_DIR)/%.o: $(CLASS_DIR)/%.cpp
 	$(AT)$(MAKE) -C $(CLASS_DIR)
 
+$(ALGO_CUDA_DIR)/%.o: $(ALGO_CUDA_DIR)/%.cpp
+	$(AT)$(MAKE) -C $(ALGO_CUDA_DIR)
+
+$(ALGO_CUDA_DIR)/%.o: $(ALGO_CUDA_DIR)/%.cu
+	$(AT)$(MAKE) -C $(ALGO_CUDA_DIR)
+
 %.o: %.cpp
 	@echo "Compiling: $<"
 	$(CPP) $(CPPFLAGS) -c $<
 
 $(APP): $(OBJS)
 	@echo "Linking: $@"
-	$(CPP) -o $@ $(OBJS) $(CPPFLAGS) $(LDFLAGS)
+	$(CPP) -o $@ $(OBJS) $(CPPFLAGS) $(LDFLAGS) $(CVLDFLAGS)
 
 clean:
 	$(AT)rm -rf $(APP) $(OBJS)
diff --git a/multimedia_api/ll_samples/samples/00_video_decode/video_decode_main.cpp b/multimedia_api/ll_samples/samples/00_video_decode/video_decode_main.cpp
index d2dedf5..10befa6 100644
--- a/multimedia_api/ll_samples/samples/00_video_decode/video_decode_main.cpp
+++ b/multimedia_api/ll_samples/samples/00_video_decode/video_decode_main.cpp
@@ -42,6 +42,7 @@
 
 #include "video_decode.h"
 #include "nvbuf_utils.h"
+#include "NvCudaProc.h"
 
 #define TEST_ERROR(cond, str, label) if(cond) { \
                                         cerr << str << endl; \
@@ -616,6 +617,7 @@ query_and_set_capture(context_t * ctx)
 
     input_params.nvbuf_tag = NvBufferTag_VIDEO_CONVERT;
 
+    input_params.colorFormat = NvBufferColorFormat_ABGR32;
     ret = NvBufferCreateEx (&ctx->dst_dma_fd, &input_params);
     TEST_ERROR(ret == -1, "create dmabuf failed", error);
 #else
@@ -1173,6 +1175,12 @@ dec_capture_loop_fcn(void *arg)
 
                 if (!ctx->stats && !ctx->disable_rendering)
                 {
+
+                    EGLImageKHR egl_image = NULL;
+                    egl_image = NvEGLImageFromFd(ctx->renderer->getEGLDisplay(), ctx->dst_dma_fd);
+                    HandleEGLImage(&egl_image);
+                    NvDestroyEGLImage(ctx->renderer->getEGLDisplay(), egl_image);
+
                     ctx->renderer->render(ctx->dst_dma_fd);
                 }
 
diff --git a/multimedia_api/ll_samples/samples/common/algorithm/cuda/Makefile b/multimedia_api/ll_samples/samples/common/algorithm/cuda/Makefile
index dc40f07..75b7d45 100644
--- a/multimedia_api/ll_samples/samples/common/algorithm/cuda/Makefile
+++ b/multimedia_api/ll_samples/samples/common/algorithm/cuda/Makefile
@@ -39,6 +39,8 @@ ifneq ($(CUDA_DEBUG),)
 NVCCFLAGS += -g -G
 endif
 
+CVCCFLAGS:=`pkg-config --cflags opencv4`
+
 # Filter C++11 to workaround cuda compilation error
 ALL_CPPFLAGS := $(NVCCFLAGS)
 ALL_CPPFLAGS += $(addprefix -Xcompiler ,$(filter-out -std=c++11, $(CPPFLAGS)))
@@ -59,7 +61,7 @@ NvAnalysis.o : NvAnalysis.cu
 
 NvCudaProc.o : NvCudaProc.cpp
 	@echo "Compiling: $<"
-	$(NVCC) $(ALL_CPPFLAGS) $(GENCODE_FLAGS) -o $@ -c $<
+	$(NVCC) $(ALL_CPPFLAGS) $(CVCCFLAGS) $(GENCODE_FLAGS) -o $@ -c $<
 
 clean:
 	$(AT)rm -rf *.o
diff --git a/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvCudaProc.cpp b/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvCudaProc.cpp
index 64f058f..369cbb6 100644
--- a/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvCudaProc.cpp
+++ b/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvCudaProc.cpp
@@ -28,6 +28,8 @@
 
 #include <stdio.h>
 #include <cuda_runtime_api.h>
+#include <opencv2/opencv.hpp>
+#include <opencv2/cudafilters.hpp>
 #include <EGL/egl.h>
 #include <EGL/eglext.h>
 #include <GLES2/gl2.h>
@@ -41,6 +43,21 @@
 
 #include "NvCudaProc.h"
 
+static bool create_filter = true;
+cv::Ptr< cv::cuda::Filter > filter;
+
+static void cv_process(void *pdata, int32_t width, int32_t height)
+{
+    if (create_filter) {
+        //filter = cv::cuda::createSobelFilter(CV_8UC4, CV_8UC4, 1, 0, 3, 1, cv::BORDER_DEFAULT);
+        filter = cv::cuda::createGaussianFilter(CV_8UC4, CV_8UC4, cv::Size(31,31), 0, 0, cv::BORDER_DEFAULT);
+        create_filter = false;
+    }
+    cv::cuda::GpuMat d_mat(height, width, CV_8UC4, pdata);
+    filter->apply (d_mat, d_mat);
+}
+
+
 static void
 Handle_EGLImage(EGLImageKHR image);
 
@@ -89,7 +106,7 @@ Handle_EGLImage(EGLImageKHR image)
     if (eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH)
     {
         //Rect label in plan Y, you can replace this with any cuda algorithms.
-        addLabels((CUdeviceptr) eglFrame.frame.pPitch[0], eglFrame.pitch);
+        cv_process(eglFrame.frame.pPitch[0], eglFrame.width, eglFrame.height);
     }
 
     status = cuCtxSynchronize();

  3. Run:
$ sudo ldconfig
$ export DISPLAY=:0 (or :1)
$ ./video_decode H264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264