How to use the GStreamer element nvivafilter

Hi!

I am trying to manipulate the camera frames on the fly using the GStreamer element nvivafilter, as described in the Multimedia User Guide:

gst-launch-1.0 nvcamerasrc fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvivafilter cuda-process=true customer-lib-name="libsample_process.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e

This example works because there is already a customer lib named libsample_process.so.
Now I want to create my own customer lib to manipulate the frames in different ways.
Is there an example or documentation somewhere on how to compile my own .so file, and which functions it must contain for that purpose?

I don’t know how up-to-date this is, but this may be useful:
http://www.tldp.org/HOWTO/Program-Library-HOWTO/

Probably the biggest hurdle is if you want to use something with a namespace, e.g. from C++. If so, keep in mind the factory design pattern: the library code and a factory function live in the library, and your program calls the factory to create new objects (see the sketch below). Otherwise it is fairly straightforward.
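To make the factory idea concrete, here is a minimal sketch of what the library side could look like. The names (FrameOp, Invert, create_frame_op) are made up purely for illustration and have nothing to do with nvivafilter itself:

// An abstract interface shared between the program and the library.
struct FrameOp {
  virtual void apply(unsigned char *data, int width, int height, int pitch) = 0;
  virtual ~FrameOp() {}
};

// In the shared library: the concrete class stays hidden behind the
// interface, and only the extern "C" factory symbols are looked up by
// name, so no C++ name mangling crosses the library boundary.
class Invert : public FrameOp {
public:
  void apply(unsigned char *data, int width, int height, int pitch) override {
    for (int y = 0; y < height; ++y)
      for (int x = 0; x < width; ++x)
        data[y * pitch + x] = 255 - data[y * pitch + x];  // invert one plane
  }
};

extern "C" FrameOp *create_frame_op()          { return new Invert; }
extern "C" void destroy_frame_op(FrameOp *op)  { delete op; }

The program then loads the library, looks up create_frame_op by name, and works with the returned object purely through the FrameOp interface.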

Hi, thanks for the answer!

Maybe I misstated the question a little:

The problem is not creating an arbitrary .so file.
What I don't know is which functions get called by the nvivafilter element and what the input/output of those functions looks like. I couldn't find any information about this. If I use nvivafilter with an arbitrary .so (even a non-existent one), it also seems to work. I cannot see any difference in the video output when I compare this pipeline (lib=libsample_process.so):

gst-launch-1.0 nvcamerasrc fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvivafilter cuda-process=true customer-lib-name="libsample_process.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e

with this pipeline (lib=does_not_even_exist.so):

gst-launch-1.0 nvcamerasrc fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvivafilter cuda-process=true customer-lib-name="does_not_even_exist.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e

The only difference I see is that in the console output of the first pipeline some messages appear stating cuGraphicsEGLRegisterImage failed: 304

These are missing when I use the second gst pipeline.

So I guess the first pipeline is doing something (I don’t know what) while the second isn’t.

What I want to do is manipulate the frames of the camera's video output, and my first guess was that nvivafilter could be the right element for this. But I'm not sure :/

I do not know how customer-lib-name is used in gst-launch. There are a number of ways it could look for libraries, including through the system's dynamic loader (in which case the library would have to be on ld's search path), or via some gst-launch-specific directory setting.
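For what it's worth, here is a rough sketch of how an element typically resolves such a library name with dlopen(); whether nvivafilter does exactly this is an assumption on my part, but it would explain why a bare name like libsample_process.so has to live somewhere the dynamic loader can find it (system lib directories or LD_LIBRARY_PATH), while an absolute path is used directly:

#include <dlfcn.h>
#include <cstdio>

int main(int argc, char **argv)
{
  const char *name = (argc > 1) ? argv[1] : "libsample_process.so";

  /* A bare file name is searched in the dynamic loader's usual places
     (DT_RPATH, LD_LIBRARY_PATH, /etc/ld.so.cache, default lib dirs);
     a name containing '/' is opened directly as a path. */
  void *handle = dlopen(name, RTLD_NOW);
  if (!handle) {
    std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }

  /* The element would then look up its entry point(s) by name;
     "init" here is just a guess at what that symbol might be called. */
  void *sym = dlsym(handle, "init");
  std::printf("loaded %s, init symbol at %p\n", name, sym);
  dlclose(handle);
  return 0;
}

(Build with g++ and -ldl; run it with the library name you pass to customer-lib-name to see whether the loader can find it.)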

Hi Salo,

The latest L4T R24.2 public release at https://developer.nvidia.com/embedded/linux-tegra provides the sources of the libsample_process.so library.

You need to download nvsample_cudaprocess_src.tbz2 from the source package link on the R24.2 release page.

Please refer to nvsample_cudaprocess_README.txt for the details of the interface APIs. The source package also provides a Makefile for on-target compilation.

The reference sample implementation can be replaced with your own custom implementation.
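As a rough orientation before you open the package: the skeleton below shows approximately what the interface in the sample looks like, based on my recollection of nvsample_cudaprocess. The struct, field and header names (CustomerFunction, fPreProcess/fGPUProcess/fPostProcess, customer_functions.h) may differ in detail from what R24.2 actually ships, so treat nvsample_cudaprocess_README.txt and the shipped header as authoritative.

/* Skeleton of a customer library for nvivafilter, modelled on the
   nvsample_cudaprocess sample.  Names are from memory and may differ. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include "customer_functions.h"  /* CustomerFunction, ColorFormat (from the sample) */

/* Runs on CPU-mapped surface planes before the GPU stage. */
static void pre_process(void **sBaseAddr, unsigned int *smemsize,
                        unsigned int *swidth, unsigned int *sheight,
                        unsigned int *spitch, ColorFormat *sformat,
                        unsigned int nsurfcount, void **userPtr)
{
  /* e.g. inspect or modify the Y/UV planes here */
}

/* Runs once per frame with the frame wrapped as an EGLImage.  With
   cuda-process=true the sample registers this image with CUDA
   (cuGraphicsEGLRegisterImage) and launches kernels on it, which is
   presumably where the "failed: 304" messages you saw are printed. */
static void gpu_process(EGLImageKHR image, void **userPtr)
{
  /* map the EGLImage into CUDA and run your kernel */
}

/* Runs on CPU-mapped surface planes after the GPU stage. */
static void post_process(void **sBaseAddr, unsigned int *smemsize,
                         unsigned int *swidth, unsigned int *sheight,
                         unsigned int *spitch, ColorFormat *sformat,
                         unsigned int nsurfcount, void **userPtr)
{
}

/* nvivafilter loads the library and calls init() to obtain these hooks. */
extern "C" void init(CustomerFunction *pFuncs)
{
  pFuncs->fPreProcess  = pre_process;
  pFuncs->fGPUProcess  = gpu_process;
  pFuncs->fPostProcess = post_process;
}

extern "C" void deinit(void)
{
  /* free any per-library resources */
}

Build it with the provided Makefile, point customer-lib-name at the resulting .so (or give an absolute path), and replace the sample processing with your own.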
