I am working on a machine vision solution based on OpenCV. The algorithm used to process each frame is fairly simple, but the system’s performance depends strongly on the effective framerate.
I am creating a custom integration between GStreamer and OpenCV and would like to remove as much unnecessary processing and copying as possible.
Hi westej,
For gst command, you can run
gst-launch-1.0 nvcamerasrc ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvegltransform ! nveglglessink window-x=100 window-y=100 -e
and do postprocessing via CUDA. The source code is at https://developer.nvidia.com/embedded/dlc/l4t-sources-24-2-1
Starting from r24.2, you can also use the Tegra Multimedia API for postprocessing. Please install it via JetPack and check the samples in the package.
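To use nvivafilter this way, you build a shared library that exports the hook functions the plugin looks up at load time; the nvsample_cudaprocess sample in the sources linked above shows the full interface. A minimal compilable sketch of the shape of that interface follows (names paraphrased from the sample and EGLImageKHR stubbed out for illustration; check the L4T sources for the exact signatures):

```cpp
#include <cstdio>

// Stand-in for the real EGL handle type from <EGL/egl.h> / <EGL/eglext.h>.
typedef void* EGLImageKHR;

// Hook table nvivafilter fills in from the customer library; the real struct
// (with pre- and post-process hooks as well) lives in the L4T sources.
struct CustomerFunction {
    void (*fGPUProcess)(EGLImageKHR image, void** userptr);
};

// Per-frame hook: in the real sample this maps the EGLImage into CUDA
// (cuGraphicsEGLRegisterImage) and launches a kernel on the RGBA plane.
static void gpu_process(EGLImageKHR image, void** userptr) {
    (void)image;
    (void)userptr;
    std::printf("gpu_process called for one frame\n");
}

// nvivafilter dlopen()s the library and calls init() to collect the hooks.
extern "C" void init(CustomerFunction* f) {
    f->fGPUProcess = gpu_process;
}
```

Because the plugin drives the library, your per-frame processing has to live inside these hooks, which is why the poster below mentions having to wrap the program into a library or split it into two modules.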
Thank you for your help. I was unable to respond earlier because I had to tend to some other duties.
I have looked into using nvivafilter, but the need either to wrap the program into a library or to split it into two separate modules made me give up on it. The GStreamer caps limitations of this approach were not helpful either.
I have decided instead to create a solution based on an EGL stream, similar to the one used in the FrameSource module of the NVXIO framework. I am able to use the data obtained this way directly in OpenCV by accessing the frame’s RGBA plane through a GpuMat interface; the overhead, as measured by nvprof, appears to be negligible.