I am working on a machine vision solution based on OpenCV. The algorithm used to process each frame is fairly simple; however, the system's performance depends strongly on the effective framerate.
I am working on a custom integration between GStreamer and OpenCV and would like to eliminate as much unnecessary processing and copying as possible.
Has the NVMM image data specification been released yet? It has been mentioned on the forum before (https://devtalk.nvidia.com/default/topic/903438/jetson-tx1/hw-accelerated-jpeg-encoding-/post/4777535/#4777535), but I was not able to find any information about it.
Would it be possible to force nvvidconv to return its regular (non-NVMM) output in zero-copy (shared) memory?
For the gst command, you can run:
gst-launch-1.0 nvcamerasrc ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvegltransform ! nveglglessink window-x=100 window-y=100 -e
and do postprocessing via CUDA. Source code is in
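For reference, the customer library that nvivafilter loads exports an init hook that fills in a function table, and the per-frame callback receives an EGLImage that can be mapped for CUDA. A minimal sketch, modeled on the nvsample_cudaprocess sample shipped with L4T; the CustomerFunction struct, field names, and callback signatures below are assumptions taken from that sample and may differ between releases:

```cuda
#include <cuda.h>
#include <cudaEGL.h>
// CustomerFunction is defined in the header that ships with the
// nvsample_cudaprocess sample; its exact layout may vary by release.

/* Trivial example kernel: invert the red channel of an RGBA frame. */
__global__ void invert_red(unsigned char *rgba, int width, int height, int pitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        rgba[y * pitch + 4 * x] = 255 - rgba[y * pitch + 4 * x];
}

/* Called by nvivafilter for every frame when cuda-process=true. */
static void gpu_process(EGLImageKHR image, void **usrptr)
{
    CUgraphicsResource resource;
    CUeglFrame frame;

    if (cuGraphicsEGLRegisterImage(&resource, image,
            CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
        return;
    if (cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0) == CUDA_SUCCESS
            && frame.frameType == CU_EGL_FRAME_TYPE_PITCH) {
        dim3 block(16, 16);
        dim3 grid((frame.width + 15) / 16, (frame.height + 15) / 16);
        invert_red<<<grid, block>>>((unsigned char *)frame.frame.pPitch[0],
                                    frame.width, frame.height, frame.pitch);
        cudaDeviceSynchronize();
    }
    cuGraphicsUnregisterResource(resource);
}

extern "C" void init(CustomerFunction *pFuncs)
{
    /* Field name per the sample; pre/post hooks can stay unset. */
    pFuncs->fGPUProcess = gpu_process;
}
```

Compiled with nvcc into a shared library and passed via customer-lib-name, this runs in-place on the NVMM frame without a CPU copy.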
Starting from r24.2, you can also use the Tegra Multimedia API to do postprocessing. Please install it via JetPack and check the samples in the package.
Thank you for your help. I was unable to respond earlier because I had to tend to some other duties.
I have looked into using nvivafilter, but the need to either wrap the program into a library or split it into two separate modules made me give up on this approach. The GStreamer caps limitations of this solution were not really helpful either.
I have decided to try to build a solution based on an EGL stream, similar to the one used in the FrameSource module of the NVXIO framework. I am able to use the data obtained this way directly in OpenCV by wrapping the frame's RGBA plane in a GpuMat; the overhead, as measured by nvprof, appears to be negligible.
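For anyone following the same route, the mapping from an EGLStream-acquired frame to a GpuMat looks roughly like this. This is only a sketch: it assumes a pitch-linear RGBA CUeglFrame and an EGLStream consumer connection already established with cuEGLStreamConsumerConnect, and the error handling is trimmed:

```cpp
#include <cudaEGL.h>
#include <opencv2/core/cuda.hpp>

// Sketch: wrap the RGBA plane of a CUDA-mapped EGL frame in a
// cv::cuda::GpuMat without copying. `conn` is a CUeglStreamConnection
// already created with cuEGLStreamConsumerConnect (not shown).
cv::cuda::GpuMat acquireFrameAsGpuMat(CUeglStreamConnection conn,
                                      CUgraphicsResource *resource,
                                      CUstream stream)
{
    CUeglFrame frame;

    // Blocks until the producer (e.g. the camera pipeline) posts a frame.
    cuEGLStreamConsumerAcquireFrame(&conn, resource, &stream, -1);
    cuGraphicsResourceGetMappedEglFrame(&frame, *resource, 0, 0);

    // For a pitch-linear RGBA frame, plane 0 is the full image; the
    // GpuMat simply aliases that device pointer, so no copy takes place.
    return cv::cuda::GpuMat(frame.height, frame.width, CV_8UC4,
                            frame.frame.pPitch[0], frame.pitch);
}

// After processing, the frame must be handed back with
// cuEGLStreamConsumerReleaseFrame before the next acquire.
```

Since the GpuMat only aliases the mapped device pointer, it must not outlive the acquired frame.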
Could you please share any pointers or other information about your frame-reading solution?