Rotation + apply mask to video in real time

Hello everyone,

my team and I are currently working on a project where we need to rotate frames from a CSI camera by an angle provided by a gyroscope, apply a circular mask, and stream the result in real time. We are trying to use the Jetson Nano since it is more powerful than the hardware we used before. Currently we use GStreamer + OpenCV with CUDA, and we already get better results than before. However, it is still not enough, and I believe it could be optimized further. Ideally, it would be great if we could avoid downloading the frames to the CPU and work only on the GPU.
This is the GStreamer pipeline we currently use to get the frames:

nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=1920, height=1080, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
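For reference, here is a minimal pure-Python sketch of the two per-pixel operations in question: building a 2x3 rotation matrix (following the convention of OpenCV's `cv2.getRotationMatrix2D`) and the circle-mask test. This is only CPU reference math to pin down what needs to be ported to the GPU; the function names are ours, not from any library.

```python
import math

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # Same convention as cv2.getRotationMatrix2D: a positive angle
    # rotates counter-clockwise in image coordinates (y pointing down).
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    return ((a, b, (1.0 - a) * cx - b * cy),
            (-b, a, b * cx + (1.0 - a) * cy))

def warp_point(m, x, y):
    # Apply the 2x3 affine matrix to a single pixel coordinate.
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def inside_circle(x, y, cx, cy, radius):
    # Circle-mask predicate: pixels outside the circle get blacked out.
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
```

On the GPU side, both operations can be fused into a single pass (one warp kernel that also applies the mask), so the frame is only read and written once.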

Also, we read about DeepStream and CUDA streams, but we are fairly new to CUDA and have never used DeepStream, so we don't know whether those would be valid approaches.
What would be the best way to do it?

There is a limitation in the hardware VIC engine:
[Gstreamer] nvvidconv, BGR as INPUT - #2 by DaneLLL
So to use OpenCV, we would need to copy the data from the NVMM buffer to a CPU buffer and convert it to BGR. This takes additional CPU usage.

For an optimal solution, we would suggest keeping the frame data in the NVMM buffer from head to tail. One possible solution is to use VPI. Please check whether the required functions are supported in VPI:
VPI - Vision Programming Interface: Release Notes v1.1
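As a rough illustration of the VPI route, the sketch below builds a 3x3 homography for a rotation about the frame center in plain Python, then applies it with VPI's perspective warp on the CUDA backend. The `perspwarp` / `vpi.Backend.CUDA` / `vpi.Interp.LINEAR` names are assumptions taken from VPI's Python samples; please verify them against the release notes and API reference for your VPI version before relying on this.

```python
import math

def rotation_homography(angle_deg, cx, cy):
    # 3x3 row-major homography rotating about (cx, cy), in the layout
    # a perspective-warp transform expects.
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    return ((c, -s, cx - c * cx + s * cy),
            (s,  c, cy - s * cx - c * cy),
            (0.0, 0.0, 1.0))

def rotate_on_gpu(vpi_image, angle_deg, width, height):
    # Hypothetical VPI usage (method names assumed from VPI samples);
    # the input is a vpi.Image, e.g. wrapped from the capture buffer.
    import vpi
    xform = rotation_homography(angle_deg, width / 2.0, height / 2.0)
    with vpi.Backend.CUDA:
        return vpi_image.perspwarp(xform, interp=vpi.Interp.LINEAR)
```

Note that VPI's perspective warp would only cover the rotation; the circular mask would still need a separate step (or a small CUDA kernel fused with the warp).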

If you need to run deep learning inference, please try the DeepStream SDK. It is based on GStreamer.
