Hello everyone,
my team and I are working on a project where we need to rotate each frame from a CSI camera by an angle provided by a gyroscope, apply a circular mask, and stream the result in real time. We are moving to the Jetson Nano because it is more powerful than our previous hardware. Currently we use GStreamer + OpenCV with CUDA, and we already get better results than before, but it is still not fast enough and I believe it could be optimized further. Ideally, we would avoid downloading the frames to the CPU and keep all the processing on the GPU.
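To make the question concrete, here is a minimal sketch of the per-frame math we do (pure NumPy so it runs anywhere). On the Nano the idea would be to precompute the mask once, upload each frame to a `cv2.cuda_GpuMat`, feed the rotation matrix to `cv2.cuda.warpAffine`, apply the mask on the GPU (e.g. with `cv2.cuda.bitwise_and`), and only download the final frame for streaming; the function and variable names here are our own, not from any specific API:

```python
import numpy as np

def rotation_matrix(angle_deg, center):
    """2x3 affine matrix rotating about `center`, same layout and
    sign convention as cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    (positive angle = counter-clockwise)."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    cx, cy = center
    return np.array([[c,  s, (1 - c) * cx - s * cy],
                     [-s, c, s * cx + (1 - c) * cy]])

def circle_mask(h, w, radius=None):
    """uint8 mask, 255 inside the circle, 0 outside; precompute once
    and keep it resident on the GPU so it is never re-uploaded."""
    if radius is None:
        radius = min(h, w) // 2
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    return np.where(dist2 <= radius ** 2, 255, 0).astype(np.uint8)

# per-stream setup: one mask for the whole run
mask = circle_mask(1080, 1920)

# per-frame: rebuild only the 2x3 matrix from the gyroscope angle
M = rotation_matrix(0.0, (960, 540))
```

The key point for us is that only the tiny 2x3 matrix changes per frame; the mask and the frame buffers could stay on the GPU the whole time.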
This is the GStreamer pipeline we currently use to get the frames:
nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=1920, height=1080, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
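For reference, this is roughly how we build that pipeline string in Python and open it with OpenCV. We suspect the `videoconvert ! BGR ! appsink` tail is what forces the frames out of NVMM memory onto the CPU, which is exactly what we would like to avoid; the `drop`/`max-buffers` appsink properties are something we added to bound latency (assumption on our side, not from any official example):

```python
def gst_pipeline(sensor_id=0, width=1920, height=1080, fps=30, flip=0):
    """CSI capture pipeline string; the caps after nvvidconv must
    appear only once (we originally had a duplicated 'video/x-raw,')."""
    return (
        f"nvarguscamerasrc sensor_id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=(string)NV12, framerate=(fraction){fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, width={width}, height={height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

# usage on the Jetson (needs OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```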
Also, we have read about DeepStream and CUDA streams, but we are quite new to CUDA and have never used DeepStream, so we don't know whether either of those would be a valid way to do this.
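From what we understand, CUDA streams would let the upload, kernel, and download of consecutive frames overlap (the OpenCV CUDA functions accept a `stream=` argument of type `cv2.cuda.Stream`). A related, CPU-side optimization we have been experimenting with is decoupling capture from processing with a small drop-oldest queue, so a slow read never stalls the GPU work. Here is a runnable sketch of that pattern with synthetic frames standing in for the camera (names and sizes are ours, purely illustrative):

```python
import queue
import threading

import numpy as np

def capture(q, n_frames=30):
    """Stands in for the cap.read() loop; drop-oldest keeps latency bounded."""
    for i in range(n_frames):
        frame = np.full((4, 4), i, dtype=np.uint8)  # tiny synthetic frame
        try:
            q.put_nowait(frame)
        except queue.Full:
            try:
                q.get_nowait()   # drop the stale frame...
            except queue.Empty:
                pass
            q.put_nowait(frame)  # ...and keep the fresh one
    q.put(None)                  # end-of-stream marker

def process(q, out):
    """Stands in for the GPU rotate + mask + stream step."""
    while (frame := q.get()) is not None:
        out.append(int(frame.mean()))  # placeholder for the real work

q = queue.Queue(maxsize=2)
results = []
t = threading.Thread(target=capture, args=(q,))
t.start()
process(q, results)
t.join()
```

Whether this kind of pipelining, CUDA streams, or moving to DeepStream entirely is the right tool here is exactly what we are unsure about.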
What would be the best way to do it?