I have a USB 3.0 See3Cam 130 camera (https://www.e-consystems.com/UltraHD-USB-Camera.asp) and I want to encode the frames to a video and also process them using VisionWorks. A requirement is that the video is 4K30, which requires that the camera use MJPEG.
I’ve set up GStreamer to decode the MJPEG stream using nvjpegdec and then encode it. I can also tee the stream to visualise the frames.
gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'image/jpeg, width=3840, height=2160, framerate=30/1' ! nvjpegdec ! 'video/x-raw, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! tee name=myt ! queue ! omxh264enc preset-level=3 profile=8 bitrate=15000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! mpegtsmux ! rtpmp2tpay ! udpsink host=192.168.1.X port=5000 myt. ! queue ! nvoverlaysink sync=false
I believe nvvideosink will allow me to create an EGLStream that is accessible through CUDA: “Video Sink Component. Accepts YUV-I420 format and produces EGLStream (RGBA)”. I’m just not sure how I can access this through the NVXIO FrameSource or some other method.
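For reference, here is a sketch of how the same decoded NVMM buffers could be teed into nvvideosink alongside the encode branch. This is only an assumption based on the element description quoted above (nvvideosink comes from the L4T accelerated GStreamer stack); I have not verified it produces an EGLStream that NVXIO can consume:

```shell
# Sketch only: same capture/decode front end as my working pipeline,
# but the second tee branch goes to nvvideosink instead of nvoverlaysink.
# nvvideosink is assumed to expose the frames as an EGLStream (RGBA)
# that a CUDA/EGLStream consumer could then connect to.
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
  'image/jpeg, width=3840, height=2160, framerate=30/1' ! \
  nvjpegdec ! 'video/x-raw, format=(string)I420' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! \
  tee name=myt \
  myt. ! queue ! omxh264enc preset-level=3 profile=8 bitrate=15000000 ! \
    'video/x-h264, stream-format=(string)byte-stream' ! mpegtsmux ! \
    rtpmp2tpay ! udpsink host=192.168.1.X port=5000 \
  myt. ! queue ! nvvideosink
```

The open part is then how to attach an EGLStream consumer on the VisionWorks side, which is what I’m asking about below.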
So how can I encode the MJPEG stream to video while also accessing the frames from within VisionWorks with minimal overhead?