I would like to reproduce the result of the gst-launch command below using jetson-utils and Python.
gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2
The example Python scripts show how to capture an image and process it, but they don't explain how to pipe the frames to a virtual camera using v4l2sink or another method. Any pointers?
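For context, here is a rough, untested sketch of the direction I'm imagining on the Python side: capture with jetson-utils, then push frames into the loopback device through an OpenCV `VideoWriter` whose backend is a GStreamer pipeline ending in v4l2sink. This assumes OpenCV is built with GStreamer support and that the v4l2loopback module provides /dev/video2; the helper and parameter names here are my own, not from any example script.

```python
# Sketch: jetson-utils capture -> v4l2loopback virtual camera.
# Assumes: OpenCV built with GStreamer, v4l2loopback loaded
# (e.g. `sudo modprobe v4l2loopback video_nr=2`).

def build_v4l2sink_pipeline(device="/dev/video2", width=640, height=480, fps=30):
    """Build the GStreamer launch string used as the cv2.VideoWriter target.

    appsrc receives BGR frames from OpenCV; videoconvert and the
    identity drop-allocation=1 element mirror the gst-launch pipeline
    above before handing RGB buffers to v4l2sink.
    """
    return (
        "appsrc ! "
        f"video/x-raw, format=BGR, width={width}, height={height}, framerate={fps}/1 ! "
        "videoconvert ! identity drop-allocation=1 ! "
        "video/x-raw, format=RGB ! "
        f"v4l2sink device={device}"
    )

if __name__ == "__main__":
    # Hardware-dependent part: only runs on a Jetson with a CSI camera.
    import cv2
    import jetson.utils

    width, height, fps = 640, 480, 30
    camera = jetson.utils.videoSource(
        "csi://0", argv=[f"--input-width={width}", f"--input-height={height}"]
    )
    writer = cv2.VideoWriter(
        build_v4l2sink_pipeline("/dev/video2", width, height, fps),
        cv2.CAP_GSTREAMER, 0, fps, (width, height),
    )
    while True:
        img = camera.Capture()                 # CUDA image, RGB
        frame = jetson.utils.cudaToNumpy(img)  # map to a numpy array
        writer.write(frame[..., ::-1].copy())  # RGB -> BGR for OpenCV
```

I haven't verified that jetson-utils' own `videoOutput` can target a v4l2 device directly, which is why this detours through OpenCV; if there is a native jetson-utils way, that would be even better.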