I am using a USB camera as the video input for the 12_camera_v4l2_cuda sample on a Jetson TX1.
I want an RGBA image for target tracking and a YUV420M image for H.264 video encoding.
Since a USB camera cannot use libargus, I must use V4L2 for video capture.
In the 12_camera_v4l2_cuda sample, the camera video buffer is shared with the VIC output plane,
and the VIC can convert the video to either an RGBA or a YUV420M image. How can I get the RGBA image
and the YUV420M image at the same time without a memory copy, just like libargus does in the frontend
sample, where one camera producer feeds four consumers and each consumer does a different job?
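What I have in mind is something like the untested sketch below. It assumes an L4T release whose nvbuf_utils.h provides NvBufferCreateEx and NvBufferTransform (on older releases, two NvVideoConverter instances would play the same role), and that each captured V4L2 frame is available as a dmabuf fd. The resolution constants and the UYVY capture format are placeholders for my actual camera settings. The idea is to run two VIC transforms on the same source fd, one into an RGBA buffer and one into a YUV420 buffer, so no CPU memcpy is involved:

```cpp
// Untested sketch, Jetson Multimedia API (nvbuf_utils.h).
// Fan one captured dmabuf fd out to two VIC conversions.
#include <nvbuf_utils.h>

static const int CAM_W = 1920, CAM_H = 1080;  // placeholder resolution

// Allocate one hardware destination buffer in the given color format.
static int create_dst(NvBufferColorFormat fmt)
{
    int fd = -1;
    NvBufferCreateParams params = {0};
    params.width       = CAM_W;
    params.height      = CAM_H;
    params.layout      = NvBufferLayout_Pitch;
    params.payloadType = NvBufferPayload_SurfArray;
    params.colorFormat = fmt;
    params.nvbuf_tag   = NvBufferTag_VIDEO_CONVERT;
    NvBufferCreateEx(&fd, &params);   // error handling omitted in sketch
    return fd;
}

// Per captured frame: two hardware transforms, zero CPU copies.
static void convert_frame(int cap_fd, int rgba_fd, int yuv_fd)
{
    NvBufferTransformParams tp = {0};
    tp.transform_flag   = NVBUFFER_TRANSFORM_FILTER;
    tp.transform_filter = NvBufferTransform_Filter_Smart;

    NvBufferTransform(cap_fd, rgba_fd, &tp);  // e.g. UYVY -> RGBA (tracking)
    NvBufferTransform(cap_fd, yuv_fd, &tp);   // e.g. UYVY -> YUV420 (encoder)
}

// Setup would be roughly:
//   int rgba_fd = create_dst(NvBufferColorFormat_ABGR32);
//   int yuv_fd  = create_dst(NvBufferColorFormat_YUV420);
// and yuv_fd could then be queued to the H.264 encoder's output plane.
```

Is this the right direction, or is there a supported way to attach multiple consumers to one V4L2 capture buffer, as in the frontend sample?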