Using a USB camera as a producer that feeds multiple consumers

I am using a USB camera as the video input for the 12_camera_v4l2_cuda sample on a Jetson TX1.

I want to use the RGBA image for target tracking and the YUV420M image for H.264 video encoding.

Since a USB camera can't use libargus, I must use V4L2 for video capture.

In the 12_camera_v4l2_cuda sample, the camera video buffer is shared with the VIC output_plane, and the VIC can convert the video to an RGBA or YUV420M image. How can I get the RGBA image and the YUV420M image at the same time without a memory copy, just as libargus is used in the frontend sample, where one camera producer feeds four consumers and each consumer does a different job?

Hi dynasty13,
For your case, you need two video converters: one for RGBA and the other for YUV420M. We have the same implementation in libargus; for USB cameras, you need to implement it yourself.
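
As a rough sketch (not from any sample), two converter instances could be set up like this with the Multimedia API. The names conv_rgba/conv_yuv, the UYVY camera format, and width/height are assumptions:

```cpp
// Sketch: create two converter instances, one per target format.
// Assumes the Jetson Multimedia API (NvVideoConverter) and a camera
// delivering UYVY; "conv_rgba"/"conv_yuv" and width/height are placeholders.
#include "NvVideoConverter.h"

NvVideoConverter *conv_rgba;
NvVideoConverter *conv_yuv;

void setup_converters(uint32_t width, uint32_t height)
{
    conv_rgba = NvVideoConverter::createVideoConverter("conv_rgba");
    conv_yuv  = NvVideoConverter::createVideoConverter("conv_yuv");

    // Input (output_plane) format is the camera format for both converters.
    conv_rgba->setOutputPlaneFormat(V4L2_PIX_FMT_UYVY, width, height,
                                    V4L2_NV_BUFFER_LAYOUT_PITCH);
    conv_yuv->setOutputPlaneFormat(V4L2_PIX_FMT_UYVY, width, height,
                                   V4L2_NV_BUFFER_LAYOUT_PITCH);

    // Result (capture_plane) formats differ: RGBA for tracking,
    // YUV420M for the H.264 encoder.
    conv_rgba->setCapturePlaneFormat(V4L2_PIX_FMT_ABGR32, width, height,
                                     V4L2_NV_BUFFER_LAYOUT_PITCH);
    conv_yuv->setCapturePlaneFormat(V4L2_PIX_FMT_YUV420M, width, height,
                                    V4L2_NV_BUFFER_LAYOUT_PITCH);
}
```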

I am confused about how to use one video stream as the input for two video converters. Is there an example?

Do I need to use copyToNvBuffer to implement it?

We don't have a sample for this case. By configuring the output plane to V4L2_MEMORY_DMABUF, you can send the same buffer to multiple converters.
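
Continuing the hypothetical two-converter sketch above, the per-frame flow could look roughly like this; the buffer count, feed_both_converters, and camera_dmabuf_fd are illustrative, not from a sample:

```cpp
// Sketch, continuing the setup above: import camera buffers by fd on both
// output planes, then queue the same fd to each converter per frame.
#include <cstring>

void setup_dmabuf_io(void)
{
    // V4L2_MEMORY_DMABUF: the planes reference an existing buffer by fd,
    // so no copy is made. The buffer count (6) is a placeholder.
    conv_rgba->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 6, false, false);
    conv_yuv->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 6, false, false);
}

void feed_both_converters(int camera_dmabuf_fd, uint32_t index)
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));
    v4l2_buf.index = index;
    v4l2_buf.m.planes = planes;
    v4l2_buf.m.planes[0].m.fd = camera_dmabuf_fd;  // same fd to both converters

    conv_rgba->output_plane.qBuffer(v4l2_buf, NULL);
    conv_yuv->output_plane.qBuffer(v4l2_buf, NULL);
}
```

One caveat: the captured buffer can only be requeued to the camera after both converters have returned it (a dqBuffer from each output plane), so the capture loop has to track two completions per frame.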

Alternatively, you may implement one of the conversions with CUDA. We have a sample at
https://github.com/dusty-nv/jetson-inference/blob/master/util/cuda/cudaYUV-NV12.cu
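
For reference, the core of such a conversion is small. Below is a self-contained sketch of an NV12-to-RGBA kernel in the same spirit as that file; the full-range BT.601 coefficients and all function names here are assumptions, not copied from the sample:

```cpp
// Minimal sketch of an NV12 -> RGBA CUDA kernel (full-range BT.601
// coefficients assumed; the linked sample may use different math).
#include <cuda_runtime.h>
#include <stdint.h>

__device__ __forceinline__ uchar4 yuvToRgba(float y, float u, float v)
{
    float r = y + 1.402f * (v - 128.0f);
    float g = y - 0.344f * (u - 128.0f) - 0.714f * (v - 128.0f);
    float b = y + 1.772f * (u - 128.0f);
    return make_uchar4((unsigned char)fminf(fmaxf(r, 0.0f), 255.0f),
                       (unsigned char)fminf(fmaxf(g, 0.0f), 255.0f),
                       (unsigned char)fminf(fmaxf(b, 0.0f), 255.0f),
                       255);
}

__global__ void nv12ToRgba(const uint8_t *yuv, size_t pitch,
                           uchar4 *rgba, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    // NV12: full-resolution Y plane followed by a half-resolution
    // interleaved UV plane with the same row pitch.
    const uint8_t *uvPlane = yuv + pitch * height;
    float luma = yuv[y * pitch + x];
    const uint8_t *uv = &uvPlane[(y / 2) * pitch + (x / 2) * 2];

    rgba[y * width + x] = yuvToRgba(luma, uv[0], uv[1]);
}

// Host-side launch helper: yuv and rgba are device pointers.
cudaError_t convertNV12ToRGBA(const uint8_t *yuv, size_t pitch,
                              uchar4 *rgba, int width, int height)
{
    dim3 block(32, 8);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    nv12ToRgba<<<grid, block>>>(yuv, pitch, rgba, width, height);
    return cudaGetLastError();
}
```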