Best way to clone Argus CSI camera using v4l2loopback

Dear Experts,

I am using v4l2loopback to clone my CSI2-based IMX477 camera to virtual devices for the purpose of simultaneous streaming, snapshotting, recording (as mp4) and AI processing.
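For reference, the loopback devices themselves are created via the v4l2loopback module parameters; this is a sketch matching the device numbering below (devices, video_nr, exclusive_caps and card_label are standard v4l2loopback module options):

# Create the four loopback nodes used as clone targets (sketch; adjust to your setup)
sudo modprobe v4l2loopback devices=4 video_nr=11,21,31,41 \
        exclusive_caps=1 \
        card_label="Stream,Record,Snapshot,AIProcessing"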

So far, the command below works:

export SONY_FSM_DEV_SOURCE=/dev/video1
export SONY_FSM_DEV_STREAM=/dev/video11        # UDP stream over network
export SONY_FSM_DEV_RECORD=/dev/video21        # Record to mp4
export SONY_FSM_DEV_SNAPSHOT=/dev/video31      # Still snapshots
export SONY_FSM_DEV_AIPROCESSING=/dev/video41  # On-board AI processing
export SONY_FSM_UDP_STREAM_MAIN=8551           # UDP port of the main stream

export SONY_FSM_CAPTURE_W=3840
export SONY_FSM_CAPTURE_H=2160
export SONY_FSM_CAPTURE_FPS=30
export SONY_FSM_STREAM_W=1920
export SONY_FSM_STREAM_H=1080
export SONY_FSM_AIPROCESSING_W=1280
export SONY_FSM_AIPROCESSING_H=720

# Clone
gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=NV12, width=${SONY_FSM_CAPTURE_W}, height=${SONY_FSM_CAPTURE_H}, framerate=${SONY_FSM_CAPTURE_FPS}/1" ! nvvidconv ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! \
        tee name=t ! queue ! v4l2sink device=${SONY_FSM_DEV_STREAM} sync=false async=true \
        t. ! queue ! v4l2sink device=${SONY_FSM_DEV_RECORD} sync=false async=true \
        t. ! queue ! v4l2sink device=${SONY_FSM_DEV_SNAPSHOT} sync=false async=true \
        t. ! queue ! v4l2sink device=${SONY_FSM_DEV_AIPROCESSING} sync=false async=true
# The branch below forces the caps of every tee branch to SONY_FSM_AIPROCESSING_W (1280) x SONY_FSM_AIPROCESSING_H (720), which is smaller than the SONY_FSM_CAPTURE_W (3840) x SONY_FSM_CAPTURE_H (2160) capture, so it is not used:
#        t. ! queue ! videoscale ! "video/x-raw,width=${SONY_FSM_AIPROCESSING_W},height=${SONY_FSM_AIPROCESSING_H}" ! v4l2sink device=${SONY_FSM_DEV_AIPROCESSING} sync=false async=true
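For completeness, each clone is consumed by its own reader. For example, the UDP streaming side could look like the sketch below (not my exact consumer; nvv4l2h264enc is the JetPack hardware H.264 encoder, and the host address is a placeholder):

# Sketch: scale the STREAM clone down to SONY_FSM_STREAM_W x SONY_FSM_STREAM_H and send H.264/RTP over UDP
gst-launch-1.0 v4l2src device=${SONY_FSM_DEV_STREAM} ! \
        "video/x-raw,format=YUY2,width=${SONY_FSM_CAPTURE_W},height=${SONY_FSM_CAPTURE_H},framerate=${SONY_FSM_CAPTURE_FPS}/1" ! \
        nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,width=${SONY_FSM_STREAM_W},height=${SONY_FSM_STREAM_H}" ! \
        nvv4l2h264enc insert-sps-pps=true ! rtph264pay ! \
        udpsink host=127.0.0.1 port=${SONY_FSM_UDP_STREAM_MAIN}  # replace host with the receiver address

The recording side is analogous, ending in nvv4l2h264enc ! h264parse ! qtmux ! filesink instead.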

As you can see, I convert the NV12 output to YUY2 before teeing. Now I would like a different resolution for the AI-processing tee branch. Consulting the following comment, Get two video streams of different resolutions from a single camera with NVIDIA Gstreamer - #7 by Honey_Patouceul, I see that the BGRx format is used in each tee branch before sinking:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! 'video/x-raw(memory:NVMM)' ! \
        tee name=t ! queue ! nvvidconv ! video/x-raw, format=BGRx !  identity drop-allocation=1 ! v4l2sink device=/dev/video1 \
        t. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),width=640,height=480' ! nvvidconv ! video/x-raw, format=BGRx ! identity drop-allocation=1 ! v4l2sink device=/dev/video2
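If I adapt that pattern to my variables, I believe it would look like the untested sketch below, where the tee stays in NVMM NV12 and each branch runs its own nvvidconv, so only the AI branch is scaled down:

gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=NV12, width=${SONY_FSM_CAPTURE_W}, height=${SONY_FSM_CAPTURE_H}, framerate=${SONY_FSM_CAPTURE_FPS}/1" ! \
        tee name=t ! queue ! nvvidconv ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! v4l2sink device=${SONY_FSM_DEV_STREAM} sync=false async=true \
        t. ! queue ! nvvidconv ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! v4l2sink device=${SONY_FSM_DEV_RECORD} sync=false async=true \
        t. ! queue ! nvvidconv ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! v4l2sink device=${SONY_FSM_DEV_SNAPSHOT} sync=false async=true \
        t. ! queue ! nvvidconv ! "video/x-raw,format=BGRx,width=${SONY_FSM_AIPROCESSING_W},height=${SONY_FSM_AIPROCESSING_H}" ! identity drop-allocation=1 ! v4l2sink device=${SONY_FSM_DEV_AIPROCESSING} sync=false async=true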

I would like your advice on the format conversion (before/within the tee branches) as well as on the optimal format (YUY2 or BGRx), please!

Best Regards,
Khang

Hello khang.l4es,

The BGRx format is commonly used if you're going to use VPI or DeepStream for AI processing.
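For example, a consumer can request BGRx directly from your loopback node (a quick sanity-check sketch, using the device number and resolution from your example):

gst-launch-1.0 -v v4l2src device=/dev/video41 ! \
        "video/x-raw,format=BGRx,width=1280,height=720" ! fakesink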


Hi @JerryChang,

Thanks for your reply. Since I work more on low-level hardware bring-up, could you help clarify the advantage of the BGRx format over other formats as input to VPI for AI processing, please?

Thanks and regards,
Khang

You may refer to VPI - Vision Programming Interface: Convert Image Format, and check the [Performance] section for more details.

Thanks @JerryChang
