Hi,
I was using a Jetson Nano to run multiple livestreams. What I did was use GStreamer to split the CSI camera stream into two virtual (v4l2loopback) devices: one was used by OpenCV, and the other was read by ffmpeg, which pushed an RTMP stream to my server. Here's the GStreamer command:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1' ! \
nvvidconv flip-method=0 ! \
'video/x-raw, width=1280, height=720, format=BGRx' ! \
queue ! videoconvert ! \
'video/x-raw, format=BGR' ! \
tee name=t ! \
queue ! v4l2sink device=/dev/video2 t. ! \
queue ! v4l2sink device=/dev/video3
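For context, the ffmpeg push side looked roughly like the following (this is a sketch from memory; the RTMP URL and bitrate are placeholders, not our exact values):

```shell
# Read the second loopback device and push RTMP using the
# h264_nvmpi hardware encoder (from the jetson-ffmpeg patches).
# rtmp://example.com/live/stream is a placeholder URL.
ffmpeg -f v4l2 -input_format bgr24 -framerate 30 \
       -video_size 1280x720 -i /dev/video3 \
       -c:v h264_nvmpi -b:v 4M \
       -f flv rtmp://example.com/live/stream
```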
For ffmpeg we used the h264_nvmpi encoder, but the CPU still wasn't enough to run our other programs, so we upgraded to a Jetson Orin Nano and switched the camera from CSI to USB.
Later I found out that if I run ffmpeg directly on the Jetson Orin Nano, it uses too much CPU, and h264_nvmpi was built for the Jetson Nano. How can I use hardware acceleration to avoid the heavy CPU usage when splitting the stream or pushing the RTMP stream?
Thanks