FFmpeg or gstreamer hardware acceleration on h264/h265 using USB camera

Hi,

I was using a Jetson Nano to run multiple livestreams. What I did was use GStreamer to split the CSI camera stream into two virtual devices: one was used for OpenCV, and from the other I used FFmpeg to push an RTMP stream directly to my server. Here's the GStreamer command:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1' ! \
  nvvidconv flip-method=0 ! \
  'video/x-raw, width=1280, height=720, format=BGRx' ! \
  queue ! videoconvert ! \
  'video/x-raw, format=BGR' ! \
  tee name=t ! \
  queue ! v4l2sink device=/dev/video2 t. ! \
  queue ! v4l2sink device=/dev/video3
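
For context, a pipeline like the above can only write to /dev/video2 and /dev/video3 if those nodes are virtual devices, typically created with the v4l2loopback kernel module. A sketch of that setup (the device numbers come from the pipeline above; the module options and labels are assumptions):

```shell
# Assumption: /dev/video2 and /dev/video3 are v4l2loopback virtual devices.
# Create two loopback devices at those node numbers before starting the pipeline.
sudo modprobe v4l2loopback devices=2 video_nr=2,3 exclusive_caps=1 \
    card_label="opencv_feed,rtmp_feed"

# Verify the devices were created:
v4l2-ctl --list-devices
```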

For FFmpeg we used h264_nvmpi encoding, but the CPU was still not enough to run our other programs, so we upgraded to a Jetson Orin Nano and switched the camera from CSI to USB.

Later I found out that if I use FFmpeg directly on the Jetson Orin Nano, it costs too much CPU, and h264_nvmpi was built for the Jetson Nano. How can I use hardware acceleration to avoid massive CPU usage when splitting the stream or pushing the RTMP stream?

Thanks

Hi,
Please use JetPack 6.0 GA and install the ffmpeg package:

Accelerated Decode with ffmpeg — NVIDIA Jetson Linux Developer Guide documentation

Then you can run this command to use hardware decoding:

$ ffmpeg -c:v h264_nvv4l2dec -i /home/nvidia/test.mp4 a.yuv

Hi,

I am not decoding videos; I am using FFmpeg for encoding, like: ffmpeg -f v4l2 -i /dev/video2 -c:v libx264 -f flv rtmp://HOST_IP/live/livestream

Hi,
The Orin Nano does not have a hardware encoder, so you would need to use a software encoder. High CPU usage is expected.
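
Since only software encoding is available on the Orin Nano, CPU load can usually be reduced by picking a faster libx264 preset and a low-latency tune. A sketch of such a command (the device path, resolution, frame rate, and server address are taken from this thread; the bitrate and keyframe interval are example values, not recommendations):

```shell
# Software H.264 encode tuned for lower CPU usage.
# -preset ultrafast trades compression efficiency for speed;
# -tune zerolatency disables lookahead/frame buffering for live streaming;
# -g 40 sets a 2-second keyframe interval at 20 FPS (assumed value).
ffmpeg -f v4l2 -framerate 20 -video_size 1920x1080 -i /dev/video2 \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -b:v 3M -g 40 \
    -f flv rtmp://HOST_IP/live/livestream
```

A faster preset produces a larger bitstream for the same quality, so you may need to raise the bitrate to keep the stream looking acceptable.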

Okay, thank you for the explanation, but 400% CPU (4 cores fully loaded) for 1920x1080 at 20 FPS in H.264 is too much…

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.