Does USB Camera Make Use of GPU/Hardware Encoding?

Hello All,

We have some questions about the Jetson Nano, since we are still relatively new to it.

Does a USB camera take advantage of the Jetson Nano's GPU (hardware video encoder) for HEVC/H.265 encoding? We want to make sure that the video from the USB camera is routed automatically to the Nano's hardware encoder, leaving the CPU available for our vision-processing application tasks. Can you confirm whether using a USB camera consumes a lot of CPU at the system level? What are the steps to ensure that encoding uses the hardware engine instead of the CPU?

Thanks in advance!

Cheers!

Hi,
We would suggest using tegra_multimedia_api. The 12_camera_v4l2_cuda sample is the reference for this use case. For CUDA post-processing, please check the function cuda_postprocess(); a rough sketch of that kind of per-frame CUDA step is shown below.
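To illustrate what the per-frame CUDA step looks like, here is a minimal sketch in the spirit of the sample's cuda_postprocess() hook, which applies a simple operation (drawing a box) to each captured frame. The kernel name draw_box, the packed RGBA layout, and the pre-mapped device pointer are assumptions for illustration only; the real sample maps the capture buffer through an EGLImage before launching its kernel, so please follow the sample code for the actual interop.

// Illustrative sketch only, not the sample's actual implementation.
// Assumes the frame has already been mapped to a device pointer as a
// packed 32-bit RGBA image of pitch_pixels pixels per row.
#include <cuda_runtime.h>

__global__ void draw_box(uchar4 *frame, int pitch_pixels,
                         int box_x, int box_y, int box_w, int box_h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= box_w || y >= box_h)
        return;

    // Paint only a 2-pixel-wide border of the box green.
    bool on_border = (x < 2 || x >= box_w - 2 || y < 2 || y >= box_h - 2);
    if (on_border)
        frame[(box_y + y) * pitch_pixels + (box_x + x)] = make_uchar4(0, 255, 0, 255);
}

// Host-side launch: one thread per pixel of the box region.
void run_postprocess(uchar4 *d_frame, int pitch_pixels)
{
    const int box_x = 50, box_y = 50, box_w = 200, box_h = 100;
    dim3 block(16, 16);
    dim3 grid((box_w + block.x - 1) / block.x,
              (box_h + block.y - 1) / block.y);
    draw_box<<<grid, block>>>(d_frame, pitch_pixels, box_x, box_y, box_w, box_h);
    cudaDeviceSynchronize();
}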

First, run the default sample with your V4L2 source. If it runs successfully, you can then try to apply the following patch:
https://devtalk.nvidia.com/default/topic/1062492/jetson-tx2/tegra-multimedia-samples-not-working-properly/post/5383923/#5383923
The patch adds NvVideoEncoder, so the captured frames are fed into the hardware encoder.
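For reference, here is a rough sketch of what setting up NvVideoEncoder for H.265 looks like, based on the 01_video_encode sample; the linked patch is the authoritative change. The function name create_hw_encoder, the resolution, bitrate, and buffer counts are placeholder values, and error checking is omitted for brevity.

// Sketch only: configure the hardware H.265 encoder via tegra_multimedia_api.
#include <linux/videodev2.h>
#include "NvVideoEncoder.h"

static const uint32_t WIDTH  = 1280;
static const uint32_t HEIGHT = 720;

NvVideoEncoder *create_hw_encoder()
{
    NvVideoEncoder *enc = NvVideoEncoder::createVideoEncoder("enc0");
    if (!enc)
        return NULL;

    // The capture plane (encoded bitstream) must be configured before
    // the output plane (raw YUV input) on this API.
    enc->setCapturePlaneFormat(V4L2_PIX_FMT_H265, WIDTH, HEIGHT, 2 * 1024 * 1024);
    enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, WIDTH, HEIGHT);

    enc->setBitrate(4 * 1024 * 1024);  // 4 Mbit/s, adjust as needed
    enc->setIFrameInterval(30);
    enc->setFrameRate(30, 1);

    // Allocate buffers on both planes and start streaming.
    enc->output_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
    enc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
    enc->output_plane.setStreamStatus(true);
    enc->capture_plane.setStreamStatus(true);

    return enc;
}

After this, the captured YUV frames are queued on the encoder's output plane and the encoded H.265 bitstream is dequeued from its capture plane, as shown in the patch and in 01_video_encode.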

To check system loading, you can run 'sudo tegrastats'. More information about the tool is at
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2FAppendixTegraStats.html%23wwconnect_header

One clarification: video encoding/decoding is not done on the GPU. There is an independent hardware engine for it, and you can see it reported as 'NVENC' in tegrastats while encoding is active.