I am using an Orin NX 16GB to encode five 20MP@10FPS real-time camera streams to H.265 or JPEG. I need to offload these tasks to the GPU or dedicated hardware, as the CPU is required for other tasks.
The nvv4l2h265enc element doesn’t meet my needs since the encoder hardware only supports up to 4K resolution, while my camera frames are 4502x4502.
I came across the Jetson Multimedia API and explored the samples along with the APIs. Do these already utilize hardware acceleration? If not, are there alternative methods to make this work? Additionally, are there CUDA libraries available for GPU-based software encoding?
Another approach I’m considering is tiling the camera output frames, encoding the tiles through the hardware, and then recombining them afterwards.
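To make the tiling idea concrete, here is a minimal sketch of the geometry, assuming a 4096-pixel per-side encoder limit and 16-pixel alignment (both are my assumptions for illustration; check the actual limits for your JetPack release):

```python
import math

def tile_layout(width, height, max_dim=4096, align=16):
    """Split a frame into the fewest tiles whose sides fit within max_dim.

    Tile sides are rounded up to `align` (typical codec block alignment),
    so edge tiles need padding (or a small overlap) before encoding.
    """
    cols = math.ceil(width / max_dim)          # tiles per row
    rows = math.ceil(height / max_dim)         # tiles per column
    tile_w = math.ceil(width / cols / align) * align
    tile_h = math.ceil(height / rows / align) * align
    return cols, rows, tile_w, tile_h

print(tile_layout(4502, 4502))  # → (2, 2, 2256, 2256)
```

So each 4502x4502 frame becomes a 2x2 grid of 2256x2256 tiles, i.e. 4 encoder sessions per camera and 20 across all five cameras, which is worth checking against the encoder's concurrent-session limits.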
The camera dimensions cannot be adjusted; they must stay 4502x4502, and all 5 cameras must be in the pipeline.
For the H.265 HW-accelerated encoder, I am aware that this use case is not possible on Jetson platforms at this resolution. That’s why I am thinking of a way to offload the encoding to the GPU instead. I am also open to a hybrid approach (SW and HW encoding combined) if that is feasible.
For the NVJPEG HW blocks, the datasheet states a maximum input resolution of 16K x 16K, which suggests that the output of 3 cameras could easily be encoded by just one block (Orin has 2). Am I missing something here?
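A quick sanity check of that reasoning, using the 16384-pixel datasheet limit and the stream parameters above (note this only checks geometry and aggregate pixel rate; whether NVJPEG actually sustains this rate depends on clocks and memory bandwidth, which the resolution limit alone does not guarantee):

```python
# Three 4502-wide frames packed side by side inside one NVJPEG
# input surface: 3 * 4502 = 13506 <= 16384, so they fit.
MAX_DIM = 16384
FRAME = 4502
frames_per_row = MAX_DIM // FRAME
print(frames_per_row)  # → 3

# Aggregate pixel rate the two NVJPEG blocks would have to sustain:
cams, fps = 5, 10
pixels_per_sec = cams * FRAME * FRAME * fps
print(f"{pixels_per_sec / 1e9:.2f} Gpix/s")  # → 1.01 Gpix/s
```

So the resolution limit is not the bottleneck; the open question is the blocks' sustained throughput at roughly 1 Gpix/s total.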
Hi,
All samples under /usr/src/jetson_multimedia_api demonstrate hardware acceleration. If you would like to profile JPEG encoding, you can try the 05 sample: