Hello,
I am using an Orin NX 16GB to encode five 20MP@10FPS real-time camera streams to H.265 or JPEG. I need to offload these encoding tasks to the GPU or dedicated hardware blocks, as the CPU is required for other tasks.
The nvv4l2h265enc element doesn't meet my needs, since the hardware encoder only supports up to 4K resolution while my camera frames are 4502x4502.
I came across the Jetson Multimedia API and explored the samples along with the APIs. Does it already use hardware acceleration? If so, are there alternative methods to make this resolution work? Additionally, are there CUDA libraries available for GPU-based software encoding?
Another approach I'm considering is tiling the camera output frames, encoding the tiles through hardware acceleration, and then stitching them back together afterward.
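Roughly what I mean by tiling, as a minimal sketch (the 4096-per-dimension limit and the even-size padding are my assumptions and still need to be checked against the actual encoder constraints):

```cpp
// tile_layout.cpp -- sketch of splitting one 4502x4502 frame into encoder-sized tiles.
// Assumptions (mine, not from NVIDIA docs): one encode session accepts at most
// 4096 pixels per dimension, and tile sizes should be even for 4:2:0 chroma.
#include <cstdio>
#include <vector>

struct Tile { int x, y, w, h; };

// Smallest grid whose tiles all fit within max_dim per dimension.
static std::vector<Tile> make_tiles(int frame_w, int frame_h, int max_dim)
{
    const int cols = (frame_w + max_dim - 1) / max_dim;   // ceil division
    const int rows = (frame_h + max_dim - 1) / max_dim;
    std::vector<Tile> tiles;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            Tile t;
            t.x = c * (frame_w / cols);
            t.y = r * (frame_h / rows);
            t.w = (c == cols - 1) ? frame_w - t.x : frame_w / cols;
            t.h = (r == rows - 1) ? frame_h - t.y : frame_h / rows;
            // Pad odd sizes to even; edge tiles padded this way run one pixel
            // past the frame and would need a replicated border row/column.
            t.w += t.w % 2;
            t.h += t.h % 2;
            tiles.push_back(t);
        }
    }
    return tiles;
}

int main()
{
    // 4502x4502 ends up as a 2x2 grid of ~2252x2252 tiles, each well under 4096.
    for (const Tile &t : make_tiles(4502, 4502, 4096))
        std::printf("tile at (%d,%d), %dx%d\n", t.x, t.y, t.w, t.h);
    return 0;
}
```

Each tile would become its own encode session (5 cameras x 4 tiles = 20 sessions at 10 FPS), and the receiving side would have to crop the padding and stitch the tiles back together; whether the hardware encoder can sustain that many sessions is a separate question.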
Any assistance would be greatly appreciated.
Hi,
This use case exceeds the capability of the Orin NX, so we would suggest reframing the use case according to the module data sheet:
Jetson Download Center | NVIDIA Developer
As of now, all Jetson platforms support encoding up to 4K. 4502x4502 exceeds 4K, so the resolution would need to be adjusted.
Thank you for your fast reply!
The camera dimensions cannot be adjusted; they need to be 4502x4502, and all five cameras must be in the pipeline.
-
For the H.265 HW-accelerated encoder, I am aware that this use case is not possible on Jetson platforms. That's why I am looking for a way to offload the encoding to the GPU. I am also open to a hybrid approach (SW & HW encoding) if that is possible.
-
For the NVJPEG HW blocks, the data sheet states that the maximum input resolution is 16K x 16K, which suggests that the output of three cameras could easily be encoded by just one block (Orin has two). Am I missing something here?
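To spell out my arithmetic (the part I cannot check from the resolution limit alone is the sustained throughput per block, so treat that as the open question):

```cpp
// Back-of-the-envelope pixel-rate check for the NVJPEG path. The 16K x 16K figure
// only bounds the input resolution; the sustained MP/s per block is not derived
// here and has to come from the data sheet or a measurement.
#include <cstdio>

int main()
{
    const double w = 4502.0, h = 4502.0, fps = 10.0;
    const double mp_per_frame = w * h / 1e6;           // ~20.3 MP per frame
    const double mp_per_cam   = mp_per_frame * fps;    // ~202.7 MP/s per camera
    std::printf("per frame: %.1f MP, per camera: %.1f MP/s\n", mp_per_frame, mp_per_cam);
    std::printf("3 cameras on one block: %.1f MP/s, all 5 cameras: %.1f MP/s\n",
                3.0 * mp_per_cam, 5.0 * mp_per_cam);
    return 0;
}
```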
-
Do the encoding APIs in the Multimedia API use the HW acceleration blocks? (Jetson Linux API Reference: 01_video_encode (video encode) | NVIDIA Docs)
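From my reading of the 01_video_encode sample, the setup looks roughly like this (my paraphrase, names from memory, so please correct me if I misread it); I assume this path goes to the HW encoder and therefore hits the same 4K limit:

```cpp
// My reading of the 01_video_encode setup path (paraphrased and trimmed from the
// sample; please check against the sample source shipped with your JetPack).
#include "NvVideoEncoder.h"
#include "v4l2_nv_extensions.h"   // defines V4L2_PIX_FMT_H265 on Jetson
#include <linux/videodev2.h>

int main()
{
    // As far as I understand, createVideoEncoder() opens the NVENC V4L2 device,
    // so everything going through this class runs on the HW encoder, not the CPU.
    NvVideoEncoder *enc = NvVideoEncoder::createVideoEncoder("enc0");
    if (!enc)
        return 1;

    // Capture plane = compressed bitstream, output plane = raw YUV input.
    // Presumably this is also where the resolution limit bites: a 4502x4502
    // format would be rejected, so the example uses 3840x2160.
    enc->setCapturePlaneFormat(V4L2_PIX_FMT_H265, 3840, 2160, 2 * 1024 * 1024);
    enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, 3840, 2160);
    enc->setBitrate(20 * 1000 * 1000);

    // ...buffer setup and the queue/dequeue loop as in the sample...

    delete enc;
    return 0;
}
```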
-
Is it possible to do the encoding using CUDA?
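For the JPEG part, for example, I am wondering about something like the CUDA toolkit's nvJPEG library (assuming it is available and fast enough on Orin NX under JetPack, which I have not verified); for H.265 I am not aware of any CUDA-based NVIDIA software encoder. A minimal sketch of what I mean:

```cpp
// Rough sketch of GPU JPEG encoding with the CUDA toolkit's nvJPEG library.
// Availability and performance on Jetson / Orin NX are assumptions I have not
// verified; error checking is omitted for brevity.
#include <cuda_runtime.h>
#include <nvjpeg.h>
#include <vector>
#include <cstdio>

int main()
{
    const int width = 4502, height = 4502;

    nvjpegHandle_t handle;
    nvjpegEncoderState_t state;
    nvjpegEncoderParams_t params;
    cudaStream_t stream = 0;

    nvjpegCreateSimple(&handle);
    nvjpegEncoderStateCreate(handle, &state, stream);
    nvjpegEncoderParamsCreate(handle, &params, stream);
    nvjpegEncoderParamsSetQuality(params, 90, stream);
    nvjpegEncoderParamsSetSamplingFactors(params, NVJPEG_CSS_420, stream);

    // Interleaved RGB frame already resident in device memory (getting the
    // camera frame there is omitted here).
    nvjpegImage_t src = {};
    cudaMalloc(reinterpret_cast<void **>(&src.channel[0]), size_t(width) * height * 3);
    src.pitch[0] = size_t(width) * 3;

    nvjpegEncodeImage(handle, state, params, &src, NVJPEG_INPUT_RGBI,
                      width, height, stream);

    // Two-step retrieval: query the size, then copy the bitstream to host memory.
    size_t length = 0;
    nvjpegEncodeRetrieveBitstream(handle, state, nullptr, &length, stream);
    std::vector<unsigned char> jpeg(length);
    nvjpegEncodeRetrieveBitstream(handle, state, jpeg.data(), &length, stream);
    cudaStreamSynchronize(stream);
    std::printf("encoded %zu bytes\n", length);

    cudaFree(src.channel[0]);
    nvjpegEncoderParamsDestroy(params);
    nvjpegEncoderStateDestroy(state);
    nvjpegDestroy(handle);
    return 0;
}
```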
Thanks for your help. I know there is no straightforward solution, but I need to find one that works on the Orin NX.