Hi,
I am planning to order a Jetson Orin Nano. I originally planned to stream the output of two Arducam cameras to the cloud and run the computations there (honestly nothing too major), but I see that it has no NVENC support.
I've been thinking of maybe doing the edge computing on the device itself instead, seeing that it's quite powerful (object detection, a few Kalman filters, etc.).
My main question: even though it doesn't have a hardware video encoder, when I capture the two cameras' output at 1080p 30 fps, can I feed it directly to the GPU, or does it have to go through encoding?
That is a great question, and a very smart one to be asking in your situation.
If you are planning on running inference on the board, the camera frames do not need to go through any encoding before you feed them to your AI models.
The NVIDIA Jetson capture subsystem lets you capture camera buffers straight into GPU memory, already in a format ingestible by a neural network. This will of course depend on which network you plan to use and its input layer requirements, but most networks expect RGB, BGR, or a similar format.
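To make that concrete, here is a minimal sketch of a capture pipeline that keeps every frame in NVMM (GPU/DMA) memory with no encoder anywhere in the path. It assumes a CSI camera driven by nvarguscamerasrc, and the sensor-id, resolution, and framerate values are placeholders for your setup (a USB Arducam would go through v4l2src instead):

```python
#!/usr/bin/env python3
# Minimal sketch: capture straight into NVMM (GPU/DMA) buffers, no encoding.
# Assumes a CSI camera handled by nvarguscamerasrc; sensor-id, resolution,
# and framerate are placeholders for your specific Arducam setup.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
    # nvvidconv does the colorspace conversion on the GPU; RGBA is a common
    # hand-off format for inference elements
    "nvvidconv ! video/x-raw(memory:NVMM),format=RGBA ! "
    "fakesink"  # placeholder: swap in your inference element or an appsink
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until an error or end-of-stream message arrives
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

Because every element negotiates memory:NVMM caps, the frames never leave device memory, so there is no encode/decode stage anywhere between the sensor and the GPU.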
In our experience, what we do for customers running AI models on the edge is build a GStreamer media pipeline with minimal memory copies to avoid overhead, and then use DeepStream or NNStreamer to run the inference.
This gives you a streamlined data-processing pipeline that you can easily interface with your applications later.
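As a rough illustration of that approach (a sketch, not a drop-in solution), the pipeline below batches both cameras with nvstreammux and runs a detector through DeepStream's nvinfer element. Here model_config.txt is a hypothetical nvinfer configuration file for whatever detection model you pick, and the fakesink stands in for your application logic:

```python
#!/usr/bin/env python3
# Sketch of a two-camera DeepStream pipeline with minimal memory copies.
# Assumes the DeepStream SDK is installed; "model_config.txt" is a
# hypothetical nvinfer config file pointing at your detection model.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    # Batch both camera streams into a single buffer for the inference engine
    "nvstreammux name=mux batch-size=2 width=1920 height=1080 ! "
    "nvinfer config-file-path=model_config.txt ! "
    # Draw detection boxes on the frames; fakesink is a placeholder for your app
    "nvvideoconvert ! nvdsosd ! fakesink "
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! mux.sink_0 "
    "nvarguscamerasrc sensor-id=1 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! mux.sink_1"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

From there you can attach a pad probe (or use an appsink) to pull the detection metadata into your own code, for example to feed your Kalman filters.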
Please do not hesitate to reach out if you need further help with your project; we would love to help.
Best regards,
Andrew
Embedded Software Engineer at ProventusNova