Augmented Reality on Jetson Orin Nano

Hello Nvidia Community,

I am a bit lost in the range of NVIDIA software and technologies on offer, and I would appreciate some help.

I have a project where I need to display on screen, in real time (<150 ms), the image from a camera connected over RTSP to my Jetson Orin Nano. Today I achieve about 100 ms glass-to-glass latency streaming the video with DeepStream. I chose DeepStream for its low latency, and because I will need to run object detection in the future.
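For reference, a minimal low-latency RTSP decode-and-display pipeline on Jetson can be built with gst_parse_launch(). This is a hedged sketch assuming the JetPack 5.x multimedia stack (nvv4l2decoder, nvegltransform, nveglglessink) and an H.264 camera stream; the rtsp:// URL is a placeholder for your camera:

```cpp
// Hedged sketch: low-latency RTSP decode + display on Jetson.
// Assumes JetPack 5.x GStreamer elements; rtsp://<camera-ip>/stream
// is a placeholder. Build with: g++ $(pkg-config --cflags --libs gstreamer-1.0)
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GError *err = nullptr;
  // latency=0 on rtspsrc and sync=false on the sink trade jitter
  // resilience for minimal glass-to-glass delay.
  GstElement *pipeline = gst_parse_launch(
      "rtspsrc location=rtsp://<camera-ip>/stream latency=0 ! "
      "rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! "
      "nvegltransform ! nveglglessink sync=false",
      &err);
  if (!pipeline) {
    g_printerr("Failed to build pipeline: %s\n", err->message);
    g_clear_error(&err);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // Block until an error or end-of-stream is posted on the bus.
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg)
    gst_message_unref(msg);

  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```

The same pipeline string can be tried from the command line with gst-launch-1.0 before committing it to code.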

But in order to evaluate the right Jetson for my use case, I have to experiment with 3D on it. The aim is to overlay augmented reality (e.g. polygon objects) on my live video.

I didn’t find any tools from NVIDIA to do this on NVIDIA embedded devices; did I miss something? Also, is there a way to integrate the augmented reality into the DeepStream pipeline, as is done with detection?

I started with OpenGL to test the device's performance. Can I integrate it into my DeepStream pipeline?

Thank you very much for your help

For information, are you using the Jetson Orin Nano with JetPack 5.1.1 and DeepStream SDK 6.2? We would like to confirm which release you are using.

I am using JetPack 5.0.2 with DeepStream SDK 6.1.

A possible solution would be like this patch:
How to create opencv gpumat from nvstream? - #18 by DaneLLL

You can call NvBufSurfaceMapEglImage() to get an EGLImage, and then use CUDA APIs to process the buffer.
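The suggested approach can be sketched roughly as follows. This is a hedged example assuming a valid NvBufSurface pointer obtained from the DeepStream buffer and a current CUDA context (DeepStream plugins run with one); it uses the CUDA driver API's EGL interop to expose the hardware buffer to a CUDA kernel:

```cpp
// Hedged sketch: map a DeepStream hardware buffer to an EGLImage and
// hand it to CUDA for in-place processing (e.g. drawing AR overlays).
// `surf` is assumed to come from the DeepStream pipeline; process_in_cuda
// is a hypothetical helper name for illustration.
#include <cuda.h>
#include <cudaEGL.h>
#include <nvbufsurface.h>

void process_in_cuda(NvBufSurface *surf) {
  // 1. Map buffer index 0 of the surface to an EGLImage.
  if (NvBufSurfaceMapEglImage(surf, 0) != 0)
    return;
  EGLImageKHR egl_image = surf->surfaceList[0].mappedAddr.eglImage;

  // 2. Register the EGLImage with the CUDA driver API.
  CUgraphicsResource resource = nullptr;
  if (cuGraphicsEGLRegisterImage(&resource, egl_image,
                                 CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE)
      == CUDA_SUCCESS) {
    CUeglFrame frame;
    if (cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0)
        == CUDA_SUCCESS) {
      // frame.frame.pPitch[0] is a device pointer to plane 0 of the
      // image: launch a CUDA kernel here to read/write pixels in place.
    }
    cuGraphicsUnregisterResource(resource);
  }

  // 3. Release the EGL mapping when done.
  NvBufSurfaceUnMapEglImage(surf, 0);
}
```

A natural place to call such a function is from a pad probe on a pipeline element downstream of the decoder, where the GstBuffer carries the NvBufSurface.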

How can I use the DeepStream SDK to perform streaming and detection with this method?

We would need more information about your use case. Please check the pipeline of deepstream-app:
DeepStream Reference Application - deepstream-app — DeepStream 6.2 Release documentation

This is the general pipeline for running deep learning inference. Please check it and share with us the pipeline you would like to run in your use case.
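As a starting point, a deepstream-app configuration for an RTSP source with primary inference and on-screen display might look roughly like this. This is a hedged fragment in the DeepStream 6.x config style; the RTSP URL and the inference config file path are placeholders for your setup:

```ini
# Hedged sketch of a minimal deepstream-app config fragment.
# rtsp://<camera-ip>/stream and config_infer_primary.txt are placeholders.
[source0]
enable=1
type=4            # 4 = RTSP source
uri=rtsp://<camera-ip>/stream
latency=0

[streammux]
batch-size=1
width=1920
height=1080
live-source=1     # use arrival timestamps for a live feed

[primary-gie]
enable=1
config-file=config_infer_primary.txt

[osd]
enable=1          # draws bounding boxes and labels on the frame

[sink0]
enable=1
type=2            # 2 = EGL sink (on-screen rendering on Jetson)
sync=0            # do not throttle to the clock; lowest latency
```

Running `deepstream-app -c <config-file>` with such a file exercises the same decode, inference, and render stages shown in the referenced pipeline diagram.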
