Hello NVIDIA Community,
I am a bit lost among all the NVIDIA software and technology offerings, and I could use some help.
I have a project where I need to display the image from an RTSP camera on screen on my Jetson Orin Nano in real time (<150 ms). Today I achieve about 100 ms glass-to-glass latency streaming the video with DeepStream. I chose DeepStream because of its low latency, and because in the future I will need to run object detection.
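For reference, here is roughly what my current pipeline looks like (a minimal sketch; the RTSP URL is a placeholder for my camera, and the exact element choices such as nv3dsink may differ on your JetPack version):

```python
#!/usr/bin/env python3
# Minimal sketch of my current low-latency RTSP -> display pipeline.
# The RTSP URL is a placeholder; "latency=0" on rtspsrc and "sync=false"
# on the sink are the knobs I use to keep glass-to-glass latency low.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.1.10:554/stream latency=0 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! "
    "nvvideoconvert ! nv3dsink sync=false"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```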
But in order to evaluate the right Jetson for my use case, I have to experiment with 3D rendering on it. The aim is to overlay augmented-reality content (e.g. polygon objects) on the live video.
I didn't find any NVIDIA tools to do this on NVIDIA embedded devices; did I miss something? Also, is there a way to integrate the augmented-reality overlay into the DeepStream pipeline, the same way detection output is integrated? The closest I found is drawing 2D shapes through nvdsosd display metadata, as in the sketch below, but that is not real 3D rendering.
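A minimal sketch of what I mean (assuming the DeepStream Python bindings, pyds, and an nvdsosd element in the pipeline; the triangle coordinates are arbitrary):

```python
# Sketch: drawing 2D overlay shapes via nvdsosd display metadata from a
# buffer probe (assumes the DeepStream Python bindings, pyds).
# This gives 2D lines on the video, not the 3D polygons I am after.
import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Acquire display meta and draw a triangle as three line segments.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        points = [(200, 200), (400, 200), (300, 80), (200, 200)]
        display_meta.num_lines = len(points) - 1
        for i in range(display_meta.num_lines):
            line = display_meta.line_params[i]
            line.x1, line.y1 = points[i]
            line.x2, line.y2 = points[i + 1]
            line.line_width = 4
            line.line_color.set(0.0, 1.0, 0.0, 1.0)  # opaque green
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached to the nvdsosd sink pad, e.g.:
# osd.get_static_pad("sink").add_probe(
#     Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```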
I started with OpenGL to test the device's performance; can I integrate it into my DeepStream pipeline?
Thank you very much for your help