In the past I have successfully deployed a DS project running on a Jetson Nano and doing object detection with up to three USB cameras in parallel.
For a new project this is the requirement:
The video is delivered via WebRTC to a Kurento Media Server in the cloud, preferably, but not necessarily, on AWS (Ubuntu-based, x86_64).
In the Media Server there is an “OpenCV” plugin (C++) that provides a way to intercept the incoming video frame by frame, so that each RGB frame can be processed as a cv::Mat.
I’m now wondering whether I could insert some “AI” code utilizing the DeepStream SDK to process each frame and do the same as I did before with the edge app, of course also with GPU support in the cloud.
Would that generally be possible? If so, could you give me some pointers to get started?
DeepStream is optimized for NVIDIA GPUs; the application can be deployed on an embedded edge device running the Jetson platform, or on larger edge or datacenter GPUs such as the T4. DeepStream provides a unified plugin and API for both platforms.
Thanks. But I suppose there is something to be installed additionally?
I mean, with a Jetson Nano I simply flashed your SD image. What would I have to install on the cloud instance? At least the DeepStream SDK, I assume? And is there documentation on this?