A general question regarding DS and cloud computing


In the past I have successfully deployed a DS project running on a Jetson Nano and doing object detection with up to three USB cameras in parallel.

For a new project this is the requirement:

  1. The video is delivered via WebRTC to a Kurento Media Server in the cloud, preferably but not necessarily AWS (Ubuntu based, x86_64)
  2. In the Media Server there is an “OpenCV” plugin (C++) which provides a way to intercept the incoming video, RGB frame by RGB frame, so that each frame can be processed as a cv::Mat.
  3. I’m now wondering whether I could insert some “AI” code utilizing the DeepStream SDK to process each frame, doing the same as I did before with the edge app, of course also with GPU support in the cloud.

Would that generally be possible? If yes, could you give me some pointers to get started?


Hi @foreverneilyoung ,
Yes, I think it’s feasible!

DeepStream is optimized for NVIDIA GPUs; the application can be deployed on an embedded edge device running the Jetson platform, or on larger edge or data-center GPUs such as the T4. DeepStream provides unified plugins and APIs for both platforms.


Thanks. Good news. Is there sample code/documentation for the T4 use case?

As said above, DeepStream provides unified plugins and APIs for both platforms, so basically you could run the same DeepStream code on a T4.

Thanks. But I suppose there is something additional to install?

I mean, with a Jetson Nano I simply flashed your SD card image. What would I have to install on the cloud instance? At least the DeepStream SDK, I suppose? Is there any information on this?

Oh… got it, please check the Quickstart Guide — DeepStream 5.1 Release documentation

