Help a noob in the Nano universe (Keywords: Deepstream, RTSP, Win10).

Hi guys.

Coming from the Arduino / RPi universe, it's time for me to explore the Nano with a Raspberry Pi v2 camera, combined perhaps with Deepstream.

While I’m still doing all I can to research how to get started from scratch, I hope some of you can help me a bit in the right direction, or at least share some good advice on what to look at.

So here’s the case I want to build:

Raspberry Pi v2 camera attached to the Nano. I'm hoping I can figure out how to start playing around with Deepstream, but what's most interesting to me is how to work with Deepstream from a Windows 10 computer on the same network. The AI part is the icing on the cake; first I want to at least get a live stream from the camera to a Win10 machine on the same network.

I have some experience with the Raspberry Pi 4 and RTSP streaming to VLC player on a Win10 machine, but this is quite different territory for me so far.

So something I have to learn about is GStreamer, and perhaps dive into Deepstream? Or how would you recommend getting started from scratch? How much of a headache am I facing doing the streaming to a Win10 computer, compared to just watching the Deepstream output on the Nano itself?

While I'm hoping some of you can share some great advice, I'll keep reading the wonderful step-by-step guides I found.

Thanks guys!

You should be able to do just about anything with a video stream using Nvidia’s gstreamer components, but if you’re not yet familiar with gstreamer it could be a steep-ish learning curve. I’d recommend starting with the tutorials. The tutorials don’t use the accelerated components, but give you an idea of how gstreamer works and how to make your own pipeline. Even if you don’t know any C, you might consider retyping the examples, since gstreamer looks similar in most languages, and it never hurts to learn.

Note that you don't have to write your gstreamer code in pure C. You can use Python, C++, or Vala/Genie, among others, or you can write static pipelines (vs. dynamic, which the tutorial explains) as an easy-to-read string, which you can test with gst-launch (the tutorial covers this as well). Some common accelerated examples (playing, encoding) can be found in the Accelerated GStreamer User Guide. Basically you have “source ! element ! sink”, and you can swap out the sources, sinks, and elements in between to fit your needs. A source could be a camera, a file, or a network stream; an element could do scaling, decoding, or converting; and a sink could be a file, a network stream, or another application.
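To make that concrete for your use case (Pi v2 camera on the Nano, viewed on the Win10 box), here's a rough sketch of a sender/receiver pair. Treat the IP address, port, resolution, and encoder settings as assumptions to adjust for your setup; nvarguscamerasrc and nvv4l2h264enc are the Jetson-accelerated camera and H.264 encoder elements on recent JetPack releases.

```shell
# On the Nano: capture from the Pi v2 (CSI) camera, encode to H.264 with the
# hardware encoder, and send it as RTP over UDP to the Windows machine.
# Replace 192.168.1.50 with your Win10 machine's IP address.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
  nvv4l2h264enc insert-sps-pps=true ! h264parse ! \
  rtph264pay config-interval=1 pt=96 ! \
  udpsink host=192.168.1.50 port=5000

# On the Win10 machine (with GStreamer for Windows installed): receive,
# depayload, decode, and display the stream.
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
  rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
```

One caveat: plain RTP over UDP like this won't open directly in VLC (VLC wants an SDP file or a real rtsp:// URL). If you specifically want RTSP like you had on the Pi 4, you can wrap a similar pipeline in the test-launch example from the gst-rtsp-server project.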

Once you have an idea of things, you can “sudo apt install deepstream-4.0” on your Nano and check out the more advanced examples in /opt/nvidia/deepstream/deepstream-4.0/.
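Once it's installed, the quickest way to see it do something is to run the bundled reference app against one of its sample configs. The exact config file names vary by DeepStream release, so treat the one below as an assumption (it's a Nano-sized sample shipped with DeepStream 4.0):

```shell
# Install DeepStream and run a bundled reference pipeline on the Nano.
sudo apt install deepstream-4.0
cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
# Nano-sized sample config: decodes several streams, runs a ResNet detector,
# tracks objects, and shows the results in a tiled display.
deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
```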

Those examples are more oriented towards doing inference on video streams. If you want, for example, to monitor camera feeds for people/places/things, there is some good stuff there.