Stream from camera to VLC over RTSP

Hi,

I just got the Jetson Nano B01. I was able to stream a video from the Jetson Nano to VLC over the network using the DeepStream Python app.

How do I modify the app to get it to stream live feeds from the CSI camera on the Jetson Nano to VLC over the network using RTSP? Emphasis on using the camera rather than a video file.

What modifications do I need to make to the script?

Thanks

PS: This is the script I used: deepstream-test1-rtsp-out.py

Hi,
In deepstream-test1-rtsp-out.py, the source is

filesrc ! h264parse ! nvv4l2decoder ! nvstreammux ! ...

You need to customize it to run

nvarguscamerasrc bufapi-version=1 ! nvvideoconvert ! nvstreammux ! ...
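In the Python app, that swap means replacing the filesrc/h264parse/nvv4l2decoder elements with a camera source feeding nvstreammux. A minimal sketch, assuming the element names from the pipeline above; the resolution and frame rate are example values, and in deepstream-test1-rtsp-out.py itself you would create these elements with Gst.ElementFactory.make() and link them to the existing streammux:

```python
# Sketch: build the camera half of the pipeline as a gst-launch-style
# description string. Width/height/fps are example values; the caps
# filter keeps buffers in NVMM memory for nvstreammux.
def camera_source_desc(sensor_id=0, width=1280, height=720, fps=30):
    caps = (f"video/x-raw(memory:NVMM),format=NV12,"
            f"width={width},height={height},framerate={fps}/1")
    return (f"nvarguscamerasrc sensor-id={sensor_id} bufapi-version=1 ! "
            f"{caps} ! nvvideoconvert")

print(camera_source_desc())
```

A string like this can be handed to Gst.parse_launch for a quick test before porting the element creation into the sample script.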

Okay, but I’m using the Nano B01 with two camera connectors, and I have just one camera installed.
How do I specify just the one?

Hi,
You can set sensor-id:

  sensor-id           : Set the id of camera sensor to use. Default 0.
                        flags: readable, writable
                        Integer. Range: 0 - 255 Default: 0

Usually it is 0 for a single camera; you may run a gst-launch command to make sure the camera is working:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 bufapi-version=1 ! nvoverlaysink
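In Python, the same value can be built into a launch string; a hypothetical helper that enforces the 0–255 range from the property listing above (the helper name is mine, not part of the sample):

```python
# Hypothetical helper: format the sensor-id argument, enforcing the
# documented 0-255 range before it goes into a launch string.
def sensor_id_arg(sensor_id):
    if not 0 <= sensor_id <= 255:
        raise ValueError("sensor-id must be in the range 0-255")
    return f"sensor-id={sensor_id}"

print("gst-launch-1.0 nvarguscamerasrc "
      + sensor_id_arg(0)
      + " bufapi-version=1 ! nvoverlaysink")
```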

Thanks. Let me explain our design flow so you can suggest how to execute it. We want to stream security feeds from an IP camera to the NVIDIA Jetson running a PyTorch-based face detection model, and also stream the live feeds to a mobile application.

I’m the team lead but really need assistance here.

What is the best approach to execute this, please?

Thanks

Hi,
If your source is an IP camera, you should probably check deepstream-test3, which uses uridecodebin. Generally, IP cameras are treated as RTSP sources.
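A minimal sketch of the deepstream-test3 idea, assuming the camera exposes a standard RTSP URL (the address and path below are placeholders): uridecodebin takes the URI directly and auto-plugs the right depayloader and decoder; in the real app the URI is set as a property and the decoded pad is linked to nvstreammux in a pad-added callback.

```python
# Sketch: description string for an IP-camera source; the URL below is
# a placeholder for your camera's actual RTSP address.
def ip_camera_source_desc(uri):
    if not uri.startswith("rtsp://"):
        raise ValueError("expected an rtsp:// URI for an IP camera")
    return f"uridecodebin uri={uri} ! nvvideoconvert"

print(ip_camera_source_desc("rtsp://192.168.1.10:554/stream1"))
```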

In the DeepStream SDK, deep learning inference runs through nvinfer with TensorRT models. You may consider converting your model to be executable on TensorRT.

How do I train our model using PyTorch for face detection and recognition, for a private project which I can deploy to DeepStream?

I don’t know the specifics of how to accomplish this in order for it to work on the NVIDIA Jetson using DeepStream. As I understand it, Jetson runs a bit differently.

If there is a way, I hope I can do this with AWS SageMaker.

Hi,

1.
If you don’t have a dependency on PyTorch, it’s recommended to train a detector/classifier with our Transfer Learning Toolkit:

Once the model is ready, you can follow the sample below to run inference with DeepStream:

2.
We also support models trained in other frameworks.
For PyTorch users, please convert the model into a TensorRT engine first.

After that, you can update the engine path in the configuration file and run inference with the DeepStream API directly.
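For orientation, the engine path typically lives in the nvinfer configuration file used by the sample; a minimal fragment, where the file path below is a placeholder for your converted model:

```ini
[property]
model-engine-file=/opt/models/face_detector.engine
batch-size=1
```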

Thanks.

Awesome, one other question. I’m creating a mobile application to stream these post-processed live feeds to, but it can’t do it over the internet; it only streams on the same local network. Is there a workaround for this?

Hi,

It is more about network settings. If the device has a public IP address, you should be able to construct a public RTSP stream like

rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen02.stream

We usually run in a LAN and do not have experience constructing a public server. You might ask on the GStreamer forum.

Awesome. How do I create a port forward? I have a couple of team members in different countries connecting to the Jetson for development.

Hi,

Most companies work remotely in this mode:
https://www.softether.org/4-docs/2-howto/1.VPN_for_On-premise/2.Remote_Access_VPN_to_LAN
With that, the Jetson Nano stays online in the LAN, and members can log in to the LAN through the VPN and access the device.

Let me rephrase the question. We want to stream a live video feed from the Jetson Nano to our mobile application, but we are required to set up port forwarding, using a Waveshare SIM7000X HAT on the Jetson Nano. How do we do this?

Also, what’s the best pipeline to send a picture notification, stored on the Jetson Nano’s disk, to the mobile app?

Hi,
General RTSP cases are shared in

FYR.