Using an IP camera over RTSP with Jetson Nano.

Can you give me some tips on how to create a GStreamer pipeline to receive an RTSP stream?
How can I change gstCamera.cpp to receive the stream correctly?
Please give me some tips for my stream source.

import jetson.utils
import argparse

# parse the command line
#parser = argparse.ArgumentParser()

#parser.add_argument("--width", type=int, default=1280, help="desired width of camera stream (default is 1280 pixels)")
#parser.add_argument("--height", type=int, default=720, help="desired height of camera stream (default is 720 pixels)")
#parser.add_argument("--camera", type=str, default="0", help="index of the MIPI CSI camera to use (NULL for CSI camera 0), or for V4L2 cameras the /dev/video node to use (e.g. /dev/video0).  By default, MIPI CSI camera 0 will be used.")

#opt = parser.parse_args()

# create display window
display = jetson.utils.glDisplay()

# create camera device
camera = jetson.utils.gstCamera(640,480,"rtspsrc location=rtsp:// latency=0 ! queue ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")

# open the camera for streaming

# capture frames until user exits
while display.IsOpen():
	image, width, height = camera.CaptureRGBA()
	display.RenderOnce(image, width, height)
	display.SetTitle("{:s} | {:d}x{:d} | {:.0f} FPS".format("Camera Viewer", width, height, display.GetFPS()))
# close the camera

You may try one of these:

1. Install v4l2loopback (check out v0.10.0).
Then use GStreamer to read from rtspsrc, decode, and feed your virtual camera (assuming here it is created as /dev/video1):

gst-launch-1.0 rtspsrc location=rtsp:// latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! identity drop-allocation=true ! v4l2sink device=/dev/video1

Then access it as a V4L2 camera.


2. [EDIT June 2020: @dusty_nv has recently added support for other video sources such as RTSP, so this dirty patch is obsolete.]

Modify jetson-utils. @Dusty_nv mentioned 2 PRs. You may have a look at these.
Alternatively, I did a quick and dirty patch for utils/camera (attached) that tries the passed string as a user GStreamer pipeline instead of failing when it does not have the expected camera format.
Then it is possible to use:

camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp:// latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480")


camera = jetson.utils.gstCamera(640, 480, "filesrc location=/opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480")
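The two calls above differ only in the source half of the pipeline; everything from h264parse onward is the same NVDEC decode chain. As a sketch, a small helper (hypothetical, not part of jetson-utils) can make that explicit:

```python
def make_pipeline(source, width=640, height=480):
    """Append the NVDEC decode chain and BGRx conversion to a
    source-specific pipeline front (e.g. an rtspsrc or filesrc stage)."""
    return ("{src} ! h264parse ! nvv4l2decoder ! nvvidconv ! "
            "video/x-raw, format=BGRx, width={w}, height={h}"
            ).format(src=source, w=width, h=height)

# RTSP source (the URL is a placeholder, as in the posts above):
rtsp = make_pipeline("rtspsrc location=rtsp:// latency=0 ! rtph264depay")
# file source:
mp4 = make_pipeline("filesrc location=sample.mp4 ! qtdemux")

# on the Jetson, the resulting string is passed straight to gstCamera:
# camera = jetson.utils.gstCamera(640, 480, rtsp)
print(rtsp)
print(mp4)
```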

Side note: if your IP camera has a high resolution/framerate, you might have to increase the kernel socket buffer max size on the receiver (Jetson) side:

sudo sysctl -w net.core.rmem_max=26214400
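The sysctl command above only applies until the next reboot. To make the setting persistent, the same line can be added to /etc/sysctl.conf (standard Linux sysctl behaviour, not Jetson-specific):

```
# /etc/sysctl.conf
net.core.rmem_max=26214400
```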

camera.patch-UserPipelineColorsCorrected.txt (2.05 KB)


Hi Honey.

I have tried your second suggestion, the quick-and-dirty utils/camera patch.
I've attached the files after the change. The result is the same as before the change.
I'm rebooting the Nano and will try your gstCamera configuration.
How can I create the virtual camera /dev/video1?

gstCamera.cpp (17.5 KB)
gstCamera.h (13.2 KB)

Seems you still have the original version. Did you build and install again?

cd jetson-inference/build
sudo make install

Right, I forgot to compile the code.
Thank you, I will give feedback once I've compiled :)


Hello Honey.

Your suggestions pushed me to figure out how to make it work, and with success :)
It is working very well with the Nano, but I'm worried that a buffer keeps growing (free memory goes down, in my opinion because of a GStreamer buffer???). Maybe I'm wrong.

Here is a sample of your great work :)
P.S. Why are the colours not natural? How can I change them?

Glad to see it worked out on nano.

For the colors, I did mention it was a quick & dirty patch ;-) I attached a corrected patch in post #2.

For memory usage I can’t say more, but other users may share their findings.

Thanks for the code changes.
I'm thinking about V4L2; maybe it is easier to do my task with it (receive an H.264 or MPEG stream with hardware decoding?). Does your GStreamer pipeline use hardware or software decoding?

The pipelines I've posted use nvv4l2decoder, which uses the NVDEC hardware engine for decoding.

Hello Honey.

Excellent, so decoding is done in hardware, not software, and not much CPU is consumed :)
I want to clarify the uploaded file with the colour correction.
I compared both files and see no changes.
I'm uploading them to you for verification.
Maybe I missed something :(
Four changes in the .cpp and one in the .h :) compared to the original version.
camera.patch-UserPipelineColorsCorrected.txt (2.05 KB)
camera.patch-UserPipeline.txt (1.99 KB)

It does have a change: in the first version I used the BGR format, and it is corrected to RGB in the latter patch. Build, install and try!

Thanks for the answer.
Sorry to bother you.
Tomorrow I will try to make the change and install.

Sorry, I missed this change.


Excellent, it's working :) I just had to compile it :)

Hi Honey.
I would like to train a network to recognize a part, and to be sure it is this part and not another. I would also like to know the pose of the recognized part. Is this possible with jetson-inference, or do I have to use another repository? I have found one on GitHub known as SSD-6D, but I don't know how to train it. Do I have to shoot plenty of pictures to prepare a dataset to train the network? In the bin there will be only one kind of part, but in various poses. I would also like to use a 3D camera, maybe a Kinect 2, to learn how far the part is from the sensor,
so it will be easy to send correction data to a KUKA robot to pick the part from the bin.

I'd suggest you create a new topic for this; someone else may be able to advise you better. Although not impossible, I'd advise training on the host rather than on the Nano.

Thank you for your help.

This fails for me with this error. Any ideas? TIA.

Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: Cannot identify device '/dev/video1'.
Additional debug info:
v4l2_calls.c(616): gst_v4l2_open (): /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0:
system error: No such file or directory
Setting pipeline to NULL ...
Freeing pipeline ...

Seems /dev/video1 was not found.
Did you install v4l2loopback and load the module to create the virtual node /dev/video1?

sudo modprobe v4l2loopback devices=1 video_nr=1 exclusive_caps=1

I may be missing a step. I used the gstCamera.cpp and gstCamera.h files below, added a filesrc line to detectnet-camera, and got this make error:

error: 'jetson' was not declared in this scope
gstCamera* camera = jetson.utils.gstCamera(640, 480, "filesrc location=/home/jetbot/video.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")

What did I forget? TIA.

I'm a bit puzzled… I was thinking you were trying way 1, but now it seems you're trying way 2.

It seems it failed to compile because you are adding Python code into a C++ source file.

In way 2 you would patch utils/camera (starting from an unmodified version), build it and install it.
Then in a Python script such as detectnet-camera.py you would make the change as in way 2 of my initial post.
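To make the distinction concrete, here is a sketch of the Python-side change (using the file path from the question; gstCamera.cpp itself is left alone apart from applying the patch):

```python
# in detectnet-camera.py, after building and installing the patched
# jetson-utils, the camera is created from a user pipeline string:
pipeline = ("filesrc location=/home/jetbot/video.mp4 ! qtdemux ! h264parse ! "
            "nvv4l2decoder ! nvvidconv ! "
            "video/x-raw, format=BGRx, width=640, height=480")

# this is Python, so it belongs in the .py script, not in the C++ source:
# import jetson.utils
# camera = jetson.utils.gstCamera(640, 480, pipeline)
print(pipeline)
```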

Hi, I've also used this method (the second one, modifying jetson-utils), so thank you very much for the effort. I was wondering if I can tweak the modifications made to gstCamera.cpp so that the GStreamer pipeline ends with 'appsink sync=false'. You see, my original pipeline used with OpenCV is like this:
‘rtspsrc location=rtspt://my_camera_ip ! queue ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! videoconvert ! video/x-raw, format=BGR ! appsink sync=false’

As per your statement, we cannot put the final appsink element at the end of the pipeline inside the detection Python script, so I assume we would have to change what's inside the gstCamera.cpp file. The sync=false property is what gets rid of the lag, and it would be great to have it with the detectNet example as well. Really appreciate any thoughts on this. Thanks!