Using an IP camera over RTSP with Jetson Nano.

Thanks for the answer.
Sorry to bother you.
Tomorrow I will try the change, then make and install.

Sorry I missed this change.

Marek

Excellent, it's working :) I had to compile it :)

https://www.youtube.com/watch?v=nYuiDYtbrUs

Hi Honey.
I would like to train a network to recognize a part: I would like to be sure it is this part and not another, and also to know the pose of the recognized part. Is this possible with jetson-inference, or do I have to use another repository? I have found one on GitHub known as SSD-6D, but I don't know how to train it. Do I have to shoot plenty of pictures to prepare a dataset for training the network? The bin will contain only one kind of part, but in various poses. I would like to use a 3D camera, maybe a Kinect 2, to know how far the part is from the sensor, so it will be easy to send correction data to a KUKA robot to pick the part from the bin.
https://www.youtube.com/watch?v=SxqRkU9kdlI

I'd suggest you create a new topic for this; someone else may be able to advise better. Although not impossible, I'd advise training on the host rather than on the Nano.

Thank you for your help.
Marek

This fails for me with this error. Any ideas? TIA.

Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: Cannot identify device '/dev/video1'.
Additional debug info:
v4l2_calls.c(616): gst_v4l2_open (): /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0:
system error: No such file or directory
Setting pipeline to NULL ...
Freeing pipeline ...

It seems /dev/video1 was not found.
Did you install v4l2loopback and load it to create the virtual node /dev/video1?

sudo modprobe v4l2loopback devices=1 video_nr=1 exclusive_caps=1
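
If the module loaded, the virtual node should now exist; here is a minimal sanity check (a sketch only, assuming video_nr=1 as above):

import os

# v4l2loopback with video_nr=1 should create /dev/video1
print("/dev/video1 exists:", os.path.exists("/dev/video1"))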

I may be missing a step. I used the gstCamera.cpp and gstCamera.h files below, added a filesrc line to detectnet-camera, and got this make error:

error: ‘jetson’ was not declared in this scope
gstCamera* camera = jetson.utils.gstCamera(640, 480, "filesrc location=/home/jetbot/video.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")

What did I forget? TIA.

I'm a bit puzzled... I thought you were trying way 1, but now it seems you're trying way 2.

It failed to compile because you are adding Python code into a C++ source file.

In way 2 you would patch utils/camera (starting from an unmodified version), then build and install it.
Then, in Python code such as detectnet-camera.py, you would make the change shown in my initial post for way 2; a rough sketch follows.
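
For illustration, the way 2 change would look roughly like this (a sketch only; the RTSP URL, credentials, and resolution are placeholders, and it assumes the patched gstCamera that accepts a user pipeline string):

import jetson.utils

# User pipeline string for the patched gstCamera (way 2).
# No appsink at the end -- the patched buildLaunchStr() appends it.
rtsp_src = ("rtspsrc location=rtsp://user:password@192.168.1.10:554/stream latency=100 ! "
            "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
            "video/x-raw, format=BGRx, width=1280, height=720")

camera = jetson.utils.gstCamera(1280, 720, rtsp_src)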

Hi, I've also used this method (the second one, modifying jetson/utils), so thank you very much for the effort. I was wondering if I can tweak the modifications made to gstCamera.cpp so that the gstreamer pipeline ends with 'appsink sync=false'. You see, my original gst pipeline used with opencv is like this:
'rtspsrc location=rtspt://my_camera_ip ! queue ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! videoconvert ! video/x-raw, format=BGR ! appsink sync=false'

As per your statement, we cannot put the appsink element at the end of the pipeline inside the detection Python script, so I assume we would have to change what's inside the gstCamera.cpp file. The sync=false at the end is what gets rid of the lag, and it would be great to have it with the detectNet example as well. Really appreciate any thoughts on this. Thanks!

You would just edit the patched gstCamera.cpp: in buildLaunchStr(), find the extra user-pipeline section and add sync=false to the appsink, such as:

	else // GST_SOURCE_USERPIPELINE
	{
		// start from the user-supplied pipeline string
		ss << mCameraStr;
		// convert to RGB at the requested capture resolution
		ss << " ! videoconvert ! video/x-raw, format=RGB, width=(int)" << mWidth << ", height=(int)" << mHeight << " ! ";
		// sync=false stops appsink from syncing to the clock, removing the lag
		ss << "appsink name=mysink sync=false";
		mSource = GST_SOURCE_USERPIPELINE;
	}

Not tried, but you’ll tell us.

PS: It may also help to specify the caps video/x-raw(memory:NVMM), format=BGRx between nvvidconv and videoconvert. I420 or NV12 could also be used here, but they would make videoconvert much slower, whereas with BGRx it just has to remove the extra fourth byte.
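
For example, the user part of the pipeline could then end like this (a sketch only; the camera address is a placeholder, with system-memory BGRx caps after nvvidconv as in the Create() examples further down):

import jetson.utils

# Sketch: user pipeline ending in BGRx caps after nvvidconv, so the
# appended videoconvert only has to drop the extra fourth byte
pipeline = ("rtspsrc location=rtsp://my_camera_ip ! queue ! rtph264depay ! h264parse ! "
            "nvv4l2decoder enable-max-performance=1 ! nvvidconv ! "
            "video/x-raw, format=BGRx")
camera = jetson.utils.gstCamera(1280, 720, pipeline)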

OMG. I have literally been trying to get this hack to work for a year and finally got it to work. Thank you. My issue is that I am way out of practice with C programming and have never applied a patch in my life. I have been using darknet to recognize our chickens over RTSP with a custom YOLOv3 model at 20 fps, and now I can use detectnet-camera at 100 fps. Yay.


Glad it worked out for you.
Hope I shouldn’t be sorry for the chickens ;-)

Only for eggs. We love our 6 chickens. You can check out my chicken naming robot here: https://github.com/DennisFaucher/ChickenDetection


Hi, thanks for the quick reply. The last format=BGRx did the trick there! Very much appreciate it!

Accessing an .mp4 from gstCamera works like a charm. I am playing around with different syntaxes for my RTSP camera. The gst-launch-1.0 CLI that I have gotten to work with my RTSP camera is "gst-launch-1.0 rtspsrc location=rtsp://dennis:password@192.168.86.42:88/videoMain ! queue ! decodebin ! nvvidconv ! videoconvert ! xvimagesink". What would this look like in "gstCamera* camera = gstCamera::Create(" format in gstCamera.cpp? TIA.

You would just cut after nvvidconv and add the output caps video/x-raw, format=BGRx:

rtspsrc location=rtsp://dennis:password@192.168.86.42:88/videoMain ! queue ! decodebin ! nvvidconv ! video/x-raw, format=BGRx

It works @ 100 fps. Thank you so much.

gstCamera* camera = gstCamera::Create(640, 480, "rtspsrc location=rtsp://dennis:password@192.168.86.42:88/videoMain ! queue ! decodebin ! nvvidconv ! video/x-raw, format=BGRx");

Hello,

I have a problem with both methods.

For the first one I start a new process:

css-jetson-dev@cssjetsondev-desktop:~$ gst-launch-1.0 rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! identity drop-allocation=true ! v4l2sink device=/dev/video1

That’s what I get:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

And that's what's in my code:

camera = jetson.utils.gstCamera(1920,1080,"/dev/video1")

And that’s the error I get:

(python3:10191): GStreamer-CRITICAL **: 08:58:37.917: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
[gstreamer] gstCamera failed to create pipeline
[gstreamer]    (no source element for URI "/dev/video1")
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA, camera /dev/video1)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_V4L2, camera /dev/video1
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video1 ! video/x-raw, width=(int)1920, height=(int)1080, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video1
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstCamera failed to set pipeline state to PLAYING (error 0)
[gstreamer] gstCamera failed to capture frame
Traceback (most recent call last):
  File "detectnet-camera.py", line 65, in <module>
    img, width, height = camera.CaptureRGBA()
Exception: jetson.utils -- gstCamera failed to CaptureRGBA()
PyTensorNet_Dealloc()
jetson.utils -- PyCamera_Dealloc()
jetson.utils -- PyDisplay_Dealloc()

I'm able to see the livestream using:

gst-launch-1.0 -v playbin uri=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ uridecodebin0::source::latency=100

For the second one:

rtsp_src="rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=100 ! rtph264depay! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx, width=1280,height=720  "

camera = jetson.utils.gstCamera(1280,720,rtsp_src)

I get:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera::Create('rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=0 ! rtph264depay! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx, width=1280,height=720  ') as user pipeline, may fail...
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=0 ! rtph264depay! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx, width=1280,height=720  
[gstreamer] gstCamera pipeline string:
rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=0 ! rtph264depay! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx, width=1280,height=720   ! videoconvert ! video/x-raw, format=BGR, width=(int)1280, height=(int)720 ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_USERPIPELINE, camera rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=0 ! rtph264depay! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx, width=1280,height=720  
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
Opening in BLOCKING MODE 
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> nvv4l2decoder0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvv4l2decoder0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvv4l2decoder0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

This results in a window created by glDisplay ('NVIDIA Jetson', I presume), but it doesn't seem to show anything apart from the title bar. I have to force close it.

It seems the pipeline launched correctly, so it might be another issue.
I have seen a case where nvv4l2decoder stalled when h264parse was used, so I'd suggest replacing it with omxh264dec, or removing h264parse, for a try; see the sketch below.
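
For example, one variant to try (a sketch only, reusing the pipeline from your post above; omxh264dec is the older OMX decoder plugin on Jetson):

import jetson.utils

# Variant: omxh264dec instead of nvv4l2decoder
# (alternatively, keep nvv4l2decoder and drop h264parse)
rtsp_src = ("rtspsrc location=rtsp://username:password@172.16.1.3:554/Streaming/Channels/101/ latency=100 ! "
            "rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "
            "video/x-raw, format=BGRx, width=1280, height=720")

camera = jetson.utils.gstCamera(1280, 720, rtsp_src)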

Also note that @dusty_nv has recently added support for other video sources such as RTSP, so my dirty patch is now obsolete.
Better to check out the new version.