jetson.utils.videoSource() and RTSP with low latency on Nano - is it possible?

Hi. I am trying to retrieve images from a network camera over RTSP, which I will then perform object detection on. I have tried a number of different ways, none of which seem to work well at this stage.

To confirm that my camera (Anpvis UND950W) is OK and that the basics all seem to be working, I have executed the following and it produces a nice image with no noticeable lag.

$ gst-launch-1.0 rtspsrc latency=0 location=rtsp://admin:secret@192.168.2.17/media/video2 ! rtph264depay ! h264parse ! decodebin ! nvvidconv ! xvimagesink

The output from jtop suggests it is using the GPU, Wireshark confirms UDP (and not TCP) packets, and there is pretty much zero noticeable lag, so all good! Now I just need to be able to get at the images instead of displaying them on the screen.

I wrote some Python in an attempt to do something similar inside a program with a call to jetson.utils.gstCamera(), but I couldn't get it to work, and the post at detectnet-video - #11 by martin2wu0d said that this is the wrong way to do it and that jetson.utils.videoSource() is in fact the way to access RTSP. This is what detectnet-camera.py does. So I tried a few things as a proof of concept, and the following does indeed work and displays images on the screen.

$ detectnet-camera.py --input-codec=h264 rtsp://admin:secret@192.168.2.17/media/video2
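
For reference, the Python equivalent of that command looks roughly like the sketch below. This assumes a jetson-inference build recent enough to provide jetson.utils.videoSource(); the network name "ssd-mobilenet-v2", the threshold, and the display output URI are just placeholders for whatever you actually use.

import jetson.inference
import jetson.utils

# open the RTSP stream; --input-codec tells videoSource how to decode it
source = jetson.utils.videoSource("rtsp://admin:secret@192.168.2.17/media/video2", argv=["--input-codec=h264"])
display = jetson.utils.videoOutput("display://0")

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

while display.IsStreaming():
    img = source.Capture()        # cudaImage, already in GPU memory
    detections = net.Detect(img)  # object detection on the GPU
    display.Render(img)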

The problem is that this has a lot of lag, about three seconds, and I cannot find a way to reduce it (which I managed in the gst-launch pipeline by specifying latency=0). Is there a way to somehow specify latency=0, or to achieve the same effect? If there is, I couldn't find it.

Another thing I tried was to use OpenCV instead… just the important bits follow:

import sys
import cv2
import jetson.inference

net = jetson.inference.detectNet(myNetwork, sys.argv, myThreshold)

# GStreamer pipeline with latency=0 on rtspsrc, hardware H.264 decode, and an appsink for OpenCV
pipe = "rtspsrc latency=0 location=rtsp://admin:secret@192.168.2.17/media/video2 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink"

cap = cv2.VideoCapture(pipe, cv2.CAP_GSTREAMER)

ret, frame = cap.read()

OK, the retrieved frame is good and I can display it with cv2.imshow('display', frame), for example. The thing is, I really want to pass the frame to something like net.Detect(frame, overlay=myOverlay), but the image formats differ and I get an exception saying a jetson.utils function wasn't passed a valid cudaImage or cudaMemory object. I'm not sure how to convert the image returned by cap.read() into an acceptable format, so I'm stuck there too.
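
One direction that might work for the conversion (an untested sketch on my part; it assumes the installed jetson.utils exposes cudaFromNumpy(), and the exact Detect() arguments vary between jetson-inference versions) is:

import jetson.utils

# OpenCV delivers BGR uint8; convert to RGB before handing it to jetson.utils
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# copy the NumPy array into CUDA memory as a cudaImage
cuda_img = jetson.utils.cudaFromNumpy(rgb)

detections = net.Detect(cuda_img, overlay=myOverlay)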

So is there a way to do low-latency RTSP from a network camera while making use of the Nano GPU?

I think I’m so close to a solution, yet…

Hi @martin2wu0d, please see this line from jetson-inference/utils/codec/gstDecoder.cpp:

Uncomment that line and change it to latency=0, then re-run make and sudo make install.

If this improves the latency for you, let me know and I will enable it in master - I was unable to test it myself on my RTSP test sources.

Hi @dusty_nv, that change appears to have solved my latency problems! I do wonder whether it might break things for some cameras, so it might not be a good thing to put straight into master. Ideally, some way to specify the rather long and convoluted GStreamer pipeline would be better, but that is more work of course.

Thanks for replying - now I can get on with my application to rule the world, mwahahaha! :-)

OK, cool - glad that helped!


FYI - for updates on streaming latency/performance improvements in jetson-inference, please see this topic regarding the integration of NVMM memory: