Detecting objects from an RTSP stream (IP camera)

Hi Guys!
I’ve implemented the object detection model by following the YouTube tutorial (Real-Time Object Detection in 10 Lines of Python Code on Jetson Nano), and it uses the Raspberry Pi V2 camera attached to my Jetson Nano.

What can I do to detect objects from my IP camera (RTSP) instead of the Raspberry Pi camera?
I tried the suggested fix of editing the gstCamera.cpp code, but it doesn't work.
(The text files attached below are gstCamera.cpp, the error message from launching the program, and my detection Python script.)

Could someone please provide a detailed solution?
Thank you so much!

gstCamera.cpp (17.2 KB)
Error.txt (4.9 KB)
my-detection.txt (611 Bytes)

You may have a look at this post.

Hi @Honey_Patouceul
Thank you for your fast reply!

I tried your Solution #2, but when I launch the program it just hangs there forever and I have to xkill it.
Any fix for this?

gstCamera.cpp (17.5 KB)

    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://192.168.0.90:554/mpeg4/media.amp latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")
    display = jetson.utils.glDisplay()

    while display.IsOpen():
        img, width, height = camera.CaptureRGBA()
        detections = net.Detect(img, width, height)
        display.RenderOnce(img, width, height)
        display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

It seems the pipeline starts, but it encounters an error when trying to read from the RTSP server.

  • Do you need a login/password for this resource on this server?
  • Is this stream really H264? The path/name says mpeg4.

Have a look at this. First try a pipeline from rtspsrc to display with gst-launch; once that works, adapt it for gstCamera.

Thanks again @Honey_Patouceul! I can now launch the program.
However, the colors don't look accurate at all. How can I fix this? (I already applied your colorCorrection patch.)

I cannot say with so few details. You may tell us what resolution, framerate and encoding your IP cam is sending.
I'd suggest first trying without jetson-inference and just getting a GStreamer pipeline to display:

gst-launch -ev rtspsrc location=rtsp://192.168.0.90:554/mpeg4/media.amp ! rtph264depay !  h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ! videoconvert ! xvimagesink

Does this display fine?

I tried to use the GStreamer pipeline to display, but it doesn't seem to work.

gst-launch -ev rtspsrc location=rtsp://root:leadSingtel@192.168.0.90:554/axis-media/media.amp latency=0 ! rtph264depay !  h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480

This is the launch line that I used (with the updated URL).
Resolution: 640x480, Encoding: H264. The IP camera I'm using is an Axis M2025-LE.

Sorry, I made a typo. The command is gst-launch-1.0:

gst-launch-1.0 -ev rtspsrc location=rtsp://192.168.0.90:554/mpeg4/media.amp ! application/x-rtp, media=video, encoding-name=H264, payload=96 ! rtph264depay !  h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ! videoconvert ! xvimagesink

Note that if it doesn’t work and you’re running R32.4.2, you may try removing h264parse before nvv4l2decoder.
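For example, the same launch line with h264parse removed would be (just the command above minus that element; untested here):

gst-launch-1.0 -ev rtspsrc location=rtsp://192.168.0.90:554/mpeg4/media.amp ! application/x-rtp, media=video, encoding-name=H264, payload=96 ! rtph264depay ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ! videoconvert ! xvimagesink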

Whoa, the display runs just fine! (accurate colors)

However, when I run it with jetson-inference, it doesn't work.

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://root:leadSingtel@192.168.0.90:554/axis-media/media.amp latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ! videoconvert ! xvimagesink")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

You would change the tail of the pipeline for jetson-inference; gstCamera adds its own sink for capturing frames, so the launch string should not end with videoconvert ! xvimagesink:

camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://root:leadSingtel@192.168.0.90:554/axis-media/media.amp latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")

# or
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://root:leadSingtel@192.168.0.90:554/axis-media/media.amp latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ")

I tested both snippets. The results are the same: the program launches, but the color, resolution, and detections look a bit off.

Attached below is the error I encountered after launching; the result is shown above (full text in the file).

(python:13292): GStreamer-CRITICAL **: 21:06:49.135: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[gstreamer] gstCamera onPreroll
[gstreamer] gstCamera -- allocated 16 ringbuffers, 921600 bytes each
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0

launch.txt (15.7 KB)

You may try NV12 instead of BGRx:

camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://root:leadSingtel@192.168.0.90:554/axis-media/media.amp latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=NV12, width=640, height=480 ")

Hi @Honey_Patouceul
After I edited the gstCamera.cpp code, things got better, although there are long delays when detecting objects and the color seems a bit off.

I tested both formats (BGRx and NV12) as mentioned above and obtained the same result.

Original color of my keyboard

gstCamera.cpp (17.5 KB)

I made the same mistake at first… using BGR instead of RGB.
If you look at my patch, you'll see its name includes ColorsCorrected.
In buildLaunchStr, change BGR to RGB, rebuild, reinstall, and it should be OK.


Hi @Honey_Patouceul
After changing the code, it finally worked as expected!

No words can express how thankful I am for your guidance. It was a huge help!
Thank you for your time and attention!


Hi @Honey_Patouceul

Sorry to disturb you again!
Would you mind explaining the patch you applied to gstCamera.cpp and the .h file to make RTSP streaming possible?

Thank you so much!

First note that this patch is obsolete. Now jetson-utils supports RTSP sources.

In older versions, it expected cameras only, with a different syntax/prefix for CSI cameras and V4L2 cameras. If the provided string did not match one of these syntaxes, the video source returned an error and aborted. My patch just added a third case: if neither camera syntax is recognized, a pipeline built from the provided string (plus some extra elements appended by buildLaunchStr) is tried.
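For reference, with a recent jetson-inference / jetson-utils build, an RTSP source can be opened directly with videoSource, so no gstCamera patch is needed. A minimal sketch in the style of the detectnet example; the URL, credentials and display URI below are placeholders, not tested against this particular camera:

import jetson.inference
import jetson.utils

# detection network, same as earlier in this thread
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# videoSource accepts an rtsp:// URI directly (address/credentials are placeholders)
source = jetson.utils.videoSource("rtsp://user:password@192.168.0.90:554/axis-media/media.amp")
output = jetson.utils.videoOutput("display://0")

while output.IsStreaming():
    img = source.Capture()           # cudaImage; no separate width/height needed
    if img is None:                  # capture timeout, keep waiting
        continue
    detections = net.Detect(img)     # draws detection overlays on img by default
    output.Render(img)
    output.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))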