Unable to stream RTSP Camera Stream for detectnet-camera to apply

Hi Forum,

This is my first post here, so pardon me if I make any mistakes with categories, etc.
I am facing a problem while trying to make detectnet-camera apply its model to an H.264 RTSP IP camera stream.

The error output is as follows:

jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera::Create('rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ') -- invalid camera device requested
jetson.utils -- PyCamera_Dealloc()
Traceback (most recent call last):
File "my-detection.py", line 5, in <module>
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")
Exception: jetson.utils -- failed to create gstCamera device
PyTensorNet_Dealloc()

gstCamera.cpp (17.5 KB)

My detectnet-camera.py is:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

My IP camera is currently connected over WiFi, my Jetson Nano is able to ping the camera's IP, and the following gst-launch-1.0 pipeline works fine:

gst-launch-1.0 rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=1280, height=720 ! videoconvert ! xvimagesink

I am trying to pass the video stream in, convert it to a format that jetson-inference accepts, and apply the model, but I keep hitting this error.
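In case it helps, here is how I think of the pipeline string my script passes to gstCamera. This is only a sketch; the build_rtsp_pipeline helper is my own naming, not part of jetson-utils, and the gstCamera call at the end is commented out since it only runs on the Jetson with the patched build:

```python
# Hypothetical helper: build the same RTSP decode pipeline string used in
# my script, so the gst-launch-1.0 test and the gstCamera call stay in sync.
def build_rtsp_pipeline(url, width=640, height=480, latency=0):
    return (
        "rtspsrc location={} latency={} ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
        "video/x-raw, format=BGRx, width={}, height={}"
    ).format(url, latency, width, height)

pipeline = build_rtsp_pipeline(
    "rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0"
)
# camera = jetson.utils.gstCamera(640, 480, pipeline)  # Jetson only
```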

Thanks,
amosang76

Did you recompile and reinstall after changing gstCamera.cpp?

cd jetson-inference/build
make
sudo make install

The following error occurred:

/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp: In function ‘const char* gstCameraSrcToString(gstCameraSrc)’:
/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp:46:18: error: ‘GST_SOURCE_USERPIPELINE’ was not declared in this scope
else if (src == GST_SOURCE_USERPIPELINE) return "GST_SOURCE_USERPIPELINE";
^~~~~~~~~~~~~~~~~~~~~~~
/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp:46:18: note: suggested alternative: ‘GST_TYPE_PIPELINE’
else if (src == GST_SOURCE_USERPIPELINE) return "GST_SOURCE_USERPIPELINE";
^~~~~~~~~~~~~~~~~~~~~~~
GST_TYPE_PIPELINE
/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp: In member function ‘bool gstCamera::buildLaunchStr(gstCameraSrc)’:
/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp:447:13: error: ‘GST_SOURCE_USERPIPELINE’ was not declared in this scope
mSource = GST_SOURCE_USERPIPELINE;
^~~~~~~~~~~~~~~~~~~~~~~
/home/dlinano/jetson-inference/utils/camera/gstCamera.cpp:447:13: note: suggested alternative: ‘GST_TYPE_PIPELINE’
mSource = GST_SOURCE_USERPIPELINE;
^~~~~~~~~~~~~~~~~~~~~~~
GST_TYPE_PIPELINE
utils/CMakeFiles/jetson-utils.dir/build.make:2121: recipe for target ‘utils/CMakeFiles/jetson-utils.dir/camera/gstCamera.cpp.o’ failed
make[2]: *** [utils/CMakeFiles/jetson-utils.dir/camera/gstCamera.cpp.o] Error 1
CMakeFiles/Makefile2:854: recipe for target ‘utils/CMakeFiles/jetson-utils.dir/all’ failed
make[1]: *** [utils/CMakeFiles/jetson-utils.dir/all] Error 2
Makefile:129: recipe for target ‘all’ failed
make: *** [all] Error 2

Where did you get the gstCamera.cpp file? It seems you are missing the modified gstCamera.h.
For reference, the original patch is here.

I got it from my partner, who was working on this small project with me. It worked on his setup, but when I used it on my Jetson Nano it did not.

I will try the patch again and get back to you. Thank you.

Thanks @Honey_Patouceul !!

It works now ! Thank you for your help.

Hi @Honey_Patouceul

Sorry to trouble you again! I got this working already, but I would like to know if there is a way to reduce the frame loss / choppy video stream. On the LAN it works well with little delay, but over the WAN, how can it be improved?

I'm not sure at all, as I can't try it myself, but you may try something like:

gst-launch-1.0 rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=500 ! application/x-rtp,encoding-name=H264,payload=96 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=10 ! queue max-size-time=1 min-threshold-time=5 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink sync=false async=false

and if it works better, adapt it to the jetson-utils command. I'm afraid I cannot help further; more experienced users may also share their advice.
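Adapting that to the jetson-utils API might look something like the sketch below. This is untested: the pipeline is cut off at the BGRx caps (xvimagesink is dropped, since jetson-utils handles display), I kept nvv4l2decoder as in your working pipeline, and the latency/queue values are only starting points to tune, not known-good settings:

```python
# Untested sketch: buffered RTSP pipeline string for the patched gstCamera.
# latency=500 and the two queue elements add buffering for a lossy WAN link.
PIPELINE = (
    "rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 "
    "latency=500 ! application/x-rtp,encoding-name=H264,payload=96 ! "
    "queue max-size-buffers=0 max-size-bytes=0 max-size-time=10 ! "
    "queue max-size-time=1 min-threshold-time=5 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
    "video/x-raw, format=BGRx, width=640, height=480"
)
# camera = jetson.utils.gstCamera(640, 480, PIPELINE)  # Jetson only
```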

Hi @Honey_Patouceul

I recently cloned the dev branch to get the support for videoSource and videoOutput (correct me if I am wrong).
I tried to pull the RTSP stream, but it came out with this error.

My Python code is this: