IP camera connection with jetson.utils.gstCamera

I’m having an issue with the jetson.utils.gstCamera command. The same IP camera works with a gst-launch-1.0 command in the terminal.

Command that does not work:
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://192.168.3.162:554/media/video1 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480")

Error message:
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (could not parse caps "video/x-raw, format=(string) , width=(int)640, height=(int)480")
[gstreamer] gstCamera -- failed to create device rtspsrc location=rtsp://192.168.3.162:554/media/video1 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480

Working command:
gst-launch-1.0 rtspsrc location=rtsp://192.168.3.162:554/media/video1 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! gtksink

I went through a lot of blogs and was not able to find a solution. Did I miss something simple?
Thanks,
Terry

Hi,
Please check the source code:

The default implementation supports nvarguscamerasrc and v4l2src. Your case uses rtspsrc, so you may need to customize the code.

As @DaneLLL mentioned, gstCamera does not support RTP/RTSP - however, the newer videoSource interface in jetson.utils does. It has a similar API:
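A minimal sketch of what using videoSource could look like, assuming the RTSP URL from this thread (videoSource and videoOutput are the jetson.utils classes; the import is kept inside the main guard because jetson-utils is only installed on a Jetson, and rtsp_uri is a hypothetical helper added for illustration):

```python
def rtsp_uri(host, path, port=554):
    # hypothetical helper: build the camera URI used in this thread
    return "rtsp://{}:{}{}".format(host, port, path)

if __name__ == "__main__":
    import jetson.utils  # requires jetson-utils installed on a Jetson

    source = jetson.utils.videoSource(rtsp_uri("192.168.3.162", "/media/video1"))
    output = jetson.utils.videoOutput("display://0")

    while output.IsStreaming():
        img = source.Capture()  # CUDA image, ready for detectNet
        output.Render(img)
```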

Thanks a lot for the information. I would not have figured out that jetson.utils.gstCamera does not support IP cameras without your help.

I tried out the new jetson.utils with:
input = jetson.utils.videoSource("rtsp://0000:0000aaaa@192.168.3.162:554", "my_video.mp4")

It returns the error:
[gstreamer] gstDecoder -- failed to create decoder for rtsp://192.168.3.162:554

Did I miss anything?

My goal is to run object detection with IP camera.
I’m currently able to do so by the following steps.

  1. using OpenCV to acquire images
  2. converting to RGBA with: frame_rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
  3. converting to CUDA with: image = jetson.utils.cudaFromNumpy(frame_rgba)
  4. running object detection with: detections = self.net.Detect(image, image.width, image.height)
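The steps above can be sketched end to end; the model name and URL are assumptions for illustration, and bgr_to_rgba shows in plain NumPy what cv2.COLOR_BGR2RGBA does (reverse the channel order and add an opaque alpha channel):

```python
import numpy as np

def bgr_to_rgba(frame):
    # equivalent of cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA):
    # reverse B,G,R to R,G,B and append an alpha channel of 255
    rgb = frame[..., ::-1]
    alpha = np.full(frame.shape[:2] + (1,), 255, dtype=frame.dtype)
    return np.concatenate([rgb, alpha], axis=-1)

if __name__ == "__main__":
    import cv2
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2")             # assumed model
    cap = cv2.VideoCapture("rtsp://192.168.3.162:554/media/video1")  # step 1

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)          # step 2
        image = jetson.utils.cudaFromNumpy(frame_rgba)                # step 3
        detections = net.Detect(image, image.width, image.height)     # step 4
```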

While this works, I’m concerned that the process is not efficient; the image conversions could take more time than needed. Could you comment on the best way to accomplish this task?
Thanks,

videoSource is not expecting the second argument "my_video.mp4". What I would try running first is the video-viewer sample, which you can see being used for RTSP input here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#rtsp

Then if you can view the stream, you can modify your application to use similar code to video-viewer.py.

I’m not having much luck with video-viewer.

video-viewer rtsp://0000:0000aaaa@192.168.3.162:554
returns:
[gstreamer] gstDecoder -- Could not open resource for reading.
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for rtsp://0000:0000aaaa@192.168.3.162:554
video-viewer: failed to create input stream

It does work in OpenCV with:
cap = cv2.VideoCapture("rtsp://0000:0000aaaa@192.168.3.162:554/cam/realmonitor?channel=1&subtype=0")

Anything else that I can try? Did I miss some installation?
Thanks,

Can you try running it as video-viewer --input-codec=h264 rtsp://0000:0000aaaa@192.168.3.162:554 (assuming your RTSP stream is H.264-encoded)?


It’s working, thanks a lot!
I needed to add "/media/video1" at the end:
video-viewer --input-codec=h264 rtsp://0000:0000aaaa@192.168.3.162:554/media/video1
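On the Python side, the same codec hint can be passed to videoSource through its argv parameter. A sketch under that assumption (the URL is the one from this thread; codec_argv is a hypothetical helper):

```python
def codec_argv(codec):
    # --input-codec tells gstDecoder which stream type to expect;
    # videoSource accepts such options through an argv-style list
    return ["--input-codec=" + codec]

if __name__ == "__main__":
    import jetson.utils  # only available on a Jetson install

    source = jetson.utils.videoSource(
        "rtsp://0000:0000aaaa@192.168.3.162:554/media/video1",
        argv=codec_argv("h264"))
    img = source.Capture()
    print(img.width, img.height)
```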

Could you give me some idea of the processing-time efficiency if OpenCV is used to capture images (detailed in my earlier post above) for object detection?

Not sure what you mean.
If you use jetson-utils for receiving/decoding the RTSP stream, what do you mean by "OpenCV is used to capture images"?
jetson-utils would use GStreamer to provide an RGB image that can be used instead of OpenCV's BGR frames.
OpenCV's videoio may not be very efficient on Jetson; jetson-utils is likely faster at providing frames.

Since I will do some image manipulation, such as selecting an area, rotating, resizing, and adding a display, before sending frames to the network, I wanted to see whether using OpenCV to capture images is a feasible option.
If I understand correctly, it will work, but with extra processing time.
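One possible middle ground, sketched below: keep the efficient jetson-utils capture and drop to NumPy/OpenCV only for the manipulation step. cudaToNumpy and cudaFromNumpy are jetson.utils functions; crop_region is a hypothetical helper, and the URL/codec are the ones from this thread:

```python
def crop_region(array, x, y, w, h):
    # hypothetical helper: select an area of the frame before detection
    return array[y:y + h, x:x + w]

if __name__ == "__main__":
    import jetson.utils

    source = jetson.utils.videoSource(
        "rtsp://192.168.3.162:554/media/video1",
        argv=["--input-codec=h264"])

    img = source.Capture()                      # CUDA image from GStreamer
    array = jetson.utils.cudaToNumpy(img)       # map it to a numpy array
    roi = crop_region(array, 0, 0, 320, 240)    # NumPy/OpenCV processing here
    cuda_roi = jetson.utils.cudaFromNumpy(roi)  # back to CUDA for detectNet
```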
Thanks a lot for your help,
Terry

I see.
For C++ processing with opencv CUDA, you might check this example.
However, for further opencv python processing, you may also check:

…someone else may better advise.

Very helpful links. I did not know that you can use CUDA to speed up the processing.
The current OpenCV in the default install for me is 4.1.1.
Do you know if OpenCV 4.4.0 is required for CUDA to work with OpenCV?

The OpenCV library provided by JetPack doesn't support CUDA; you have to build your own version with CUDA enabled.
Many OpenCV versions would work, but I'd suggest using 4.4.
You may try one of these scripts for downloading/building:
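For reference, the key CMake flags such build scripts typically pass when enabling CUDA in OpenCV (a sketch under assumed paths and defaults, not a complete build recipe):

```shell
# minimal sketch of a CUDA-enabled OpenCV 4.4 build (paths are assumptions)
cd opencv/build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D WITH_CUDA=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D BUILD_opencv_python3=ON \
      ..
make -j"$(nproc)"
sudo make install
```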


Thanks for sharing. This will save me a lot of time.