Jetson Nano Developer Kit – H.264 IP camera live streaming with GStreamer for video analytics is not working

The following GStreamer command, run from the terminal, plays the video directly:
gst-launch-1.0 rtspsrc location=rtsp://username:password@192.168.1.225:554/profile2/media.smp ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink -e

The jtop output shows that the hardware decoder is running.

The following playbin command also works:
gst-launch-1.0 -v playbin uri=rtsp://username:password@192.168.1.225:554/profile2/media.smp

I am able to open the RTSP stream in the VLC player without any issues.
I am also able to open the camera without hardware acceleration, but the CPU utilisation without HW decoding is about 70 to 80%.

When I integrate the stream into Python code, I get errors. The Python code is as follows:

import cv2
pipeline = 'rtspsrc location=rtsp://username:password@192.168.1.225:554/profile2/media.smp ! rtph264depay ! h264parse ! omxh264dec ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while capture.isOpened():
    res, frame = capture.read()
    cv2.imshow("Video", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()

I am getting the following errors:

(python3:7616): GStreamer-CRITICAL **: 20:28:10.657: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed
(python3:7616): GStreamer-CRITICAL **: 20:28:10.658: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed
(python3:7616): GStreamer-CRITICAL **: 20:28:10.658: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed
(python3:7616): GStreamer-CRITICAL **: 20:28:10.658: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(python3:7616): GStreamer-CRITICAL **: 20:28:10.658: gst_structure_get_string: assertion 'structure != NULL' failed
(python3:7616): GStreamer-CRITICAL **: 20:28:10.658: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 1920x1088 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 1920, nFrameHeight = 1080
[ WARN:0] global /home/username/test/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (1761) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module omxh264dec-omxh264dec0 reported: Internal data stream error.
[ WARN:0] global /home/username/test/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (888) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/username/test/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

Please help me to resolve the issue.
Thank you

The omx plugins are deprecated; better to use nvv4l2decoder instead.
The decoder outputs NV12 format into NVMM memory, so you need to convert to BGR format in system memory for OpenCV. Try:

pipeline = 'rtspsrc location=rtsp://username:password@192.168.1.225:554/profile2/media.smp ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
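
For reference, this plugs into the Python code in the same way. A minimal sketch, using the same placeholder URL and credentials as above:

import cv2

# nvv4l2decoder decodes H.264 on the hardware decoder; nvvidconv converts
# NV12 (NVMM) to BGRx, then videoconvert produces the BGR frames OpenCV expects.
pipeline = ('rtspsrc location=rtsp://username:password@192.168.1.225:554/profile2/media.smp '
            '! rtph264depay ! h264parse ! nvv4l2decoder '
            '! nvvidconv ! video/x-raw,format=BGRx '
            '! videoconvert ! video/x-raw,format=BGR ! appsink drop=1')
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
res, frame = capture.read()   # frame is a standard 3-channel BGR image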

Hi,
Thank you for your fast response and to-the-point solution. The pipeline works, but I have some issues.

  1. The CPU utilisation has increased drastically compared to the previous pipeline, from ~20 to 30% up to ~70 to 80%.
  2. The live video with the new pipeline sometimes lags.

The pipeline output has some warnings; it is attached below for your reference.

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[ WARN:0] global /home/username/test/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/username/test/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded

Is there anything we can do to reduce the CPU workload?
Is the output attached above normal, or does it need further modification?

Once again, thank you for your support; I look forward to your response.

Hi,
I would like to add one more piece of information to the post above.
I want to use imageNet/detectNet to further process the image, and these models expect a cudaImage in one of the formats rgb8, rgba8, rgb32f, or rgba32f.

Is it possible to convert the image directly in the GStreamer pipeline?

If so, what is the best way to do it with hardware acceleration?
Please suggest. Thank you.

The CPU overhead is probably due to:

  • cv2.imshow is not that fast on Jetson Nano. An alternative is to use a VideoWriter with a GStreamer pipeline to a display sink; see the first sketch below.
  • The videoconvert used for the BGRx → BGR conversion runs on the CPU. I proposed it because most OpenCV algorithms expect this format, but recent versions of OpenCV support capture in 4-channel formats such as BGRx or RGBA, so it is also possible to remove the BGRx → BGR conversion; see the second sketch below.
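
For the first point, a display path through VideoWriter might look like this (an untested sketch; nvoverlaysink is one possible display sink, and the frame size and rate must match your stream):

import cv2

# Render via a GStreamer display sink instead of cv2.imshow: appsrc receives
# BGR frames, videoconvert makes BGRx, nvvidconv copies into NVMM memory,
# and nvoverlaysink displays with hardware acceleration.
writer = cv2.VideoWriter(
    'appsrc ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvoverlaysink',
    cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080))
# Inside the capture loop, instead of cv2.imshow:
# writer.write(frame)   # frame: 3-channel BGR, 1920x1080

For the second point, the 4-channel capture might look like this (assuming an OpenCV build recent enough to accept BGRx caps from appsink):

import cv2

# No CPU-side videoconvert: nvvidconv (hardware) converts NV12 (NVMM) to BGRx
# in system memory, and OpenCV captures the 4-channel frames directly.
pipeline = ('rtspsrc location=rtsp://username:password@192.168.1.225:554/profile2/media.smp '
            '! rtph264depay ! h264parse ! nvv4l2decoder '
            '! nvvidconv ! video/x-raw,format=BGRx ! appsink drop=1')
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while capture.isOpened():
    res, frame = capture.read()                  # 4-channel BGRx frame
    if not res:
        break
    cv2.imshow('Video', frame)                   # imshow accepts 4-channel images
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
capture.release()
cv2.destroyAllWindows()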

If you want to process with imageNet, using OpenCV may not be the best option: frames are captured into a CPU Mat, so you would have to upload/download to a GpuMat.

You may also consider jetson-inference or DeepStream, which would provide better performance for this use case.
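
For reference, a minimal detection loop with jetson-inference, adapted from the Hello AI World example (same placeholder RTSP URL as above; assumes the jetson.inference / jetson.utils Python bindings are installed):

import jetson.inference
import jetson.utils

# videoSource uses the hardware decoder and delivers frames as cudaImage in
# GPU memory, which detectNet consumes directly (no CPU colour conversion).
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("rtsp://username:password@192.168.1.225:554/profile2/media.smp")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()           # cudaImage, rgb8 by default
    detections = net.Detect(img)     # runs on the GPU, draws overlays on img
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))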

Hi,
Thank you for your explanation. The pipeline became more efficient after removing the BGRx to BGR conversion. However, I have decided not to follow this approach, as the image would still have to be converted to a cudaImage on the CPU.

I pursued jetson-inference as you suggested, and it works beautifully. I followed Dusty-NV's Hello AI World example (jetson-inference/detectnet-example-2.md at master · dusty-nv/jetson-inference · GitHub). It is an amazing library that utilizes the GPU and hardware decoding.

Once again, thank you.
