Running `gst-inspect-1.0 nvv4l2h264dec` will show that there is no such element as `nvv4l2h264dec`

Just as the title suggests, such an error will be reported.

def get_rtsp_h264_gst(rtsp_uri, width, height, latency):
    gst_str = (f"rtspsrc location={rtsp_uri} latency={latency} ! "
               f"rtph264depay ! h264parse ! "
               f"nvv4l2h264dec ! "
               f"nvvidconv ! "
               f"video/x-raw, width={width}, height={height}, format=BGRx ! "
               f"videoconvert ! appsink")
    print(f"gst: {gst_str}")
    return gst_str

gst_str = get_rtsp_h264_gst(rtsp_uri, width, height, latency)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

The above is my code, and cap.isOpened() returns False.

My JetPack version is 6.1, and my Ubuntu version is 22.04. Thank you very much for helping me with this; it is quite an urgent matter.

Hi,
Please check if you have installed nvidia-l4t-gstreamer:

Accelerated GStreamer — NVIDIA Jetson Linux Developer Guide

nvidia@tegra-ubuntu:~/yolo$ apt list --installed | grep nvidia-l4t-gstreamer
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
nvidia-l4t-gstreamer/stable,now 36.4.4-20250616085344 arm64 [installed, automatic]

It has been installed.

Hi,
The name nvv4l2h264dec looks wrong. Please try

$ gst-inspect-1.0 nvv4l2decoder

The universal decoder nvv4l2decoder is available.
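With the element name confirmed, the original Python helper could be sketched with nvv4l2decoder substituted. This is a hedged rewrite, not code from the thread; the trailing BGR caps before appsink are an extra assumption, added because cv2.VideoCapture generally expects BGR frames from the sink:

```python
# Sketch: the original get_rtsp_h264_gst() rewritten around nvv4l2decoder.
# The video/x-raw,format=BGR caps are an assumption added for cv2 compatibility.
def get_rtsp_h264_gst(rtsp_uri, width, height, latency):
    gst_str = (
        f"rtspsrc location={rtsp_uri} latency={latency} ! "
        "rtph264depay ! h264parse ! "
        "nvv4l2decoder ! "  # hardware decoder element verified above
        "nvvidconv ! "
        f"video/x-raw,width={width},height={height},format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink"
    )
    return gst_str
```

The returned string would be passed to cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER) as before.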

However, when I execute the following command:

gst-launch-1.0 rtspsrc location=rtsp://admin:123456@192.168.1.170:554/mpeg4 latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1280,height=720,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink

I still encounter the following error:

Setting pipeline to paused …

Opening in BLOCKING MODE

Pipeline is live and does not need PREROLL …

Progress: (open) Opening Stream

Pipeline is PREROLLED …

Pre-roll loaded, waiting for processing to complete…

Progress: (connect) Connecting to rtsp://admin:123456@192.168.1.170:554/mpeg4

Progress: (open) Retrieving server options

Progress: (open) Retrieving media info

Progress: (request) SETUP stream 0

Progress: (open) Opened Stream

Setting pipeline to playing …

New clock: GstSystemClock

Progress: (request) Sending PLAY request

Reallocating latency time…

Progress: (request) Sending PLAY request

Reallocating latency time…

Progress: (request) Sent PLAY request

NvMMLiteOpen : Block : BlockType = 261

NvMMLiteBlockCreate : Block : BlockType = 261

Reallocating latency time…

Reallocating latency time…

Error: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: Error parsing H.264 stream
Additional debug info:

../gst/videoparsers/gsth264parse.c(1454): gst_h264_parse_handle_frame (): /GstPipeline:pipeline0/GstH264Parse:h264parse0:

No H.264 NAL unit found
Execution ended after 0:01:04.093865268

Setting pipeline to NULL …

Freeing pipeline resources …

My NVDEC is activated, running at 112 MHz (is 112 MHz a bit low?). Another issue is that the memory usage keeps increasing.

There are three questions in total:
1. Is there any problem with my command?
2. Is 112 MHz a bit low?
3. Why does the memory usage keep increasing?

Thank you again for your answer.

I have replied to you. You can take a look.

Hi,
Please try this:
UDP video from ffmpeg to gstreamer - #8 by DaneLLL

It seems like the source is an MPEG-4 stream. The hardware decoder does not support this format, so the pipeline should pick a software decoder.

nvidia@tegra-ubuntu:~/yolo$ ffprobe rtsp://admin:123456@192.168.1.170:554/mpeg4
ffprobe version 4.4.2-1ubuntu0.1 Copyright (c) 2007-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04)
configuration: --prefix=/usr --enable-nvv4l2dec --enable-libv4l2 --enable-shared --extra-libs='-L/usr/lib/aarch64-linux-gnu/tegra -lv4l2 -lnvbufsurface -lnvbufsurftransform' --extra-cflags=-I/usr/src/jetson_multimedia_api/include/ --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Input #0, rtsp, from 'rtsp://admin:123456@192.168.1.170:554/mpeg4':
Metadata:
title : RTSP/RTP stream from Network Video Server
comment : mpeg4
Duration: N/A, start: 0.039000, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 180k tbc

But it shows H.264 like this.

Hi,
It looks to be H.264, but the hardware decoder cannot decode the stream. Please try the software decoder avdec_h264 and see if the stream can be decoded.

Setting pipeline to paused …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Pre-roll loaded, waiting for processing to complete…
Progress: (connect) Connecting to rtsp://admin:123456@192.168.1.170:554/mpeg4
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to playing …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Reallocating latency time…
Progress: (request) Sending PLAY request
Reallocating latency time…
Progress: (request) Sent PLAY request
Reallocating latency time…
Reallocating latency time…
0:01:34.1 / 99:99:99.

The above is the result after I entered the following command:

gst-launch-1.0 rtspsrc location=rtsp://admin:123456@192.168.1.170:554/mpeg4 latency=200 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,width=1280,height=720,format=BGR ! appsink

However, after executing the above command, my memory usage keeps increasing until the entire 16 GB of RAM and 8 GB of swap are filled up.

Hi,
In the gst-launch-1.0 command, please run with fakesink, and replace it with appsink when applying the pipeline to cv2.VideoCapture().
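The swap the reply describes can be sketched as a small helper (the helper name is my own, not a GStreamer API). The underlying point: under gst-launch-1.0 nothing pulls buffers from an appsink, so they queue up without bound, which may also explain the memory growth seen above; fakesink simply discards them:

```python
# Sketch of the suggested swap. "for_gst_launch" is a hypothetical helper
# name. appsink is only appropriate when an application (e.g. cv2) pulls
# the buffers; for CLI testing, fakesink discards them instead.
APPSINK_PIPELINE = (
    "rtspsrc location=rtsp://admin:123456@192.168.1.170:554/mpeg4 latency=200 ! "
    "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! "
    "video/x-raw,width=1280,height=720,format=BGR ! appsink"
)

def for_gst_launch(pipeline):
    # Replace the app-facing sink with a discarding sink for CLI testing.
    return pipeline.replace("appsink", "fakesink")
```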

Thank you very much, I have solved this problem.

I’ve found where my problem lies. After executing print(cv2.getBuildInformation()):

General configuration for OpenCV 4.10.0 =====================================
Version control: 4.10.0-dirty

Platform:
Timestamp: 2024-06-17T18:00:16Z
Host: Linux 5.3.0-28-generic aarch64
CMake: 3.29.5
CMake generator: Unix Makefiles
CMake build tool: /bin/gmake
Configuration: Release

CPU/HW features:
Baseline: NEON FP16
Dispatched code generation: NEON_DOTPROD NEON_FP16 NEON_BF16
requested: NEON_FP16 NEON_BF16 NEON_DOTPROD
NEON_DOTPROD (1 files): + NEON_DOTPROD
NEON_FP16 (2 files): + NEON_FP16
NEON_BF16 (0 files): + NEON_BF16

C/C++:
Built as dynamic libs?: NO
C++ standard: 11
C++ Compiler: /opt/rh/devtoolset-10/root/usr/bin/c++ (ver 10.2.1)
C++ flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /opt/rh/devtoolset-10/root/usr/bin/cc
C flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
Linker flags (Debug): -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
ccache: YES
Precompiled headers: NO
Extra dependencies: /lib64/libopenblas.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent /usr/local/lib/libpng.so /lib64/libz.so dl m pthread rt
3rdparty dependencies: libprotobuf ade ittnotify libjpeg-turbo libwebp libtiff libopenjp2 IlmImf tegra_hal

OpenCV modules:
To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio
Disabled: world
Disabled by dependency: -
Unavailable: java python2 ts
Applications: -
Documentation: NO
Non-free algorithms: NO

GUI: QT5
QT: YES (ver 5.15.13 )
QT OpenGL support: NO
GTK+: NO
VTK support: NO

Media I/O:
ZLib: /lib64/libz.so (ver 1.2.7)
JPEG: build-libjpeg-turbo (ver 3.0.3-70)
SIMD Support Request: YES
SIMD Support: YES
WEBP: build (ver encoder: 0x020f)
PNG: /usr/local/lib/libpng.so (ver 1.6.43)
TIFF: build (ver 42 - 4.6.0)
JPEG 2000: build (ver 2.5.0)
OpenEXR: build (ver 2.3.0)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES

Video I/O:
DC1394: NO
FFMPEG: YES
avcodec: YES (59.37.100)
avformat: YES (59.27.100)
avutil: YES (57.28.100)
swscale: YES (6.7.100)
avresample: NO
GStreamer: NO
v4l/v4l2: YES (linux/videodev2.h)

Parallel framework: pthreads

Trace: YES (with Intel ITT)

Other third-party libraries:
Lapack: YES (/lib64/libopenblas.so)
Eigen: NO
Custom HAL: YES (carotene (ver 0.0.1, Auto detected))
Protobuf: build (3.19.1)
Flatbuffers: builtin/3rdparty (23.5.9)

OpenCL: YES (no extra features)
Include path: /io/opencv/3rdparty/include/opencl/1.2
Link libraries: Dynamic load

Python 3:
Interpreter: /opt/python/cp39-cp39/bin/python3.9 (ver 3.9.19)
Libraries: libpython3.9m.a (ver 3.9.19)
Limited API: YES (ver 0x03060000)
numpy: /home/ci/.local/lib/python3.9/site-packages/numpy/_core/include (ver 2.0.0)
install path: python/cv2/python-3

Python (for build): /opt/python/cp39-cp39/bin/python3.9

Java:
ant: NO
Java: NO
JNI: NO
Java wrappers: NO
Java tests: NO

Install to: /io/_skbuild/linux-aarch64-3.9/cmake-install

It turns out my OpenCV does not support GStreamer. Could it be because my OpenCV was not installed via JetPack?
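The relevant line of the build report can also be checked programmatically. The parsing helper below is my own sketch, not an OpenCV API; it just scans the text returned by cv2.getBuildInformation() for the "GStreamer:" entry:

```python
# Hypothetical helper: report whether an OpenCV build-information dump
# (as returned by cv2.getBuildInformation()) lists GStreamer as enabled.
def gstreamer_enabled(build_info):
    for line in build_info.splitlines():
        if line.strip().startswith("GStreamer:"):
            return "YES" in line
    return False

# usage: gstreamer_enabled(cv2.getBuildInformation())
```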

Since my programming language is Python, and all subsequent data preprocessing, model inference, and post-processing are written in Python, could you give me a suggestion? What would be a good choice for hard decoding? Should I use OpenCV with GStreamer, jetson-utils, or something else? I would really appreciate your advice. Thank you again.

Hi,
The default OpenCV has GStreamer enabled; you probably replaced it. You may try the script to manually build and install OpenCV again:
Jetson AGX Orin FAQ

Since my programming language is Python, and all subsequent data preprocessing, model inference, and post-processing are written in Python, could you give me a suggestion? What would be a good choice for hard decoding? Should I use OpenCV with GStreamer, jetson-utils, or something else? I would really appreciate your advice. Thank you again.

Is it inconvenient to answer the above question?

Hi,
Using python + OpenCV should be fine, there are samples for reference:
Increasing play speed decoding from mp4 file - #9 by DaneLLL
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL
Stream processed video with OpenCV on Jetson TX2 - #5 by DaneLLL

Okay, thank you. I’ll recompile OpenCV first. I’ll come to you again if I have any problems.

Q: How to build OpenCV with CUDA support on JetPack 6.0 GA?

An auto-build script is available. Please check here for info.

Is this what you’re referring to?