I am writing Python code to process several RTSP streams on an 8 GB Jetson Orin NX, using OpenCV. I’ve been told it should be able to handle up to 12 4K streams at 15 fps, provided I use GStreamer for decoding. However, I can’t get it past 8 streams of 1280x960 at 15 fps.
Here’s the pipeline I am using:
rtspsrc location={rtsp_url} ! rtph264depay ! h264parse ! queue ! nvv4l2decoder ! queue ! nvvidconv ! video/x-raw, format=(string)RGBA ! appsink
This is the first of the many pipelines I tried that actually used the hardware decoder (for each pipeline I tried, I checked NVDEC usage with both jtop and tegrastats).
Here’s my test code:
gst_pipeline = f"rtspsrc location={rtsp_url} ! rtph264depay ! h264parse ! queue ! nvv4l2decoder {nv_args} ! queue ! nvvidconv ! video/x-raw, format=(string)RGBA ! appsink"
cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
print(f"Resolution: {w}x{h}")
bar = tqdm(desc="Capturing...", leave=False) # This will print the iterations per second
while cap.isOpened():
ret, frame = cap.read()
bar.update(1)
if not ret:
print("Failed to capture frame")
print("Cap closed")
And here’s the output I get whenever I try to spawn a new capture process with 8 already running in the background:
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
InitNVDEC: Host1x channel open failed
NVMEDIA: NvMMDecNvVideoCreateParser: 7930: - Failed to get NVDEC Channel handle
NvMMLiteBlockCreate : Block : BlockType = 261
NVDEC_COMMON: Host1x channel open failed
******tegraH264DecoderCreate 1477 Failed in the API InitNVDEC_safe, calling tegraH264DecoderDestroy ******
NVMMLITE_NVVIDEODEC, <cbBeginSequence, 1776> ErrorInfo = VideoErrorInfo_NvVideoDecoderCreate cctx = 0x580498e0
NVMMLITE_NVVIDEODEC, <NvVideoBufferProcessing:6596> video_parser_parse Unsupported Codec
Stream format not found, dropping the frame
Stream format not found, dropping the frame
NVMMLITE_NVVIDEODEC, <NvMMLiteNvVideoDecDoWork:7100> NVVIDEO Video Dec Unsupported Stream
NVMMLITE_NVVIDEODEC, <NvVideoBufferProcessing:6596> video_parser_parse Unsupported Codec
NVMMLITE_NVVIDEODEC, <NvMMLiteNvVideoDecDoWork:7100> NVVIDEO Video Dec Unsupported Stream
NVMMLITE_NVVIDEODEC, <NvVideoBufferProcessing:6596> video_parser_parse Unsupported Codec
[ WARN:0@0.612] global cap_gstreamer.cpp:2838 handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvv4l2decoder0 reported: Failed to process frame.
malloc_consolidate(): invalid chunk size
Aborted (core dumped)
Does anyone have an idea of how I could improve this pipeline?
Hi,
For information, do you run the decoding tasks in a single process with multiple decoding threads, or in multiple processes?
Hi DaneLLL, each decoding task is run as a separate process. Thanks for the quick reply :)
Hi,
Please apply the patches and rebuild the kernel:
Jetson AGX Orin FAQ
And try again.
Hi DaneLLL
I started over with a clean installation of my dev kit on JetPack 6.2 (L4T 36.4.3, kernel 5.15.148-tegra). For some reason I am now unable to install OpenCV with GStreamer support. Is there an official NVIDIA tutorial for that?
The previous time I installed it, I followed this tutorial and got it working on the first try, but now I get the following error:
$ sudo make -j$(nproc)
[ 2%] Built target libjasper
[ 3%] Built target ade
c++: error: unrecognized command-line option ‘--param=ipcp-unit-growth=100000’; did you mean ‘--param=ipa-cp-unit-growth=’?
make[2]: *** [3rdparty/carotene/hal/carotene/CMakeFiles/carotene_objs.dir/build.make:76: 3rdparty/carotene/hal/carotene/CMakeFiles/carotene_objs.dir/src/absdiff.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:2140: 3rdparty/carotene/hal/carotene/CMakeFiles/carotene_objs.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 3%] Built target opencv_videoio_plugins
[ 10%] Built target libwebp
[ 10%] Building CXX object modules/core/CMakeFiles/opencv_core_pch_dephelp.dir/opencv_core_pch_dephelp.cxx.o
[ 10%] Building CXX object modules/ts/CMakeFiles/opencv_ts_pch_dephelp.dir/opencv_ts_pch_dephelp.cxx.o
cc1plus: warning: command-line option ‘-Wmissing-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command-line option ‘-Wmissing-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[ 14%] Built target libprotobuf
[ 14%] Linking CXX static library ../../lib/libopencv_core_pch_dephelp.a
[ 14%] Built target opencv_core_pch_dephelp
[ 14%] Linking CXX static library ../../lib/libopencv_ts_pch_dephelp.a
[ 14%] Built target opencv_ts_pch_dephelp
make: *** [Makefile:166: all] Error 2
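As a side note, a quick way to confirm whether a given cv2 build was compiled with GStreamer support is to inspect cv2.getBuildInformation(); a minimal sketch:

import cv2

# Print the GStreamer entry from OpenCV's build information; it should read
# "GStreamer: YES (...)" for a build that can open CAP_GSTREAMER pipelines.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())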
Hi DaneLLL, would using multiple decoding threads instead of one process per stream solve the issue?
Also, when will JetPack 6.3 with the kernel patch be released? I would rather wait for the official upgrade than apply the patches ourselves, since I’m not sure that is suitable for production. Thanks!
Hi,
The issue does not occur if you run multiple decoding threads in a single process. It would be great if you could try this.
And the patches are not included in the JetPack releases based on r36.4.x; they will be in a future r36.5.x release.
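A minimal sketch of the single-process, multi-threaded setup (reusing the pipeline from your test code; the URLs and stream count below are placeholders) could look like:

import threading

import cv2


def capture_stream(rtsp_url):
    # Same pipeline as in your test code, without the optional nv_args
    gst_pipeline = (
        f"rtspsrc location={rtsp_url} ! rtph264depay ! h264parse ! queue ! "
        f"nvv4l2decoder ! queue ! nvvidconv ! video/x-raw, format=(string)RGBA ! appsink"
    )
    cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
    cap.release()


# One process hosts all decoding threads instead of one process per stream.
urls = [f"rtsp://192.168.1.{i}/stream" for i in range(10, 22)]  # placeholder URLs
threads = [threading.Thread(target=capture_stream, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()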
Hi DaneLLL,
Thank you, I had completely misunderstood. I will try to apply the patches manually (if I understood correctly). To be frank, I am completely new to working with Jetson devices and I’m a bit lost. Could you please point me to a tutorial or post on how to apply these patches?
Thank you very much :)