Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (A40)
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.4
• NVIDIA GPU Driver Version (valid for GPU only): 535.54.03
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)
I run the DeepStream program on the Ubuntu A40 server and pull images from an RTSP camera in the factory area, but the following exception occurs. Is it caused by network isolation? I can see the picture normally when I connect to the same RTSP stream with VLC. I am a little confused now; can you help me?
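In case it helps to narrow things down, here is a minimal sketch of how the camera could be probed from the server with plain GStreamer, without any DeepStream elements (assuming the python3-gi / GStreamer packages are installed; the RTSP URI is only a placeholder):

```python
# Probe an RTSP source with plain GStreamer (software decode, no DeepStream)
# to separate network/reachability problems from CUDA problems.
# Assumes python3-gi and GStreamer are installed; the URI is a placeholder.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
uri = sys.argv[1] if len(sys.argv) > 1 else "rtsp://xxx/Streaming/Channels/101"

# playbin handles the RTSP handshake and both the audio and video branches;
# fakesink discards the decoded data, we only care whether preroll succeeds.
playbin = Gst.ElementFactory.make("playbin", "probe")
playbin.set_property("uri", uri)
playbin.set_property("video-sink", Gst.ElementFactory.make("fakesink", None))
playbin.set_property("audio-sink", Gst.ElementFactory.make("fakesink", None))

playbin.set_state(Gst.State.PLAYING)
msg = playbin.get_bus().timed_pop_filtered(
    15 * Gst.SECOND, Gst.MessageType.ERROR | Gst.MessageType.ASYNC_DONE)
if msg is None:
    print("timed out waiting for preroll")
elif msg.type == Gst.MessageType.ERROR:
    err, _dbg = msg.parse_error()
    print("RTSP probe failed:", err.message)
else:
    print("RTSP stream is reachable and negotiated")
playbin.set_state(Gst.State.NULL)
```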
Executing the following command does not work. Why is this? I installed everything according to the official guide (Quickstart Guide — DeepStream 6.1.1 Release documentation, dgpu-setup-for-ubuntu), but an exception occurred while installing 'librdkafka', so I skipped that step.
SSTY-001:/home/cv/deepstream_python_apps-1.1.4/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
Traceback (most recent call last):
File "deepstream_test_1.py", line 28, in <module>
    import pyds
ModuleNotFoundError: No module named 'pyds'
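As I understand it, the Python samples need the pyds binding to be installed separately from the SDK itself, so a quick check of whether this interpreter can see the binding at all might look like the sketch below (diagnostic only; the version attribute is guarded because I am not sure every binding release exposes it):

```python
# Check whether the pyds binding is importable and where it comes from.
# Diagnostic sketch only; paths differ per machine.
import importlib.util

spec = importlib.util.find_spec("pyds")
if spec is None:
    print("pyds is not installed for this interpreter; the deepstream_python_apps "
          "bindings (pyds wheel) have to be built and installed first")
else:
    import pyds
    print("pyds found at:", spec.origin)
    # Guarded: not certain every binding release exposes __version__.
    print("binding version:", getattr(pyds, "__version__", "unknown"))
```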
Hello, I have solved the network problem, but it still cannot run. How do I deal with this 'Cuda failure: status=801'? Do I need to build the Python bindings? I remember that after installing the environment on other machines according to the official tutorial, the program ran normally without the Python bindings.
2023-08-28 11:12:24 - INFO - Creating Pipeline
2023-08-28 11:12:24 - INFO - Creating streamux
2023-08-28 11:12:24 - INFO - bin_name:source-bin-00
2023-08-28 11:12:24 - INFO - Creating nvvidconv1
2023-08-28 11:12:24 - INFO - Creating filter1
2023-08-28 11:12:24 - INFO - Creating Fakesink
2023-08-28 11:12:24 - INFO - Now playing...
2023-08-28 11:12:24 - INFO - 0:rtsp://xxx/Streaming/Channels/101
2023-08-28 11:12:24 - INFO - Starting pipeline
2023-08-28 11:12:24 - INFO - Decodebin child added: source
2023-08-28 11:12:24 - INFO - Decodebin child added: decodebin0
2023-08-28 11:12:24 - INFO - Decodebin child added: rtph264depay0
2023-08-28 11:12:24 - INFO - Decodebin child added: h264parse0
2023-08-28 11:12:24 - INFO - Decodebin child added: capsfilter0
2023-08-28 11:12:24 - INFO - Decodebin child added: decodebin1
2023-08-28 11:12:24 - INFO - Decodebin child added: rtppcmadepay0
2023-08-28 11:12:24 - INFO - Decodebin child added: nvv4l2decoder0
2023-08-28 11:12:24 - INFO - only decode key frame
2023-08-28 11:12:24 - INFO - Decodebin child added: alawdec0
Cuda failure: status=801
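For reference, this is a small sketch I can use to translate the numeric status from the log above into the CUDA error name via the runtime library (assuming a libcudart.so from the CUDA toolkit is resolvable by the loader; otherwise the versioned soname would be needed):

```python
# Map the numeric CUDA status from the log (801) to its symbolic name using
# the CUDA runtime through ctypes. Assumes libcudart.so can be dlopen'ed;
# use the versioned soname (e.g. libcudart.so.11.8) if the plain name is missing.
import ctypes

cudart = ctypes.CDLL("libcudart.so")
cudart.cudaGetErrorName.restype = ctypes.c_char_p
cudart.cudaGetErrorString.restype = ctypes.c_char_p

status = 801
print(cudart.cudaGetErrorName(status).decode())    # expected: cudaErrorNotSupported
print(cudart.cudaGetErrorString(status).decode())  # human-readable description
```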
To reduce the gap between the DeepStream version and the existing software on the A40 server, I reinstalled DeepStream 6.3 and ran ./autogen.sh while building the Python bindings. The exception is shown below. How should I proceed?
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/3rdparty/gst-python# deepstream-app --version-all
deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.1
TensorRT Version: 8.5
cuDNN Version: 8.7
libNVWarp360 Version: 2.0.1d3
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/3rdparty/gst-python# ./autogen.sh
ln: failed to create symbolic link '.git/hooks/pre-commit': Not a directory
+ check for build tools
checking for autoconf >= 2.60 ... found 2.69, ok.
checking for automake >= 1.10 ... found 1.16.1, ok.
checking for libtoolize >= 1.5.0 ... found 2.4.6, ok.
checking for pkg-config >= 0.8.0 ... found 0.29.1, ok.
+ checking for autogen.sh options
This autogen script will automatically run ./configure as:
./configure --enable-maintainer-mode
To pass any additional options, please specify them on the ./autogen.sh
command line.
+ running libtoolize --copy --force...
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac,
libtoolize: and rerunning libtoolize and aclocal.
+ running aclocal -I m4 -I common/m4 ...
+ running autoheader ...
+ running autoconf ...
+ running automake -a -c -Wno-portability...
configure.ac:47: installing './compile'
configure.ac:13: installing './missing'
gi/overrides/Makefile.am: installing './depcomp'
plugin/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
+ running configure ...
./configure default flags: --enable-maintainer-mode
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether UID '0' is supported by ustar format... yes
checking whether GID '0' is supported by ustar format... yes
checking how to create a ustar tar archive... gnutar
checking nano version... 0 (release)
checking whether to enable maintainer-specific portions of Makefiles... yes
checking whether make supports nested variables... (cached) yes
checking how to print strings... printf
checking whether make supports the include directive... yes (GNU style)
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking for a sed that does not truncate output... /usr/bin/sed
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for fgrep... /usr/bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for a working dd... /usr/bin/dd
checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1
checking for mt... mt
checking if mt is a manifest tool... no
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for shl_load... no
checking for shl_load in -ldld... no
checking for dlopen... no
checking for dlopen in -ldl... yes
checking whether a program can dlopen itself... yes
checking whether a statically linked program can dlopen itself... no
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether gcc understands -c and -o together... (cached) yes
checking dependency style of gcc... (cached) gcc3
checking for gcc option to accept ISO C99... none needed
checking for gcc option to accept ISO Standard C... (cached) none needed
checking for python... no
checking for python2... no
checking for python3... /usr/bin/python3
checking for python version... 3.8
checking for python platform... linux
checking for python script directory... ${prefix}/lib/python3.8/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python3.8/site-packages
checking for python >= 2.7... checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for GST... yes
checking for PYGOBJECT... yes
okay
checking for headers required to compile python extensions... found
checking for pygobject overrides directory... ${exec_prefix}/lib/python3.8/site-packages/gi/overrides
checking for GST... yes
configure: Using /usr/local/lib/gstreamer-1.0 as the plugin install location
checking for PYGOBJECT... yes
checking for libraries required to embed python... no
configure: error: Python libs not found. Windows requires Python modules to be explicitly linked to libpython.
configure failed
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/3rdparty/gst-python# python
Command 'python' not found, did you mean:
  command 'python3' from deb python3
  command 'python' from deb python-is-python3
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/3rdparty/gst-python# python3
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
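In case it helps diagnose the "Python libs not found" error from configure above, here is a small sketch that prints what an embed-python check typically looks for on this interpreter (the Ubuntu package names mentioned in the comments are my assumption):

```python
# Print the config values a configure-style "embed python" check usually relies on.
# Diagnostic sketch only; the package names below are my assumption for Ubuntu.
import sysconfig

for var in ("LIBDIR", "LIBPL", "LDLIBRARY", "BLDLIBRARY", "LINKFORSHARED"):
    print(f"{var:14s} = {sysconfig.get_config_var(var)}")

# If the unversioned libpython3.8.so and the Python.h headers are missing here,
# they normally come from the python3.8-dev / libpython3.8-dev packages, which
# would need to be installed before re-running ./autogen.sh.
```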
I have completed the Python binding and can successfully execute deepstream-test1, but when I run my own DeepStream program, I still get an error. Why is this?
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
0:00:00.170115632 2147632 0x3ed42d0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:02.834150027 2147632 0x3ed42d0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:02.957512413 2147632 0x3ed42d0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.961472842 2147632 0x3ed42d0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
0:00:03.547790887 2147632 0x34cd6a0 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:03.547818769 2147632 0x34cd6a0 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2397): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=2 Number of Objects=11 Vehicle_count=7 Person_count=4
nvstreammux: Successfully handled EOS for source_id=0
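(A side note on the run above: since the A40 server is headless, the not-negotiated error from the EGL sink may just be the missing display; below is a minimal sketch of how the sink in deepstream_test_1.py could be swapped, with property names as in the standard GStreamer fakesink.)

```python
# On a headless server the EGL sink in deepstream_test_1.py has no display to
# negotiate with; a fakesink lets the pipeline run while the probe still prints
# the per-frame object counts. (my assumption about the cause of not-negotiated)
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# instead of: sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink = Gst.ElementFactory.make("fakesink", "fakesink")
sink.set_property("sync", False)                # do not pace against the clock
sink.set_property("enable-last-sample", False)  # do not hold the last buffer
```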
Exception log:
2023-08-30 15:57:22 - INFO - Creating Pipeline
2023-08-30 15:57:22 - INFO - Creating streamux
2023-08-30 15:57:22 - INFO - bin_name:source-bin-00
2023-08-30 15:57:22 - INFO - Creating nvvidconv1
2023-08-30 15:57:22 - INFO - Creating filter1
2023-08-30 15:57:22 - INFO - Creating Fakesink
2023-08-30 15:57:22 - INFO - Now playing...
2023-08-30 15:57:22 - INFO - 0:rtsp://xxx/Streaming/Channels/101
2023-08-30 15:57:22 - INFO - Starting pipeline
2023-08-30 15:57:22 - INFO - Decodebin child added: source
2023-08-30 15:57:22 - INFO - Decodebin child added: decodebin0
2023-08-30 15:57:22 - INFO - Decodebin child added: rtph264depay0
2023-08-30 15:57:22 - INFO - Decodebin child added: h264parse0
2023-08-30 15:57:22 - INFO - Decodebin child added: capsfilter0
2023-08-30 15:57:22 - INFO - Decodebin child added: nvv4l2decoder0
2023-08-30 15:57:22 - INFO - only decode key frame
2023-08-30 15:57:22 - INFO - Decodebin child added: decodebin1
2023-08-30 15:57:22 - INFO - Decodebin child added: rtppcmadepay0
2023-08-30 15:57:22 - INFO - Decodebin child added: alawdec0
Cuda failure: status=801
Error(-1) in buffer allocation
** (python3:2142481): CRITICAL **: 15:57:22.945: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
2023-08-30 15:57:22 - INFO - Error: gst-resource-error-quark: Failed to allocate the buffers inside the Nvstreammux output pool (1),gstnvstreammux.cpp(866): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
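For completeness, this is roughly where I would try pinning the GPU id and NVMM buffer memory type explicitly, since the failure above happens while allocating the nvstreammux output pool (the property names are the documented nvstreammux ones; the value 0 is only my assumption for this single-A40 dGPU setup):

```python
# Set the GPU id and NVMM buffer memory type on the streammux explicitly,
# since the allocation error comes from its output buffer pool.
# Property names are the documented nvstreammux ones; 0 is my assumption
# for "default / device memory" on this single-A40 dGPU setup.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 4000000)
streammux.set_property("gpu-id", 0)
streammux.set_property("nvbuf-memory-type", 0)
```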
Do you still need me to list any other hardware or software version information, or anything else? This problem has been torturing me for a while; I hope I can get your help.
I executed this command: root@SSTY-001:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app# deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt. The output is as I described above; does it look normal? Does it mean that my CUDA and DeepStream runtime environment is fine?
However, when I execute my custom DeepStream Python code, it does not run normally. The same code runs fine on other machines in a DeepStream 6.1.1 environment, but it reports an error in the A40 DeepStream 6.3 environment. The debugging information is as follows: