Runtime Error - Unable to Read Configuration

DeepStream Version - 7.1
Docker Image - nvcr.io/nvidia/deepstream:7.1-gc-triton-devel
GPU - RTX A6000
NVIDIA GPU Driver - 535.183.01

Upon running a standard pipeline implementation, I’m getting the following error -

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.251994325 63355 0x55dffc8e99c0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_tao_apps/models/peoplenet/1/resnet34_peoplenet_int8.onnx_b2_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:327 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 2x3x544x960     Max: 2x3x544x960     
1   OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60        min: 0               opt: 0               Max: 0               

0:00:00.252035944 63355 0x55dffc8e99c0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_tao_apps/models/peoplenet/1/resnet34_peoplenet_int8.onnx_b2_gpu0_int8.engine
0:00:00.255168540 63355 0x55dffc8e99c0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<detector> [UID 1]: Load new model:config.txt sucessfully
terminate called after throwing an instance of 'std::runtime_error'
  what():  Unable to read configuration
Aborted (core dumped)

Following is the gdb output from the Python program -

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffed691640 (LWP 61697)]
Initializing gstreamer

Creating pipeline

Creating nvstreammux

Creating source bin source-bin-000 for stream rtsp://admin:Vct280620@10.232.80.90:560/Streaming/Channels/101

Creating nvvideoconvert
 
Creating capsfilter

Creating nvinfer

Creating nvtracker

Creating nvvideoconvert

Creating nvdsosd

Creating nveglglessink

Setting pipeline element properties

[New Thread 0x7fffd45ff640 (LWP 61698)]
Creating plugins

Creating queues

Adding elements to the pipeline

Linking elements in the pipeline

Starting pipeline

[New Thread 0x7fffd11ba640 (LWP 61699)]
[New Thread 0x7fffd0938640 (LWP 61708)]
[New Thread 0x7fffc08b7640 (LWP 61714)]
[New Thread 0x7fffbafde640 (LWP 61715)]
[New Thread 0x7fffa5fff640 (LWP 61716)]
[New Thread 0x7fffa57fe640 (LWP 61717)]
[New Thread 0x7fffa4ffd640 (LWP 61718)]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
[New Thread 0x7fff49fff640 (LWP 61735)]
[New Thread 0x7fff497fe640 (LWP 61736)]
[New Thread 0x7fff48ffd640 (LWP 61737)]
0:00:00.553069733 61678 0x55555a5c3e90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_tao_apps/models/peoplenet/1/resnet34_peoplenet_int8.onnx_b2_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:327 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 2x3x544x960     Max: 2x3x544x960     
1   OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60        min: 0               opt: 0               Max: 0               

0:00:00.553118125 61678 0x55555a5c3e90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_tao_apps/models/peoplenet/1/resnet34_peoplenet_int8.onnx_b2_gpu0_int8.engine
[New Thread 0x7fff3bfff640 (LWP 61738)]
[New Thread 0x7fff3b7fe640 (LWP 61739)]
[New Thread 0x7fff3affd640 (LWP 61740)]
0:00:00.556646238 61678 0x55555a5c3e90 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<detector> [UID 1]: Load new model:config.txt sucessfully
[New Thread 0x7fff3a7fc640 (LWP 61741)]
[New Thread 0x7fff39ffb640 (LWP 61742)]
[New Thread 0x7fff397fa640 (LWP 61743)]
[New Thread 0x7fff38ff9640 (LWP 61751)]
[New Thread 0x7fff23fff640 (LWP 61752)]
[New Thread 0x7fff237fe640 (LWP 61753)]
[Detaching after fork from child process 61754]
[New Thread 0x7fff22ffd640 (LWP 61755)]
[Switching to Thread 0x7fff22ffd640 (LWP 61755)]

Thread 23 "pool-python3" hit Catchpoint 1 (exception thrown), 0x00007ffff6c4f4a1 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) bt
#0  0x00007ffff6c4f4a1 in __cxa_throw () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007fffd0030856 in  () at /usr/lib/x86_64-linux-gnu/libproxy.so.1
#2  0x00007fffd0039827 in px_proxy_factory_get_proxies () at /usr/lib/x86_64-linux-gnu/libproxy.so.1
#3  0x00007fffd55e6827 in  () at /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
#4  0x00007fffecbe1644 in g_task_thread_pool_thread (thread_data=0x7fff1c012de0, pool_data=<optimized out>) at ../gio/gtask.c:1531
#5  0x00007ffff6e7d384 in g_thread_pool_thread_proxy (data=<optimized out>) at ../glib/gthreadpool.c:350
#6  0x00007ffff6e7cac1 in g_thread_proxy (data=0x7fff980027c0) at ../glib/gthread.c:831
#7  0x00007ffff7cd5ac3 in  () at /usr/lib/x86_64-linux-gnu/libc.so.6
#8  0x00007ffff7d67850 in  () at /usr/lib/x86_64-linux-gnu/libc.so.6

What could be causing this error?
Attached is my code for your reference. Thanks in advance!

video_intel.txt (9.8 KB)

How do you start Docker and run the program? I can run your program normally.

I do it as follows

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 nvcr.io/nvidia/deepstream:7.1-gc-triton-devel
 python3 user.py -c dstest3_pgie_config.txt -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 

I’m running the container in a detached state as -

sudo docker run -d --name video_intel --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 video_intel

and then I access it as -

sudo docker exec -it video_intel bash

and then I run the code as follows -

python3 video_intel.py -i <rtsp_stream_uri> -c config.txt

Also, before any of this, I run xhost + to disable access control to the X11 server.
I’m running my program on RTSP feeds, which I have verified are working.

Can you share the config.txt file? Or try using my command line to make sure there is no problem with the installation.

In addition, did you build this docker image based on 7.1-gc-triton-devel? Did you make any changes?

Attached is the config.txt file.
config.txt (1.7 KB)

My code was working fine with the same config file until I added the source probe for the detector - detector_src_pad_buffer_probe().
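For context, a detector src-pad probe in the Python bindings typically walks the batch's frame-meta linked list. The actual probe code is in the attached video_intel.txt; the sketch below is only a runnable stand-in that shows the traversal pattern, using `SimpleNamespace` nodes in place of the real `pyds.NvDsFrameMeta` objects (in the real probe, the list head comes from `pyds.gst_buffer_get_nvds_batch_meta()` and advancing the iterator is wrapped in a `try/except StopIteration`). The function name and fields here are illustrative, not taken from the attachment.

```python
from types import SimpleNamespace

def count_objects_per_frame(frame_meta_list):
    """Walk a frame-meta style linked list and tally detections per frame.

    Stand-in for the traversal a detector src-pad probe performs; with
    pyds, each node would be cast via pyds.NvDsFrameMeta.cast(l_frame.data).
    """
    counts = {}
    l_frame = frame_meta_list
    while l_frame is not None:
        counts[l_frame.frame_num] = l_frame.num_obj_meta
        l_frame = l_frame.next
    return counts

# Two stand-in frames, as nvstreammux would batch them:
frame1 = SimpleNamespace(frame_num=1, num_obj_meta=2, next=None)
frame0 = SimpleNamespace(frame_num=0, num_obj_meta=3, next=frame1)
print(count_objects_per_frame(frame0))  # {0: 3, 1: 2}
```

Note that a probe callback itself must always return quickly and hand back `Gst.PadProbeReturn.OK`; heavy work inside it can stall the pipeline, but it would not normally throw a `std::runtime_error` from C++ land.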

Yes, my current setup was built on 7.1-gc-triton-devel, and the following is how the installation was performed -

xhost +

sudo docker pull nvcr.io/nvidia/deepstream:7.1-gc-triton-devel
sudo docker run -d --name video_intel --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 nvcr.io/nvidia/deepstream:7.1-gc-triton-devel
sudo docker exec -it video_intel bash

# install deepstream apps

./user_deepstream_python_apps_install.sh -b

# install every package that might be needed

apt install \
    libssl3 \
    libssl-dev \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstreamer-plugins-base1.0-dev \
    libgstrtspserver-1.0-0 \
    libjansson4 \
    libyaml-cpp-dev

apt-get install \
    libssl3 \
    libssl-dev \
    libgles2-mesa-dev \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstreamer-plugins-base1.0-dev \
    libgstrtspserver-1.0-0 \
    libjansson4 \
    libyaml-cpp-dev \
    libjsoncpp-dev \
    protobuf-compiler \
    gcc \
    make \
    git \
    python3

apt-get install cuda-toolkit-12-6
apt-get install libflac8 libmp3lame0 libxvidcore4 ffmpeg
apt install python3-gi python3-dev python3-gst-1.0
apt install libgirepository1.0-dev
apt install python3-opencv python3-numpy

# install tao models

apt install git-lfs
git lfs install --skip-repo

cd /opt/nvidia/deepstream/deepstream-7.1/sources/
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git

./download_models.sh
./build_triton_engine.sh

and then I commit everything to save my work in the docker container.

Using the same steps, I cannot reproduce the issue. Please check whether any configuration files have been deleted.

Also, you don’t have to install these dependencies yourself, including cuda-toolkit-12-6; this script will install the required packages:

./user_deepstream_python_apps_install.sh -b -v v1.2.0