I tried your Dockerfile as below, and I can't reproduce your issue.
Build the Docker image
$ docker build -t mchi_ds_test_docker --network=host .
Dockerfile (based on the one you posted)
FROM nvcr.io/nvidia/deepstream:4.0.2-19.12-devel
ADD python_release_bind /root/deepstream_sdk_v4.0.2_x86_64/sources/python
#RUN ls /app
RUN apt-get update -y
RUN apt-get install python-gi-dev -y
# NOTE: each RUN step starts a fresh shell, so "RUN export ..." is lost
# before the next step; use ENV so the variables persist across the build.
ENV GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
ENV GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
RUN apt-get install git -y
RUN mkdir -p /app/panel_factory/research/deepstream
# "RUN cd ..." does not persist to the next step; WORKDIR changes the
# directory for all following instructions, so the clone below lands here.
WORKDIR /app/panel_factory/research/deepstream
RUN git clone https://github.com/GStreamer/gst-python.git
RUN cd gst-python && \
    git checkout 1a8f48a && \
    apt-get install autoconf automake libtool python-dev python3-dev libgstreamer1.0-dev -y && \
    ./autogen.sh PYTHON=python3 && \
    ./configure PYTHON=python3 && \
    make && \
    make install
EXPOSE 8554
RUN ls /root/deepstream_sdk_v4.0.2_x86_64/sources
RUN ls /root/deepstream_sdk_v4.0.2_x86_64/sources/python
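As a quick sanity check before launching, you can confirm that the PyGObject ("gi") module the gst-python bindings depend on is importable. This is a hypothetical helper, not part of the original steps; run it with python3 inside the container:

```python
# Hypothetical sanity check: verify the named module can be found by this
# interpreter, without actually loading GStreamer itself.
import importlib.util

def bindings_available(module="gi"):
    """Return True if the named module is importable in this interpreter."""
    return importlib.util.find_spec(module) is not None

# Inside the container this should report True for "gi";
# on a machine without PyGObject it will report False.
print(bindings_available("gi"))
```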
Launch the container
$ docker run --gpus all -it --rm -p 8554:8554 -w /root mchi_ds_test_docker
Operate inside the container
1). In deepstream_test_1.py, change sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer") to sink = Gst.ElementFactory.make("fakesink", "fakesink"), since no display is available inside the container
2). Run the sample app
root@72df2daef6ea:~/deepstream_sdk_v4.0.2_x86_64/sources/python/apps/deepstream-test1# python3 deepstream_test_1.py /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Creating LL OSD context new
0:00:00.500260585 161 0x1a68190 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:09.939989665 161 0x1a68190 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Creating LL OSD context new
Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=1 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=2 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=3 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=4 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=5 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=6 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=7 Number of Objects=6 Vehicle_count=4 Person_count=2
…
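As an aside, the per-frame lines above use a fixed key=value layout, so a captured log is easy to post-process. A small hypothetical helper (not part of the DeepStream sample) to tally vehicle and person counts:

```python
import re

# Matches DeepStream test-app frame lines such as:
# "Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2"
FRAME_RE = re.compile(
    r"Frame Number=(\d+) Number of Objects=(\d+) "
    r"Vehicle_count=(\d+) Person_count=(\d+)"
)

def tally(log_lines):
    """Return (frames_seen, total_vehicles, total_persons) from a log."""
    frames = vehicles = persons = 0
    for line in log_lines:
        m = FRAME_RE.search(line)
        if m:
            frames += 1
            vehicles += int(m.group(3))
            persons += int(m.group(4))
    return frames, vehicles, persons

sample = [
    "Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2",
    "Frame Number=3 Number of Objects=6 Vehicle_count=4 Person_count=2",
    "Creating LL OSD context new",  # non-frame lines are skipped
]
print(tally(sample))  # -> (2, 7, 4)
```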