Deepstream - Docker Runtime Error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.1-b110
• TensorRT Version: 8.2

I have created a Dockerfile based on the 6.0.1 base l4t container. The container runs inference on a video and sends the data to the cloud via the Kafka/Azure config. When I enter the container in interactive mode using sudo docker run -it --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream-6.0 -v /tmp/.X11-unix/:/tmp/.X11-unix containername, then cd into /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp and run the Python script, it works perfectly. However, when I try to run the container on its own, without interactive mode, I get the following two errors:

1)
sudo docker run --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp -v /tmp/.X11-unix/:/tmp/.X11-unix containername
-c cfg_azure.txt -p libnvds_azure_proto.so -i /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/videos/video1.h264 --no-display

This gives me an error of “Error: gst-library-error-quark: Could not configure supporting library. (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.cpp(402): legacy_gst_nvmsgbroker_start (): /GstPipeline:pipeline0/GstNvMsgBroker:nvmsg-broker:
unable to connect to broker library”

2)
sudo docker run --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream-6.0 -v /tmp/.X11-unix/:/tmp/.X11-unix containername
-c cfg_azure.txt -p libnvds_azure_proto.so -i /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/videos/video1.h264 --no-display

This gives me an error of “from common.is_aarch_64 import is_aarch64
ModuleNotFoundError: No module named ‘common’”

I am not sure what I am doing wrong

NB:
When I run interactively, I set the Dockerfile with

CMD ["/bin/bash"]
WORKDIR /opt/nvidia/deepstream/deepstream-6.0

When I switch to an ENTRYPOINT for the script, I use

ENTRYPOINT ["python3", "/opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp/main.py", "-c", "-p", "-i"]
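One detail worth noting (a guess, not a confirmed fix): the ENTRYPOINT above lists the flags -c, -p and -i without values, so the script only receives values if they are appended to the docker run command line. A sketch of a fully specified ENTRYPOINT, with the paths taken from elsewhere in this thread as assumptions:

```dockerfile
# Sketch only: flag values baked into the ENTRYPOINT so no extra
# arguments are needed at `docker run` time. All paths are assumptions
# based on the layout described in this thread.
WORKDIR /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp
ENTRYPOINT ["python3", "main.py", \
            "-c", "cfg_azure.txt", \
            "-p", "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_azure_proto.so", \
            "-i", "/opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/videos/video1.h264", \
            "--no-display"]
```

With everything baked in like this, a plain docker run --rm --net=host --runtime nvidia containername needs no extra arguments.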

Hi @seankearsney, what do you mean by interactive mode? Did you use a Docker image you created yourself?

Interactive mode is when I go into the Docker shell and run commands directly in the container. I am using “6.0.1 base l4t” as my base container.

Could you attach the Dockerfile you used to create your own image?

Dockerfile.txt (3.0 KB)
Here we go

It means it cannot connect to your message server.

It seems that you didn’t install the Python bindings.

Can you reconfirm that they can work normally in Docker?

Correct. When I change the entry to CMD ["/bin/bash"] and pass -it in my command to run the container, I can run my specified Python script from within the container directly, and there it works perfectly. But when I set the ENTRYPOINT to the Python script itself, it does not work.

Hi @seankearsney , Is cfg_azure.txt in the iotapp directory? You can try dumping conn_str from main.py to make sure it’s correct.
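For illustration, the conn_str dump suggested above could be sketched as follows. This assumes an INI-style cfg_azure.txt holding a conn_str-like key, as in the DeepStream samples; adjust the section and key names to match your file:

```python
# Sketch only: print whatever connection-string-like keys the config
# holds, so a wrong or empty value is visible before the pipeline runs.
# Assumes an INI-style cfg_azure.txt as in the DeepStream samples.
import configparser

def dump_conn_str(cfg_path):
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    for section in cfg.sections():
        for key, value in cfg.items(section):
            if "conn" in key:
                print(f"{section}.{key} = {value}")
                return value
    return None
```

Calling dump_conn_str("cfg_azure.txt") near the top of main.py would show the exact string the broker plugin is about to use.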

About the msgbroker problem:

Sometimes --net=host causes problems.
You can try pinging the Kafka broker first from the entrypoint instead of running the app.

About the pyds problem:

The common module path is relative to the app (../),
hence running from iotapp/ works but not from other directories.
You can adjust the path in the app if you run it from a different directory.
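For illustration, the path adjustment could look like this at the top of main.py. This is a sketch assuming the default deepstream_python_apps layout, where the common package sits one level above the app directory:

```python
# Sketch: make the shared `common` helper package importable regardless
# of the working directory. Assumes the default deepstream_python_apps
# layout, where common/ sits one level above the app directory.
import os
import sys

# __file__ is this script's own path; fall back to the cwd if undefined.
APP_DIR = os.path.dirname(os.path.abspath(globals().get("__file__", "main.py")))
APPS_ROOT = os.path.dirname(APP_DIR)  # .../deepstream_python_apps/apps

if APPS_ROOT not in sys.path:
    sys.path.insert(0, APPS_ROOT)
# After this, `from common.is_aarch_64 import is_aarch64` resolves even
# when the container starts in a different working directory.
```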

Thank you. I have appended paths at the top of my Python file, which lets the app run from the correct directory, so the pyds issue is resolved. I am still struggling with the broker connection error: how do I ping the broker in the entrypoint file? I have researched this but have not come across anything that helps. As I research more, I wonder whether the problem is that the Kafka server is not yet up when the app tries to connect, whereas when I run the container in interactive mode it first spins up the container and then I manually run the Python script, by which time the broker is already up and running?

Any assistance would be appreciated.

You can ping the server from outside Docker with its IP address to make sure it is started. Your script does not start a Kafka server; you can refer to /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor to learn how to start one.

It is confusing me why this is happening; I have spent hours and hours on this. When my Dockerfile entry is CMD ["/bin/bash"] and I pass -it in the docker run command, I enter the container, and from within it I can run the Python script with no issue; it successfully connects to the broker. But when I set the ENTRYPOINT of the Dockerfile to the script itself, like ENTRYPOINT ["python3", "/opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp/main.py", "-c", "-p", "-i"], and do a docker run, it tells me it cannot connect to the broker. Do you know why this is happening and how I can resolve it?

1. You can add a ping command in your Dockerfile.txt and make sure the Kafka broker is brought up before you run your Python script.
2. How do you bring up the Kafka broker? You should make sure that the conn_str is the same as the one you set when bringing up the broker.
3. You can add some debug logs in the source code and debug it yourself, since the Kafka adaptor is open source: sources/libs/kafka_protocol_adaptor

My entrypoint is the Python file, so I can’t ping anything before I run it. I am using the cfg_azure.txt file with the libnvds_azure_proto.so library to send my messages directly to Azure IoT Hub, so I am not spinning up my own Kafka broker or server.

It does not really make sense to me why, when I run the container this way, it does not connect to the service and I get this error message:

Error: gst-library-error-quark: Could not configure supporting library. (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.cpp(402): legacy_gst_nvmsgbroker_start (): /GstPipeline:pipeline0/GstNvMsgBroker:nvmsg-broker:
unable to connect to broker library

It seems like a gst-plugin is causing the error. When I enter the container via CMD, cd into the correct folder, and run the Python file, everything works perfectly; but when I make the file itself the ENTRYPOINT and run the container like that, it cannot connect.

I have researched a lot around your comments above, but I cannot seem to find anything that aligns with the error messages I am receiving.

Hi @seankearsney, can you try giving full paths for cfg_azure.txt and libnvds_azure_proto.so in your entrypoint? Perhaps the relative paths are different when running from there.

hi @zhliunycm2

Thanks for your response. I have tried many variations, as per below:

-c /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp/cfg_azure.txt \
-p /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_azure_proto.so \

In addition I have researched quite a few other options which include

  • Removing --net=host
  • Adding --ipc=host

So I am not sure why this is happening. I will investigate further whether some paths are incorrect somewhere, but in general I always use absolute paths.

Any other ideas?

Hi @seankearsney, could you provide us with your project: the latest Dockerfile, the minimized code, the server you used, etc.? We can try to repro it in our env. If it can be reproduced in our environment, we can analyze this problem more quickly.

@seankearsney Thanks for providing us your project; we can repro the issue in our environment. It may take some time to resolve.

Thanks @yuweiw appreciate the support, happy to provide anything further so we can come to a joint resolution

It seems that there are some problems with passing paths in ENTRYPOINT mode.
Could you try writing a shell script? When building the Docker image, copy the script into it, and have the ENTRYPOINT run the shell script instead of passing paths to it.
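A sketch of what such a wrapper could look like; the file name iotapp.sh and all paths are assumptions based on earlier posts in this thread:

```shell
#!/bin/bash
# iotapp.sh -- sketch of a wrapper script baked into the image, so the
# ENTRYPOINT runs a shell (which resolves paths and environment) rather
# than receiving raw arguments. Paths are assumptions from this thread.
set -e

# Optional connectivity check before starting (replace with your host):
# ping -c 3 your-iot-hub-hostname.azure-devices.net

cd /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/apps/iotapp

exec python3 main.py \
    -c cfg_azure.txt \
    -p /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_azure_proto.so \
    -i /opt/nvidia/deepstream/deepstream-6.0/deepstream_python_apps/videos/video1.h264 \
    --no-display
```

In the Dockerfile you would then COPY the script into the image and point the entry at it, e.g. ENTRYPOINT ["/bin/bash", "/iotapp.sh"].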

Hi @yuweiw

Thanks for your guidance. I managed to get it working by amending the entrypoint to
CMD ["/bin/bash", "./iotapp.sh"]

Secondly, I amended the deployment.template.json file to contain references to the required volume parameters and runtime, and now it starts up perfectly.

Thanks very much for your guidance!