DeepStream unable to connect to Kafka in Docker

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
DS6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
TensorRT 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
535.171.04
• Issue Type (questions, new requirements, bugs)
FROM nvcr.io/nvidia/deepstream:6.4-gc-triton-devel

This image is used to containerize my app; however, I have also tried using deepstream_python_apps to run deepstream_test_4, and it couldn't connect to Kafka.
Two containers were used: one for Kafka and one from the DeepStream 6.4 image. The DeepStream container is able to ping the Kafka container.

%3|1715587438.323|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: localhost:9092/1: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)

This is the error I was facing

python3 deepstream_test_4.py -i /app/0320Daylight2.h264 -p /opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_kafka_proto.so --conn-str="kafka_docker;9092;testTopic" -s 0 --no-display

This is the command I use to run the Python app in the Docker container.
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration files content, the command line used and other details for reproducing)
• Requirement details (This is for new requirements. Including the module name - for which plugin or for which sample application - and the function description)

  1. could you share a whole log by referring to this topic? Thanks!
  2. please refer to my test command-line:
    python3 deepstream_test_4.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so --conn-str="localhost;9092" -t deepstream -s 0 --no-display

Hi, I can't find the log file /tmp/nvds/ds.log, or even the /tmp/nvds/ directory.

/tmp/
MLNX_OFED_LINUX.1.logs/ tmp.w62DKGXW8j/

This is the directory I got in my docker

please enter /opt/nvidia/deepstream/deepstream-7.0/sources/tools/nvds_logger, then execute ./setup_nvds_logger.sh, then start the application. please share the running log and the low-level log.
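For reference, inside the DS 6.4 container that sequence would look something like this (the deepstream-6.4 path matches the command-lines above, and /tmp/nvds/ds.log is the low-level log location mentioned earlier):

cd /opt/nvidia/deepstream/deepstream-6.4/sources/tools/nvds_logger
./setup_nvds_logger.sh

Then start the application and inspect the low-level log with:

cat /tmp/nvds/ds.log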

May 14 02:56:09 f6c81f5bdf41 python3: DSLOG:NVDS_KAFKA_PROTO: getaddrinfo returned error -3#012
May 14 02:56:09 f6c81f5bdf41 python3: DSLOG:NVDS_KAFKA_PROTO: Kafka connection successful#012
May 14 02:58:47 f6c81f5bdf41 python3: DSLOG:NVDS_KAFKA_PROTO: Kafka connection successful#012

This is what my log shows

terminal.log (35.5 KB)

%3|1715655598.038|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: localhost:9092/1: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT)
  1. from the log, the 9092 port is inaccessible. you need to use "--net=host" to start the server docker, or use "-p" to map the port.
  2. if it still doesn't work, I suggest using /opt/nvidia/deepstream/deepstream-7.0/sources/libs/kafka_protocol_adaptor to test kafka sending specifically. please refer to the readme: you need to use "make -f Makefile.test" to compile, then execute ./test_sample_proto_async (a kafka-python alternative is sketched below).
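As a lighter-weight alternative, a minimal kafka-python producer can verify sending from inside the DeepStream container. This is only a sketch: the host and topic names are taken from the command-lines above, and kafka-python would need to be pip-installed in the container first.

from kafka import KafkaProducer

# Connect to the same broker the DeepStream app targets ("kafka_docker"
# is the hostname from the --conn-str above; adjust to your network).
producer = KafkaProducer(bootstrap_servers="kafka_docker:9092")

# Send one test message and block until the broker acknowledges it;
# .get() raises a KafkaError if delivery fails.
metadata = producer.send("testTopic", b"hello from the DeepStream container").get(timeout=10)
print("delivered to partition", metadata.partition, "at offset", metadata.offset)
producer.flush()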

I think I solved that error; however, there is a new error about KafkaConsumer. I have Python code that grabs the metadata from the DeepStream Python app and pushes it to Postgres.
The steps I use to start my app:

  1. docker compose up for Kafka, ZooKeeper and Postgres.
  2. Then run the Docker container which contains the code to grab the metadata.
  3. Start the DeepStream Python app.
%4|1715668584.242|TERMINATE|rdkafka#producer-1| [thrd:app]: Producer terminating with 20 messages (35129 bytes) still in queue or transit: use flush() to wait for outstanding message delivery

I receive this error, which says 20 messages are still waiting, but my Python code wasn't able to grab the metadata. Do you have any idea what I should do?

from kafka import KafkaConsumer
import json

kafka_topic = "testTopic"
kafka_server = "kafka:9092"

# Read from the beginning of the topic; DeepStream publishes JSON payloads,
# so decode each message value on arrival.
consumer = KafkaConsumer(
    kafka_topic,
    bootstrap_servers=kafka_server,
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    value_deserializer=lambda m: json.loads(m.decode('utf-8')),
)

for message in consumer:
    print(message.value)

This is my Python code to receive the metadata.
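If the loop never prints anything, a quick sanity check against the same consumer can show whether the broker and topic are even visible. These are standard kafka-python calls, added here purely for debugging:

# Both calls query the broker, so they fail fast if it is unreachable.
print(consumer.topics())                           # topics visible on the broker
print(consumer.partitions_for_topic(kafka_topic))  # None if the topic does not exist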

could you share the fix? Thanks! it will help others.

this issue would be outside of DeepStream. please refer to this topic.

I changed to new images for ZooKeeper and Kafka.

I think it is the same topic, because the messages weren't pushed to Kafka. Do you have any idea about the issue? I don't have any errors about the connection.

  1. can the python code get even one metadata message?
  2. please make sure the server environment is fine first. I suggest setting up a kafka server by this method to check if the server can receive successfully (see the console-consumer example below).
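For example, the console consumer shipped with most Kafka images can confirm that the broker itself is receiving messages. This is only a sketch: the container name is taken from the compose output further below, and the script name and path vary between Kafka images:

docker exec -it kafka-kafka-1 kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic --from-beginning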

Yes, I could receive metadata while the Python code and the DeepStream app were running directly on localhost; that means I couldn't send or receive data once I migrated them to Docker.

  1. what do you mean by "running in localhost"? could you share the two docker start command-lines?
  2. In your case, it seems there are two docker containers: one is running the deepstream application to send kafka messages, so you can run a kafka server on the other side to check if the receiving is fine.
  3. if the receiving is fine, you can run your python code including KafkaConsumer instead of the kafka server. if receiving is abnormal, that should be related to the KafkaConsumer usage.
✔ Container kafka-kafka-1      Started         0.0s 
✔ Container kafka-zookeeper-1  Started         0.0s 
✔ Container kafka-postgres-1   Started         0.0s

These are the Docker containers I need for my backend.

docker run  --gpus all --network dock-db-test -it ae10514a69f6 bash

I run the DS image on the same network as the backend containers.

  • The code is run manually inside the container's bash shell instead:
python3 deepstream_test_4.py -i /app/0320Daylight2.h264 -p /opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_kafka_proto.so --conn-str="kafka_docker;9092;testTopic" -s 0 --no-display

This is the command line used to run the DeepStream Python app.

Before the DeepStream Python app was run, the Kafka Python code was started using:

docker run --network dock-db-test -it 0493a7f66f2e

This is the Dockerfile for the container that receives the Kafka metadata:

FROM python:3.10

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python3", "kafka_to_postgres.py"]

It runs the Python code I sent before. BTW, the Python code runs fine without any error; it is just waiting for metadata from Kafka to push to Postgres. Therefore, I assume the DeepStream app wasn't able to push metadata to Kafka, which led to the Python code not being able to receive it.
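The image is built the usual way before the docker run command above; the tag name here is hypothetical:

docker build -t kafka-to-postgres .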

As you can see, the containers are connected to the same network, and both the DeepStream and Python-code containers can ping Kafka and Postgres.

  1. please rule out a network issue first. for example, in the same container, can the deepstream application and the python code including KafkaConsumer work well together?
  2. if yes, that should be a network issue. can you add "--net=host" to start the two dockers and try again? "ping ok" can't guarantee that the ports are accessible (a minimal port check is sketched below).
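A plain TCP connect is enough to test a specific port from inside the DeepStream container, with no extra tooling; the hostname and port here are taken from the --conn-str above:

import socket

# Unlike ping (ICMP), this verifies the Kafka listener port itself is reachable.
with socket.create_connection(("kafka_docker", 9092), timeout=5):
    print("TCP connect to kafka_docker:9092 succeeded")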

Here is an update on my progress: I have successfully made it work. The solution was to use the bridge driver for the Docker network.
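In practice that likely corresponds to creating the user-defined network with the bridge driver before starting the containers, along these lines (network name taken from the docker run commands above):

docker network create --driver bridge dock-db-test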


Glad to know you fixed it. Thanks for sharing!
