Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: DS 6.4
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: TensorRT 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04
• Issue Type (questions, new requirements, bugs): bug
FROM nvcr.io/nvidia/deepstream:6.4-gc-triton-devel

This image is used to containerize my app. I also tried using deepstream_python_apps to run deepstream_test4, but it couldn't connect to Kafka. Two containers were used, one for Kafka and one built from the DeepStream 6.4 image; the DeepStream container is able to ping the Kafka container.
%3|1715587438.323|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: localhost:9092/1: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
This is the command I use to run the Python app in the Docker container.
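The exact command was not captured in the thread; for reference, a typical deepstream_test_4 invocation looks roughly like this (the stream path and connection string are illustrative assumptions, not the actual values used):

python3 deepstream_test_4.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so --conn-str "localhost;9092" -t testTopic

A conn-str of "localhost;9092" resolves to the DeepStream container itself, which is consistent with the connection-refused error above.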
Please enter /opt/nvidia/deepstream/deepstream/sources/tools/nvds_logger (the deepstream symlink points at your installed version, deepstream-6.4 here), then execute ./setup_nvds_logger.sh and start the application. Please share the running log and the low-level log.
%3|1715655598.038|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: localhost:9092/1: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT)
From the log, port 9092 is inaccessible. You need to use "--net=host" to start the server docker, or use "-p" to map the port.
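For example, either of the following (the image name is a placeholder to adapt to your setup):

docker run --net=host -it <kafka-image>
docker run -p 9092:9092 -it <kafka-image>

With --net=host the broker shares the host network, so localhost:9092 works from the host; with -p the broker port is published and clients must use the broker container's reachable address instead of localhost.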
If it still doesn't work, I suggest using /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor to test Kafka sending specifically. Please refer to the README: you need to use "make -f Makefile.test" to compile, then execute ./test_sample_proto_async.
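A minimal sketch of that check (see the adaptor's README for the exact broker/topic arguments):

cd /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor
make -f Makefile.test
./test_sample_proto_async

If this standalone producer delivers messages, the broker and network are fine and the problem lies in the application's configuration.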
I think I solved that error; however, there is a new error involving the KafkaConsumer. I have Python code that grabs the metadata from the DeepStream Python app and pushes it to Postgres.
The steps I use to start my app:
1. docker compose up for Kafka, ZooKeeper, and Postgres.
2. Run the Docker container with the code that grabs the metadata.
3. Start the DeepStream Python app.
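Since Kafka runs under docker compose here, one setting worth checking (the compose file itself was not shared, so this is an assumption) is the broker's advertised listener: it must be an address the other containers can resolve. With a bitnami/kafka image, for example, the relevant environment entries look like:

  environment:
    KAFKA_CFG_LISTENERS: PLAINTEXT://0.0.0.0:9092
    KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092

If the broker advertises localhost instead, clients in other containers are redirected to their own loopback and fail exactly as in the earlier log.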
%4|1715668584.242|TERMINATE|rdkafka#producer-1| [thrd:app]: Producer terminating with 20 messages (35129 bytes) still in queue or transit: use flush() to wait for outstanding message delivery
I receive this error saying 20 messages are still queued, and my Python code isn't able to grab the metadata. Do you have any idea what I should do?
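One quick way to isolate the broker from DeepStream is a standalone Python producer; a minimal sketch, assuming kafka-python is installed and the broker is reachable as kafka:9092:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka:9092")
# send() returns a future; get() raises if the broker never acknowledges delivery
future = producer.send("testTopic", b'{"test": "message"}')
record = future.get(timeout=10)
print(record.topic, record.partition, record.offset)
producer.flush()

If this delivers, the broker is fine and the DeepStream producer's connection string is the likely suspect.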
from kafka import KafkaConsumer
import json

kafka_topic = "testTopic"
kafka_server = "kafka:9092"
# Start from the earliest offset so messages produced before startup are not missed;
# deserialize the JSON payloads that DeepStream's msgbroker emits.
consumer = KafkaConsumer(kafka_topic, bootstrap_servers=kafka_server,
                         auto_offset_reset='earliest', enable_auto_commit=True,
                         value_deserializer=lambda m: json.loads(m.decode('utf-8')))
for message in consumer:
    print(message.value)
I think it is the same topic; the app just wasn't pushing to Kafka. Do you have any idea about the issue? I don't get any connection errors.
Please make sure the server environment is fine first. I suggest setting up a Kafka server by this method to check whether the server can receive successfully.
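One common way to do that check (an assumption here, since the linked method was not captured) is Kafka's own console consumer run inside the broker container:

docker exec -it <kafka-container> kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic --from-beginning

If the DeepStream app's messages show up there, the producing side is fine and only the Python consumer needs debugging.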
Yes, I could receive metadata while the Python code and deepstream_app were running on localhost. That means I can't send or receive data once I migrate them to Docker.
What do you mean by "running in localhost"? Could you share the two docker start command lines?
In your case there seem to be two Docker containers; one runs the DeepStream application that sends the Kafka messages, and on the other side you can run a Kafka server to check whether receiving is fine.
If receiving is fine, run your Python code including KafkaConsumer in place of the Kafka server; if receiving then becomes abnormal, the problem should be related to the KafkaConsumer usage.
Then this is the command line to run the DeepStream Python app.
Before the DeepStream Python app is run, the Kafka Python code is started with:
docker run --network dock-db-test -it 0493a7f66f2e
This is the Dockerfile for the container that receives the Kafka metadata:
FROM python:3.10
WORKDIR /app
# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "kafka_to_postgres.py"]
I sent the Python code before. BTW, the Python code runs fine without any error; it is just waiting for metadata from Kafka to push to Postgres. Therefore, I assume the DeepStream app isn't able to push metadata to Kafka, which leaves the Python code with nothing to receive.
As you can see, the containers are on the same network, and from both the DeepStream and Python-code containers I can ping kafka and postgres.
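For completeness, the DeepStream container also has to join the same user-defined network and address the broker by its service name; a sketch of such a start command (the actual one was not shared):

docker run --gpus all --network dock-db-test -it nvcr.io/nvidia/deepstream:6.4-gc-triton-devel

and inside it the app should use a connection string like "kafka;9092" rather than "localhost;9092".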
Please rule out a network issue first. For example, if they run in the same container, do the DeepStream application and the Python code including KafkaConsumer work well together?
If yes, it should be a network issue. Can you add "--net=host" when starting the two dockers and try again? "ping ok" doesn't guarantee that specific ports are accessible.
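A quick way to test a specific port rather than just ping, from inside the DeepStream container (a Python sketch, since nc may not be installed in the image):

import socket

# connect_ex returns 0 when kafka:9092 accepts a TCP connection, an errno otherwise
print(socket.socket().connect_ex(("kafka", 9092)))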