Deepstream-retail-analytics : docker.service failed because the control process exited with error code

• Hardware Platform (Jetson / GPU): nano
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only): NVIDIA-SMI 540.2.0
• Issue Type (questions, new requirements, bugs):

GitHub Link:

When I try to install:

To install the NVIDIA Container Toolkit, follow these instructions:

  1. Set up the docker repository

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

  2. Install Docker

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

  3. Install nvidia-docker

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

It gives an error at sudo systemctl restart docker:

ter/deepstream-retail-analytics$ sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

The doc above is a little old. Please refer to this link for how to install the NVIDIA Container Toolkit.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Sorry to say, it does not work.

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

sudo nvidia-ctk runtime configure --runtime=docker
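
On success, nvidia-ctk rewrites /etc/docker/daemon.json to register the nvidia runtime, and docker must be restarted to pick it up. A minimal sketch of what that file typically looks like afterwards; the runtime path shown is the usual default and may differ on your install, and the example writes to /tmp so it is self-contained:

```shell
# Sketch of the daemon.json that `nvidia-ctk runtime configure` typically
# produces (written to /tmp here instead of /etc/docker for illustration).
cat > /tmp/daemon.json.example <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
# Sanity check: the nvidia runtime entry should be present in the real
# /etc/docker/daemon.json before restarting docker.
grep -q '"nvidia"' /tmp/daemon.json.example && echo "nvidia runtime entry found"
```

If the real /etc/docker/daemon.json contains malformed JSON after editing, docker.service will fail to start with exactly the "control process exited with error code" symptom, so this is worth checking first.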

When I executed sudo systemctl restart docker:
$ sudo systemctl restart docker
Failed to restart docker.service: Unit docker.service not found.

What is the device model? Orin Nano? Could you share a whole log? I'm wondering if there is any error tip.

It's an Orin Nano:

Please find log file here.
logfile_retailanalytics.txt (12.3 KB)

Since you are using Jetson and DeepStream 7.0, please refer to this link for how to install Docker and the nvidia-container-toolkit.

I tried this one (Ubuntu | Docker Docs) as you mentioned, but I am getting the same error.

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

Log :
logfile_retailanalytics.txt (20.8 KB)

nvcr.io/nvidia/deepstream:6.1-devel is a dGPU docker image. From the description in the doc, the code was not tested on Jetson. Currently there are some suggestions:

  1. Please ignore the step "systemctl restart docker", which is not mentioned in the official installation doc.
  2. Please refer to the new retail sample.

I actually followed your advice, but unfortunately, during the Docker installation, DeepStream 7.0 got corrupted. I kindly request that you check if it’s working on Jetson or not. I’ve spent a valuable amount of time on this issue. Unfortunately, I lost many important files and data. Thank you.

What do you mean by "DeepStream 7.0 got corrupted"? Do you mean DeepStream on the host can't work? Is there any error? The DeepStream docker image includes DeepStream; you don't need to install DeepStream in the docker container.

DeepStream on the host is not working. After installing Docker, it returns an error, and when I turn off the Jetson device and turn it back on, it seems like the OS crashes. Right now, due to time limitations and lack of information, I have given up on it. Sorry!

Hello Sir,

Thank you very much once again, sir. I am starting the same project again.

I wrote the following script, and the script is running. I want to send all data to a database using the Kafka protocol, similar to the concept used in retail analytics. I have a question:

Is it possible to implement the Kafka protocol and other things without using docker?

YES. Please refer to this doc for how to install DeepStream. Please refer to deepstream-test4 for how to send messages to a broker.
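
For reference, deepstream-test4 ships with DeepStream and takes the message-broker adaptor as command-line arguments. The sketch below only prints a hypothetical invocation with the Kafka adaptor; the install path, input file, broker address, and topic name are all assumptions to adjust for your setup:

```shell
# Hypothetical deepstream-test4 invocation with the Kafka protocol adaptor.
# All paths and connection details below are assumptions, not tested values.
PROTO_LIB=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
CONN_STR="localhost;9092"   # host;port of the Kafka broker
TOPIC=detections
echo "deepstream-test4-app -i sample_720p.h264 -p $PROTO_LIB --conn-str \"$CONN_STR\" -t $TOPIC"
```

This runs directly on the host once DeepStream is installed, with no docker involved, which answers the question above.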

Thank you very much. It's very useful.

Hello Sir, I implemented docker using the steps below:

To set up the Confluent Platform for Kafka and kSQL, follow these steps:

Step 1: Download and Run Docker Compose

  1. Open a terminal window (not inside the DeepStream container) and run the following command to download the Docker Compose file:

    wget https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.2.1-post/cp-all-in-one/docker-compose.yml
    
  2. Start the services defined in the Docker Compose file:

    docker-compose up -d
    
  3. Verify that all containers started successfully:

    docker ps
    

Step 2: Create a Kafka Topic

  1. Execute the following command to access the Kafka broker container:

    docker exec -it broker /bin/bash
    
  2. Within the container, create a Kafka topic named "detections":

    kafka-topics --bootstrap-server "localhost:9092" --topic "detections" --create
    

    Alternatively, you can create the topic through the Confluent Control Center by navigating to Cluster > Topics > Add Topic.

Step 3: Set Up kSQL Stream

  1. To access the kSQL CLI, run:

    docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
    
  2. Once the CLI is active, copy and paste the following SQL command to create the stream:

    CREATE STREAM DETECTIONS_STREAM (
        messageid VARCHAR,
        mdsversion VARCHAR,
        timestamp VARCHAR,
        object STRUCT<
            id VARCHAR,
            speed INT,
            direction INT,
            orientation INT,
            detection VARCHAR,
            obj_prop STRUCT<
                hasBasket VARCHAR,
                confidence DOUBLE>,
            bbox STRUCT<
                topleftx INT,
                toplefty INT,
                bottomrightx INT,
                bottomrighty INT>>,
        event_des STRUCT<
            id VARCHAR,
            type VARCHAR>,
        videopath VARCHAR
    ) WITH (
        KAFKA_TOPIC='detections', 
        VALUE_FORMAT='JSON', 
        TIMESTAMP='timestamp',
        TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ss.SSS''Z'''
    );
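
A sample JSON payload matching the stream schema above is handy for testing; the field values below are made up for illustration, and the commented kafka-console-producer line is one possible way to send it from inside the broker container:

```shell
# Example message matching DETECTIONS_STREAM (illustrative values only).
cat > /tmp/detection.json <<'EOF'
{"messageid":"1","mdsversion":"1.0","timestamp":"2024-01-01T00:00:00.000Z","object":{"id":"0","speed":0,"direction":0,"orientation":0,"detection":"person","obj_prop":{"hasBasket":"noBasket","confidence":0.89},"bbox":{"topleftx":100,"toplefty":100,"bottomrightx":200,"bottomrighty":300}},"event_des":{"id":"e1","type":"entry"},"videopath":""}
EOF
# Inside the broker container it could be produced to the topic with e.g.:
#   kafka-console-producer --bootstrap-server localhost:9092 --topic detections < /tmp/detection.json
grep -q '"hasBasket"' /tmp/detection.json && echo "payload ok"
```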
    

Note: when I run docker exec -it ksqldb-cli ksql http://ksqldb-server:8088, it returns an error:

ERROR
Remote server at http://ksqldb-server:8088 does not appear to be a valid KSQL
server. Please ensure that the URL provided is for an active KSQL server.

The server responded with the following error:
Error issuing GET to KSQL server. path:/info
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException:
Connection refused: ksqldb-server/172.18.0.7:8088
Caused by: Could not connect to the server. Please check the server details are
correct and that the server is running.
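
"Connection refused" here usually just means the ksqldb-server container has not finished starting. A simple wait-and-retry sketch, assuming the container names from the Confluent cp-all-in-one compose file; `true` stands in for the real curl probe so the example is self-contained:

```shell
# Wait until the ksqlDB server answers before starting the CLI.
# Replace `true` with the real probe, e.g.:
#   docker exec ksqldb-server curl -sf http://localhost:8088/info >/dev/null
probe() { true; }
for i in 1 2 3 4 5; do
  if probe; then echo "ksqldb-server is ready"; break; fi
  sleep 5
done
```

Once the probe succeeds, the same `docker exec -it ksqldb-cli ksql http://ksqldb-server:8088` command should connect.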


The error was solved; docker exec -it ksqldb-cli ksql http://ksqldb-server:8088 now works.

thank you


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.