Issue with DLI Docker container

I was following the course for the Jetson Nano and ran into the issue below:
xxxx@xxxx-desktop:~$ ./docker_dli_run.sh
"docker run" requires at least 1 argument.
See 'docker run --help'.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
./docker_dli_run.sh: line 2: --device: command not found

I guess it should drop me into the container. Did I miss anything?

I removed Docker, reinstalled it, and got the following:
xxxxx@xxxxx-desktop:~$ echo "sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4" > docker_dli_run.sh

xxxxx@xxxxx-desktop:~$ chmod +x docker_dli_run.sh

xxxxx@xxxxx-desktop:~$ ./docker_dli_run.sh

docker: Error response from daemon: error gathering device information while adding custom device "/dev/video0": no such file or directory.

ERRO[0000] error waiting for container: context canceled

Hi,

/dev/video0 is usually used for a USB camera.
The error indicates you don’t have a camera connected to the device.

You can launch the container without the --device /dev/video0 option, but this will prevent you from running the live camera demo.
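
For reference, a minimal sketch of the run command with the camera device omitted (paths and tag taken from the script above):

    sudo docker run --runtime nvidia -it --rm --network host \
        --volume ~/nvdli-data:/nvdli-nano/data \
        --volume /tmp/argus_socket:/tmp/argus_socket \
        nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4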

Thanks.

I received the same error,

"docker run" requires at least 1 argument.
See 'docker run --help'.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
./docker_dli_run.sh: line 2: --device: command not found

However, I am using a CSI camera (Raspberry Pi) and reinstalling gives the same error. My camera does connect and works with other programs. I don't know why this error occurs. Any possible reasons? @AastaLLL

Following the course [Getting Started with AI on Jetson Nano].
https://courses.nvidia.com/courses/course-v1:DLI+S-RX-02+V2/courseware/b2e02e999d9247eb8e33e893ca052206/63a4dee75f2e4624afbc33bce7811a9b/?activate_block_id=block-v1%3ADLI%2BS-RX-02%2BV2%2Btype%40sequential%2Bblock%4063a4dee75f2e4624afbc33bce7811a9b

Can you try running this command directly as opposed to putting it in a script?

sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4

The command worked running it directly. Thank you.

There appears to be a missing "\" after the line that ends with "socket" in the example shown. Adding the missing "\" allows the script to run as expected.

Ex:

echo "sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4" > docker_dli_run.sh

Cheers,

Jon

Thanks for pointing that out, @jonnymovo - we have corrected that missing backslash in the DLI course documentation.

I had a similar issue, but this doesn't work for me.

I use a 12 MP IMX477 from Arducam (B0249).

I get an error like this:
docker: Error response from daemon: error gathering device information while adding custom device "/dev/video0": no such file or directory.

marek@jetson-marek:~$ ./docker_dli_run.sh
"docker run" requires at least 1 argument.
See 'docker run --help'.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container
marek@jetson-marek:~$
marek@jetson-marek:~$ sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-no/data --volume /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4
docker: Error response from daemon: error gathering device information while adding custom device "/dev/video0": no such file or directory.

Hi @mar.kalemba, since you are using a MIPI CSI camera, remove the --device /dev/video0 option from your command line when you run the container.

Thank you! Unfortunately I still have a problem:

I tried other programs, but they also show "no device found".
Do you know a simple method to check the connection to the camera?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Outside of the container, try running nvgstcapture-1.0
It should show the camera feed if it is working.
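
A quick sanity check from the host might look like this (a minimal sketch; v4l2-ctl is only available if the v4l-utils package is installed):

    # list any video device nodes the kernel has created
    ls -l /dev/video*
    # enumerate detected cameras (may require: sudo apt-get install v4l-utils)
    v4l2-ctl --list-devices
    # preview the camera through the Jetson ISP path
    nvgstcapture-1.0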

Had you installed a driver for the IMX477 camera?

I'm using a Pi v2 camera for the AI tutorial series. If I remove this option, does that mean I won't be able to use my camera with JupyterLab?

It works for me…
Here are the results:

nvidia@Jason:~$ sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-no/data --volume /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4
allow 10 sec for JupyterLab to start @ http://192.168.55.1:8888 (password dlinano)
JupterLab logging location: /var/log/jupyter.log (inside the container)
root@Jason:/nvdli-nano#

As I have a Pi Camera, I used the following:

$sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-no/data --volume /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4

Note: the example in the course shows using "--device /dev/video0" even for the MIPI CSI camera. It just says to add the socket line:
--volume /tmp/argus_socket:/tmp/argus_socket

Cheers,

Jon

Hi @jonnymovo, if you have a MIPI CSI camera connected, typically a /dev/video0 device will be created for it (but this V4L2 device for the CSI camera will be without ISP applied - i.e. in raw format, without debayering). So the MIPI CSI cameras should not typically be used through the V4L2 device, although it is fine to have it on the Docker run command. In the notebooks for this container, the CSI cameras are indeed used through the GStreamer nvarguscamerasrc element and not the V4L2 way.

Since the user above was having trouble with their IMX477 camera driver (which hadn't created the /dev/video0 node), I suggested they remove that option from the Docker run command so they could start the container.
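
For reference, a minimal sketch of previewing a CSI camera through the nvarguscamerasrc path from the host (the resolution and framerate caps here are assumptions; adjust them to a mode your sensor supports):

    # preview the CSI camera via Argus/ISP rather than the raw V4L2 node
    gst-launch-1.0 nvarguscamerasrc ! \
        'video/x-raw(NVMM), width=1280, height=720, framerate=30/1, format=NV12' ! \
        nvoverlaysink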

Hi @dusty_nv ,

I'm using an RPi Camera V2. I tried everything you mentioned above:

  • run the command without the "--device …" part
  • run the command exactly as it appears in the course
  • run the command directly (instead of using a script)

My last command was this:

sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4

But I’ve got this error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused “process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=30961 /var/lib/docker/overlay2/7085184a64ba949a08a40379acfa23ef2147a13d9d55223a883f1d9ab89eb7ed/merged]\\nnvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/7085184a64ba949a08a40379acfa23ef2147a13d9d55223a883f1d9ab89eb7ed/merged/usr/lib/aarch64-linux-gnu/libnvidia-fatbinaryloader.so.440.18: file exists\\n\""”: unknown.

Can anybody help me out?

Thanks in advance :)

I do not know if it helps, but I tried to find the lib:

sudo find / -iname libnvidia-fatbinaryloader.so*
[sudo] password for zsirosb:
find: ‘/run/user/1000/gvfs’: Permission denied
/usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.440.18
/var/lib/docker/overlay2/5fe58b3d8c3f765f4f3a34e4d46647a2a9feca44a7d30e45f20ae1856f61c7dc/diff/usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.440.18
/var/lib/docker/overlay2/5fe58b3d8c3f765f4f3a34e4d46647a2a9feca44a7d30e45f20ae1856f61c7dc/diff/usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.32.4.4
/var/lib/docker/overlay2/5fe58b3d8c3f765f4f3a34e4d46647a2a9feca44a7d30e45f20ae1856f61c7dc/diff/usr/lib/aarch64-linux-gnu/libnvidia-fatbinaryloader.so.440.18
/var/lib/docker/overlay2/5fe58b3d8c3f765f4f3a34e4d46647a2a9feca44a7d30e45f20ae1856f61c7dc/diff/usr/lib/aarch64-linux-gnu/libnvidia-fatbinaryloader.so.32.4.4

But I do not get the point…

Hi @zsirosbence, generally you would see that kind of error if you were trying to run a version of the container that was not built against the version of JetPack-L4T you are running. If you run cat /etc/nv_tegra_release, which version of L4T does it report you are running? It should match the one in the container tag (i.e. the r32.4.4 part)
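
For example, the check might look like this (the tag below is the r32.4.4 one used earlier in this thread; substitute whichever tag matches your own L4T release):

    # check the L4T release on the host
    cat /etc/nv_tegra_release
    # e.g. "# R32 (release), REVISION: 4.4, ..." -> use a container tag ending in r32.4.4
    sudo docker pull nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4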

Hi, @dusty_nv

Thanks for your answer. It turned out that I had installed a different L4T version. Now everything is good.

Thanks!!

Hi, @dusty_nv
I had the same issue, so I ran cat /etc/nv_tegra_release

I got # R32 (release), REVISION: 5.1, GCCID: 26202423, BOARD: t210ref, EABI: aarch64, DATE: Fri Feb 19 16:45:52 UTC 2021

And I tried to run

echo "sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.1" > docker_dli_run.sh

chmod +x docker_dli_run.sh

./docker_dli_run.sh

again.

and then I got: docker: Error response from daemon: manifest for nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.1 not found: manifest unknown: manifest unknown.

please help…