Tutorials in print

I much prefer the written tutorials over the new Docker format. That could be because I do not really understand Docker or how it works. Please do not remove the written form of the tutorials! I am old (63) and just prefer the written form and find them easier to follow.

8-Dale

May I ask where the written form of the tutorials you mentioned can be found?

Thanks

https://courses.nvidia.com/courses/course-v1:DLI+C-RX-02+V1/course/#block-v1:DLI+C-RX-02+V1+type@chapter+block@b2e02e999d9247eb8e33e893ca052206

8-Dale

Hi @hybotics.wy, this course is the same as it was before, except that it now uses a container to deliver the content instead of an SD card image. All of the written documentation is still there, and we won't be removing it.

I did figure the container out and was able to add some data. However, when I went back in to add more data, I could not find the data I had previously entered.

How do I access previously entered data??

8-Dale

If by adding data you mean you saved images through the data collection parts of the notebook, it should be under ~/nvdli-data on your Jetson. The ~/nvdli-data directory gets mounted to /nvdli-nano/data inside the container when you start the container with this command from the tutorial:

sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:<tag>

So, inside the container, save your data to /nvdli-nano/data and it will appear under ~/nvdli-data on your Jetson (outside of the container).
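If you want to see the bind mount in action before running the course container, here is a minimal sketch (not from the course; the busybox image and test filename are just placeholders) that writes a file through a throwaway container and reads it back on the Jetson:

# write a test file into the mounted directory from inside a container
sudo docker run --rm --volume ~/nvdli-data:/nvdli-nano/data \
    busybox sh -c "echo hello > /nvdli-nano/data/mount-test.txt"

# back on the Jetson, the file shows up in the mounted directory
cat ~/nvdli-data/mount-test.txt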

There is no --volume ~/nvdli-data reference in the docker_dli_run command I got from the tutorial. My docker_dli_run command has "sudo docker run --runtime nvidia -it --rm --network host --volume /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.6.1" in it.

Hi @hybotics.wy, here is the command copied and pasted from the DLI documentation, with the --volume argument included:

# create a reusable script
echo "sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.6.1" > docker_dli_run.sh

# make the script executable
chmod +x docker_dli_run.sh

# run the script
./docker_dli_run.sh

I want to have my data on my computer where I can get to it anytime. I am sure this can be done, but I do not know exactly how. I think it has to do with binding to a directory, but I am not sure how to implement it. I have just started learning about Docker.

8-Dale

Yep, you are correct, and mounting/binding a directory is done with the --volume argument. So if you add --volume ~/nvdli-data:/nvdli-nano/data to your docker_dli_run.sh script, the ~/nvdli-data directory on your Jetson will be mounted into the container (and the training images will be saved there).
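As a quick check once the container is running with that extra --volume line (a minimal sketch; the test filename is just a placeholder), anything you create under /nvdli-nano/data inside the container should show up under ~/nvdli-data outside:

# inside the container: confirm the mount point exists and create a test file
ls -la /nvdli-nano/data
touch /nvdli-nano/data/check-from-container.txt

# on the Jetson (outside the container): the same file appears here
ls -la ~/nvdli-data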


Very cool! Thank you. At least I was on the right track. I probably would have figured it out eventually. ;)