When I read through the “Two Days To A Demo” script, it seemed reasonable to attempt installing a DIGITS server on the Jetson Nano. I got stopped fairly early in the attempt: I did not want to try Docker, and I could not find a PCIe driver I could use to adapt the script to the Nano. So I would first like to know:
has anyone gotten the DIGITS server to (a) run on the Nano, and if so, (b) how was it installed, and (c) what are the training times when the server software is used for the examples?
I ended up using PyTorch and doing the retraining for a client with the PlantCLEF data. That took about 12 hours; I ran with a 4 GB swapfile and power mode 1 (5W) to keep the system from shutting down during training. The script suggested that 8 hours would suffice for this retraining example. Thus my next question:
How can one do it that fast on a Nano? Do they use a fan and power mode 0? A larger swapfile? Or perhaps a faster microSD card for the system install?
If this question cannot be answered, I would settle for more examples I could use to show my client training and retraining on the Nano, together with expected runtimes and configurations. Besides power mode and swapfile size, what other system configuration parameters can I tweak to improve retraining speed?
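For reference, here is roughly how I set up the swapfile and power mode on my Nano (the path and size are from my run; adjust as needed):

```shell
# Create and enable a 4 GB swapfile
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap

# Make it persistent across reboots
echo '/mnt/4GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab

# Select the 5W power mode (mode 1 on the Nano; mode 0 is 10W)
sudo nvpmodel -m 1
sudo nvpmodel -q   # verify the active mode
```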
Some community members have reported being able to install DIGITS on Jetson (I haven’t heard of on Nano in particular), but yes - technically DIGITS is only supported on PC/server platforms.
If your Nano is shutting down in 10W mode, that would indicate a power supply issue. Have you tried a DC barrel jack power adapter? That should improve the training performance. Disconnecting the display and running headless during training may help as well.
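For reference, the usual JetPack commands for maximum performance and for running headless are (these are standard Jetson commands, not specific to this training workload):

```shell
sudo nvpmodel -m 0    # 10W (MAXN) power mode
sudo jetson_clocks    # lock CPU/GPU clocks at their maximums

# Boot to console only (headless), freeing memory the desktop would use
sudo systemctl set-default multi-user.target
sudo reboot
```

To restore the desktop later, set the default target back to `graphical.target`.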
Another thing you can do is to store the dataset and your swap file on a USB3 SSD (or use a USB3 SATA adapter).
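A minimal sketch of that, assuming the SSD shows up as /dev/sda1 and is already formatted ext4 (device name and mount point are assumptions; check with `lsblk`):

```shell
# Mount the USB3 SSD
sudo mkdir -p /mnt/ssd
sudo mount /dev/sda1 /mnt/ssd

# Put the swapfile on the SSD instead of the microSD card
sudo fallocate -l 4G /mnt/ssd/swapfile
sudo chmod 600 /mnt/ssd/swapfile
sudo mkswap /mnt/ssd/swapfile
sudo swapon /mnt/ssd/swapfile

# Keep the training dataset on the SSD too, e.g. under /mnt/ssd/datasets/
```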
Yes, I used a 5V 4A power supply with the barrel jack. It took three or four retries and some experimentation with an ice pack before I tried using power mode 1 for the retraining. I passed along the suggestion of “headless retraining” to my client, thanks.
If you know of other successful training examples on the Nano, I would appreciate seeing them with timing and environment data.
I have tried the Hello World example, though, and Docker works:
nvidia@nvidia-desktop:~$ docker run ubuntu:18.04 /bin/echo "Hello world"
Unable to find image 'ubuntu:18.04' locally
18.04: Pulling from library/ubuntu
fbdcf4a939bd: Pull complete
d3463cc4abcf: Pull complete
4cf5b492942e: Pull complete
7799262edbd8: Pull complete
Digest: sha256:8d31dad0c58f552e890d68bbfb735588b6b820a46e459672d96e585871acc110
Status: Downloaded newer image for ubuntu:18.04
Hello world
nvidia@nvidia-desktop:~$
nvidia@nvidia-desktop:~$ docker pull nvidia/digits
Using default tag: latest
latest: Pulling from nvidia/digits
Digest: sha256:9b37921080efcedb93e1cd138b8981de14c65ca4cdb2dbcbb465d02a0fb6a513
Status: Image is up to date for nvidia/digits:latest
nvidia@nvidia-desktop:~$
Hi AK51, DIGITS isn’t officially supported on Jetson, and the Docker image from NGC is built for x86_64 (so it won’t run on ARM aarch64). You could try installing DIGITS from source on Jetson, but it is recommended to run DIGITS on a PC/server system.
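One way to check this before pulling or running an image is to compare the image's declared architecture against the Nano's (the inspect command requires the image to be present locally):

```shell
uname -m
# Jetson Nano reports: aarch64

docker image inspect nvidia/digits --format '{{.Architecture}}'
# an x86_64-only image reports: amd64
```

If the two don't match (amd64 vs. arm64/aarch64), the image will not run on the Nano.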
Thanks for your reply.
I just need simple, fast object detection on the Nano that outputs the coordinates, i.e. just tracking one object (a robot) when it appears.
I have done the training in Azure (TensorFlow, built with a Linux dockerfile) and exported an ONNX model. How can I plug it into your object detection Python code?
Or is there any suggestion?
Note: I did try the on-board object tracking sample, but it loses the object easily.
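If the ONNX model is an SSD-Mobilenet detector (the format the jetson-inference re-training tutorial exports), it can be loaded into detectNet roughly like this. The model path, labels file, and blob names below follow that tutorial's conventions and are assumptions here; an Azure/TensorFlow export may use different blob names, which you can check with a tool like Netron:

```python
import jetson.inference
import jetson.utils

# Load a custom ONNX detection model (blob names follow the
# jetson-inference pytorch-ssd convention; adjust for your export)
net = jetson.inference.detectNet(argv=[
    '--model=models/robot.onnx',      # hypothetical path
    '--labels=models/labels.txt',     # hypothetical path
    '--input-blob=input_0',
    '--output-cvg=scores',
    '--output-bbox=boxes'])

camera = jetson.utils.videoSource("csi://0")  # or "/dev/video0" for USB

while True:
    img = camera.Capture()
    detections = net.Detect(img)
    for d in detections:
        # Center coordinates of the detected object
        print(d.ClassID, d.Center, d.Confidence)
```

The first run will be slow while TensorRT builds an engine from the ONNX file; subsequent runs load the cached engine.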
Btw, is there a link for building DIGITS on the Nano? Thx.
robot.zip (51.3 MB)
Then how can one know which images will or won't work on Jetson?
It took me quite a while to figure this out. Please help others avoid repeating the same mistake AK51 and I made.