NeMo Toolkit on Jetson Nano

I’ve been wanting to load the NeMo Toolkit onto the Jetson Nano, but I seem to be running into issues downloading the container from NGC. I found it on GitHub and followed the install instructions, but received all kinds of errors. (I will give it another go on a fresh image tomorrow and post the error log.) Any advice on what can be done to smooth things along and maybe get a successful install?


Are you hitting a permission issue with docker.sock?
If yes, you can open up its permissions as a workaround:

sudo chmod 777 /var/run/docker.sock

Although I don’t have much experience with this particular Docker image, it’s recommended to check first whether it supports the L4T (Linux4Tegra) system.

If it doesn’t support an L4T-based OS, you will not be able to use it on Jetson.
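One hedged way to check this before pulling: Jetson boards are aarch64, so the image must publish an arm64 variant. The image name below is a placeholder, not a confirmed NGC tag; substitute whatever tag you are actually trying to pull.

```shell
# On the Jetson itself: confirm the L4T release and CPU architecture.
cat /etc/nv_tegra_release   # prints the L4T/JetPack release string
uname -m                    # Jetson Nano reports aarch64

# From any machine with Docker: list the platforms an image was built for.
# (Replace the image/tag with the one you are trying to use.)
docker manifest inspect nvcr.io/nvidia/nemo:latest | grep -A2 '"platform"'
```

If the manifest lists only `amd64`, the container will not run on the Nano.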

Note that the above workaround gives root permissions to all local users (and potentially remote ones).

Thank you, I’m going to try this today. That is the error I’m getting.

Can I make a suggestion?
I love this board. I have messed around on Raspberry Pis for years and remember the community’s growing pains well.
The Jetbot image is pretty well put together, but it feels like the base JetPack image was assembled in a rush. If there were a total of 4 images for people to try, I think you would get faster growth on the board:
1: Base image – already available, just needs clean-up. Ship it with things like PyTorch and the other major frameworks pre-installed.
2: Jetbot – already available.
3: Visual – all the vision AI samples ready to compile, with all needed dependencies installed.
4: Speech – models for developing ASR, NLP, and TTS systems. Think a train-and-use, on-device AI assistant SDK.
Or even make SDK packages that could be loaded by the SDK Manager.
Just an idea.

And I know the Jarvis SDK is meant to come out at some point. I’ve been signed up for its early access for about a month and a half now, but there has been ZERO talk about it.


Thanks for the warning. It’s not something I’m worried about, though. Locally it’s just me, and I have other safeguards in place for internet access.

Locally it’s probably fine, so long as you trust yourself and anybody who has ever connected to your local network not to accidentally click on anything silly (not my case). Alternatively, you could put a user in the docker group instead.
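A minimal sketch of the docker-group approach (assuming the Docker package already created the `docker` group, which it does on standard installs):

```shell
# Revert the permissive mode if you applied the chmod 777 workaround earlier:
sudo chmod 660 /var/run/docker.sock

# Add your user to the docker group, then log out and back in
# (or run `newgrp docker`) for the change to take effect:
sudo usermod -aG docker $USER
```

Note this is convenience, not isolation: docker-group membership is still effectively root, it just avoids exposing the socket to every local account.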

Or you can just use “sudo docker”, since docker (unless it’s a rootless install) gives you root anyway.

You might want to “sudo apt install ufw && sudo ufw enable” to cut off all incoming connections.

SSH is open by default to password authentication with unlimited retries, and some images ship with Jupyter running on port 8888. Jupyter can give you a shell, which in this case can then give you root.
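If that worries you, a hedged sketch of tightening sshd (assuming you have installed an SSH key first, otherwise disabling password logins will lock you out):

```shell
# In /etc/ssh/sshd_config, set:
#   PasswordAuthentication no   # key-based logins only
#   MaxAuthTries 3              # cap authentication attempts per connection
# Then reload the daemon so the settings take effect:
sudo systemctl reload ssh
```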

Just be careful. Even if you’re on a “trusted” local network, it doesn’t hurt to put up additional barriers.

Update on the subject:

Hey, thanks for the link! Hopefully it will work. At this point I think you can deploy finished models on the Nano, but it doesn’t have the juice for training.