Nano SD-Card Image 32.3.1

Does this image include the full JetPack AI software stack?
How does one get an inventory of what is on the 32.3.1 image?

If so, how do we access JupyterLab locally or via remote desktop (VNC)?
JupyterLab is not currently a running process (checked with ps -A).

How do I start JupyterLab, and where is it? I have tried 192.168.55.100 and .1, and nothing comes up.
It’s not in the PATH in the stock startup environment.

Where are the Python AI libraries? Is PyTorch not installed?
Keras not installed? scikit-learn not installed?
Where is virtualenv?

And where is Pandas?

My objective is to run the Nano headless and work with notebooks remotely from my Mac, so I can develop machine learning scripts in Python that run much faster than they would on the Mac itself.

Thanks in advance

It comes with everything you would get if you went through the SDK Manager process, but keep in mind that some key packages for basic file management are missing.
Sadly, that is it.
I’m sure in time there will be a better base image, once people really start asking for one. The Raspberry Pi community was painful to grow with.

FYI, you can use the “unminimize” command to install the missing packages from Ubuntu 18.04 LTS. You can also just “apt install” just about any Ubuntu package out of the box. If you want an entirely different image, NVIDIA’s documentation and scripts make it fairly simple to supply your own aarch64 rootfs.
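For example, restoring the stripped-out Ubuntu packages and then installing extras looks roughly like this (the extra packages named here are just illustrative):

    # Reinstall the documentation, locales, and other packages removed from the minimized image
    sudo unminimize

    # After that, regular Ubuntu packages install as usual
    sudo apt update
    sudo apt install htop mc    # example packages; pick whatever you need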

I have tested Ubuntu Base 18.04 LTS and it works, all the way down to OTA updates, but you should be familiar with that rootfs and the Linux for Tegra documentation before starting. At a minimum you must apply the kernel, modules, and (highly recommended) Nvidia’s software.
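In rough outline, the flow with the L4T driver package looks something like the following (paths assume the standard Linux_for_Tegra layout and the rootfs tarball filename is illustrative; follow the Linux for Tegra docs for your exact release):

    # Unpack the Ubuntu Base rootfs into the L4T tree
    sudo tar xpf ubuntu-base-18.04-base-arm64.tar.gz -C Linux_for_Tegra/rootfs/

    # Apply NVIDIA's kernel, modules, and BSP software on top of it, then flash or build the SD image
    cd Linux_for_Tegra
    sudo ./apply_binaries.sh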

I believe it’s enabled in the DLI Nano image, but not on the stock image by default. Most people probably don’t want it running by default for security reasons; however, if you wish to install it, the procedure is the same as on Ubuntu for x86.
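For example, something along these lines should get JupyterLab installed and reachable from another machine (the flags are typical, not Nano-specific; adjust to taste):

    # Install JupyterLab for the default python3 (lands in ~/.local/bin with --user)
    pip3 install --user jupyterlab

    # Run it headless, listening on all interfaces so a remote browser can connect
    ~/.local/bin/jupyter lab --no-browser --ip=0.0.0.0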

All are installable directly through apt or pip, or are otherwise available. I believe the DLI Nano image includes most, if not all, of these. Otherwise the instructions to install them are either the same as on Ubuntu x86, or can be found by searching this forum (or searching it with Google).
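As a rough sketch (package names are the standard Ubuntu/PyPI ones; PyTorch and TensorFlow need the Jetson-specific CUDA builds NVIDIA links to on this forum rather than the stock PyPI packages):

    # CPU-only libraries install the usual way
    sudo apt install python3-pip python3-pandas python3-sklearn python3-virtualenv
    pip3 install --user keras

    # PyTorch/TensorFlow: use the aarch64/CUDA builds NVIDIA provides for Jetson,
    # not the generic wheels from PyPI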

Thanks all for the responses.

I did install JupyterLab (and Jupyter) using pip. It’s running and I can access it remotely via Ethernet.

My latest challenge is to build the AI environment using Anaconda. I have done that before on an IoT device (a Raspberry Pi).

My big question now is how to unite the OpenCV, PyTorch, and TensorFlow libraries that are already on this device from JetPack.
The reason I am inclined towards Anaconda is that it has been the most hassle-free environment for Python and data science on my other systems, and I am quite comfortable with it.

Can’t I just install the latest open-source distributions of OpenCV, PyTorch, and TensorFlow rather than trying to wire up the disparate installs? Anaconda creates a default environment called ‘base’ where everything new gets installed and works.

Good hunting.

There is this port:

https://github.com/Archiconda

However, I don’t think any of its packages are CUDA-aware. You will have to find wheels for pip here on the forum or build them from source. I don’t use Anaconda and am not sure what the procedure is for building packages for it.
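Installing one of those prebuilt wheels typically looks like this (the filename below is purely illustrative; use the actual wheel linked from the forum thread for your JetPack version):

    # Install a downloaded Jetson-specific PyTorch wheel
    pip3 install --user ./torch-1.3.0-cp36-cp36m-linux_aarch64.whl    # filename is illustrative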

I am not really sure what you mean by ‘unite’ the packages. If they are installed and you run python3, can you import them? Most of your code should work fine with few changes if the same packages are installed.
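A quick sanity check from the shell (this assumes the JetPack builds of OpenCV and PyTorch are present):

    # Verify the preinstalled libraries import and that PyTorch can see the GPU
    python3 -c "import cv2; print(cv2.__version__)"
    python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"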

@mdegans,

Thanks for the response.

I found that all the packages are installed; I can see them in pip3 freeze and can import them from the Python command line.

One thing I wanted to do was install Pandas, but that never seemed to succeed.

The reason I was looking at Archiconda is that it brings all the packages I am familiar with from my ML work.
However, if they are not CUDA-aware, it isn’t going to work at all.

What I am finding is that advanced developers are making use of Docker containers to build custom application environments. I am familiar with Docker, so eventually I may be doing it that way.

If NVIDIA had invested in Archiconda, they would have solved a lot of developer issues, especially for the data science community. I believe a lot of data science students and hobbyists are looking to use the Nano to offload their numerically intense experiments to a separate IoT device. I can see that the task is to build GPU-native versions of each package, which may have been a tall order.

Later

NVIDIA has a container repository (NGC) with many images you might find useful as a base. Docker is installed and running by default on the Nano.
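For instance, pulling and running NVIDIA’s L4T base image with GPU access looks something like this (the tag should match your L4T release; r32.3.1 is assumed here):

    # Run the L4T base container with the NVIDIA runtime so CUDA is visible inside it
    sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3.1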

Strange that pandas doesn’t install. I haven’t specifically tried pip installing it. You could try apt installing it with “sudo apt install python3-pandas” (or “python-pandas” for Python 2).

Not all packages (e.g. Pandas) need, or could benefit from, GPU acceleration. Those that do usually just need to be rebuilt. Pip does that for you most of the time, provided the system dependencies are installed. If you hit an error installing something with pip/pip3, post a thread here and people will help you out.
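If a pip build does fail, it’s usually a missing system dependency; for something like pandas, installing the basic toolchain first typically sorts it out (package names assumed from a stock Ubuntu 18.04 setup):

    # Headers and compiler toolchain pip needs to build native extensions
    sudo apt install build-essential python3-dev python3-pip

    # Then retry the plain pip install
    pip3 install --user pandas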

As to offloading heavy computation onto another device, you’re probably better off with x86 + NVIDIA. Tegra is great for the edge, but ARM CPUs are not workhorses.