I can't connect to the JupyterLab server according to the course

Dear Sir:
I can’t connect to the JupyterLab server as described in the Jetson course. I have tried configuring the Jetson Nano with a monitor, mouse, and keyboard. After that, I can connect to the Jetson Nano over SSH with my own user account, but I can’t do what the Jetson Nano course describes: the headless method to get started. What is wrong? I have tried the debugging tips, and there is no problem with the hardware. How do I deal with this?

Dear Cauchineyang,

The Jetson Nano course has a 7.1 GB image titled ‘dlinano_v1-0-0_image_20GB.zip’ at https://developer.download.nvidia.com/training/nano/dlinano_v1-0-0_image_20GB.zip

Flash that image with Etcher and boot your Jetson from it. Then open a browser on your host, go to http://192.168.55.1, and it will open JupyterLab.

Regards,
Suryadi

Hi Suryadi,

Thanks for your comments.

I am a bit puzzled. To get started, is it better to go for the headless setup (using the dlinano boot image) or to go down the standalone workstation path (with the Jetson boot image)? I didn’t realise that there is more than one boot image, and I tried the headless solution with the Jetson image (frankly, that’s the way I read the instructions, like Cauchineyang I guess) and ended up in the weeds.

Are the different ways to get going discussed anywhere?

Thanks Chris

Dear Chrisyf54h,

If you want to learn and finish the DLI Introduction to Jetson Nano course, go with the DLI image in headless mode; you can work remotely from your host by opening http://192.168.55.1:8888 and using JupyterLab.
Otherwise, use an HDMI monitor, keyboard, mouse, network, and webcam with the JetPack 4.2.1 image.

In my experience, I use the DLI Nano image and also use a full set of HDMI monitor, keyboard, mouse, network, and webcam.

Suryadi

Hi Suryadi,

Yup, I got it sorted. I just confused the two boot images because they are presented in the same way in two different parts of the website, and I didn’t realise that the link in the headless tutorial was not the same one I had already downloaded from a different (but very similar) page.

Thanks again,

Chris

suryadi,

I installed the main SD card image for the Nano, version 32.3.1 GA.
I can access it via hardwired Ethernet with VNC and SSH.

I thought JetPack was part of this image. Is it not?

I was not able to access Jupyter or JupyterLab via 192.168.55.1 even though I can ping that address remotely. It appears that JupyterLab is not running.

Where is all the JetPack stuff like Jupyter and the AI libraries?

I had to manually install pip3 and eventually installed Jupyter and JupyterLab, but when I run a Python 3 notebook, it doesn’t even find pandas. It looks like this card image is lacking all the data science (AI) stuff that is touted.

I installed 32.3.1.GA yesterday and that is what I am running now.

There seems to be a lack of documentation showing a package inventory for the SD card images.
If I reflash now, I will lose a day and a half of configs that work.

What do you suggest?


I am also unable to get anything to work the way the course expects. I flashed the AI Nano v1_1_1 image with Etcher as described in the instructions.

When I boot up the Jetson and connect the micro USB to my host desktop computer, I can see it in the /dev/ directory and even use the screen command to connect to it. I cannot, however, use any internet protocol with it. I can see that the IP address of the Jetson Nano is the expected value of 192.168.55.1, yet when I try to ping that address, I get nothing back. The JupyterLab interface does not connect either.

Everything appears to be working except the IP stack! How ridiculous, what am I supposed to do here?

Hi Joel,

I have since gotten everything going great and I will share that with you.

Make sure you take the Getting Started course, because it will introduce you to some working material. It requires that you boot off the DLINANO image.

After that:

I set out to use the standard SD card image (not DLINANO), even though I have the DLINANO image and it does work.
I think we want to be able to create a single card image that has everything installed.

Install PyTorch, torchvision, Keras, and TensorFlow; then we can run Jupyter notebooks headless.
OpenCV (cv2) is already there, so you don’t need to install that.
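
As a quick sanity check once those installs finish, here is a minimal sketch (my own addition, not something from the course) that just confirms everything imports and that PyTorch can see the GPU:

# Hedged sanity check: only verifies the installs import and that CUDA is visible.
import cv2
import torch
import torchvision
import tensorflow as tf
import keras  # standalone Keras with the TensorFlow backend, as used further down this thread

print("OpenCV     :", cv2.__version__)
print("PyTorch    :", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("TensorFlow :", tf.__version__, "| GPU visible:", tf.test.is_gpu_available())
print("Keras      :", keras.__version__)

If any of those imports fail, go back over the dependency steps on the page linked below.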

If you have a Wi-Fi card, that’s great. If not, you can just connect an Ethernet cable to the interface on the board; that works well.
The reason they want you to bring the system up over the micro USB is that it is pretty straightforward. It lets you set some things up without going the monitor, keyboard, and mouse route. Once you have the IP address, you can usually get SSH working, and then you are closer to a fully headless setup.

So I followed all the directions on this page:

https://devtalk.nvidia.com/default/topic/1049071/#5324123

It involves a lot of dependencies and sometimes it fails, but I eventually got it all installed correctly, and now I can import any of those packages into a notebook or a Python script.

My proof of concept was to run some of the notebooks that were provided in the DLINANO image. You can find the source here:

https://github.com/sangyy/jetson-dlinano

These notebooks prove that a lot of the pieces work and give you a harness in which to experiment with the camera and deep learning.


To get JupyterLab to run, I edited jupyter_notebook_config.py to add this line:

c.NotebookApp.ip = '*'

This allows any IP address to access the notebook server remotely, assuming it is on a private network behind a firewall.
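
For reference, here is a minimal sketch of what that section of the config might look like; only the ip line is from this thread, and the other options are common settings I am assuming you may want, so adjust to taste:

# Sketch of ~/.jupyter/jupyter_notebook_config.py (only the ip line is required for remote access)
c.NotebookApp.ip = '*'               # listen on all interfaces so a remote host can connect
c.NotebookApp.port = 8888            # the port used further down in this post
c.NotebookApp.open_browser = False   # the board is headless, so don't try to launch a browser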

Then you just run:

sudo jupyter lab --allow-root

It runs on the board’s IP address on port 8888, so you open it in a browser with:

192.168.x.x:8888

And you will see the swirling logo and orbiting balls indicating that it is coming up.


Memory is slim. Set up a 4 GB swap file (or larger) so regular program data can swap out and not crush your RAM.

I don’t recommend trying to train models on the Nano; there really isn’t enough on-board RAM for that. I was able to run MNIST with Keras/TF, but not a very deep structure. It had about 150 MB left after the dataset loaded and barely ran, but it did. It runs faster than Colab without a GPU, but slower than Colab’s GPUs.

Here is the model summary that ran on 42,000 28x28 images:

Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1600)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               204928    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
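
In case anyone wants to reproduce that run, here is a sketch of a Keras model that gives the same layer shapes and parameter counts as the summary above; the activations, optimizer, and loss are my guesses, since they are not shown in the summary:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Reconstructed from the summary: two conv/pool blocks, then two dense layers.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # -> (26, 26, 32), 320 params
    MaxPooling2D((2, 2)),                                            # -> (13, 13, 32)
    Conv2D(64, (3, 3), activation='relu'),                           # -> (11, 11, 64), 18,496 params
    MaxPooling2D((2, 2)),                                            # -> (5, 5, 64)
    Flatten(),                                                       # -> 1600
    Dense(128, activation='relu'),                                   # 204,928 params
    Dense(10, activation='softmax'),                                 # 1,290 params
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()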

Here are the actual epochs running:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Train on 42000 samples, validate on 18000 samples
Epoch 1/2
42000/42000 [==============================] - 32s 773us/step - loss: 0.2669 - accuracy: 0.9165 - val_loss: 0.0781 - val_accuracy: 0.9761
Epoch 2/2
42000/42000 [==============================] - 25s 586us/step - loss: 0.0667 - accuracy: 0.9792 - val_loss: 0.0511 - val_accuracy: 0.9848
Elapsed Training Time : 64.940 seconds

According to NVIDIA, the Nano is a low-cost inference engine that works best on pre-trained image models, since it can infer at more than 30 fps. That means you can have it do object recognition on moving video and still have cycles to spare, even on larger images.

The challenge for us is to get pre-trained models to use for inference on the Nano. Then we can take the sample code that grabs frames from the camera and pushes them through the CUDA pipeline, where it can make very efficient use of the GPU.

Also, check out this repo. Dustin referred me to it for the inference API.

https://github.com/dusty-nv/jetson-inference
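
To give a feel for it, here is a rough sketch in the style of the Python samples in that repo; the module, class, and method names below are from memory and may differ depending on which release of jetson-inference you build, so treat it as an outline rather than the exact API:

# Hedged sketch of camera classification in the style of the jetson-inference Python samples.
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")               # load a pre-trained classification model
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")  # USB webcam via GStreamer
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()              # grab a frame into CUDA memory
    class_idx, confidence = net.Classify(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("{:s} ({:.2f}%)".format(net.GetClassDesc(class_idx), confidence * 100))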

Good luck and share your journey here.


mlpracticioner, thanks for your swift and detailed response. I know I can figure out how to manually get some sort of workflow going between my computer and the Jetson. I’m just upset that these tutorials seem entirely unrealistic. All of the detailed steps you outlined should already be taken care of, according to the tutorial instructions. There should not be this level of messing around simply to follow an introductory-level tutorial.

I’d like to hear from someone from nvidia because they need to fix the tutorial at the very least.

I think NVIDIA is aware of this and is working on an improvement.

My suggestion to them was to create some Docker containers that allow the environment to be set up so that all of the tutorial notebooks run without problems. This way we can keep the setup we have and add the containers to handle the dependencies involved.

It isn’t a great idea to force a new bootable SD card image to fix these issues. A lot of us have already brought up Ubuntu with the previous images and have hefty configurations already in place.

Wiping everything with a new sd-card is not in the cards for me.

I had the same problem, and this looks like an abject mess to fix without nuking the original SD card. The fix by mlpractitioner is excellent, but honestly, someone severely dropped the ball by presenting one configuration to install and a second configuration for the class. Not a great first impression.

I’ve got the same problem and I’m completely lost now…

So far this experience sucks badly!!!

I’m on a slow internet connection, so I’ve lost an entire day’s work to this already…