Error when "import torch" is executed in python3

How can we find out which containers share the same base image and which don't?
Is there any method to determine this?

We currently have a 240 GB NVMe M.2 drive. Can I select this "NVMe" option in SDK Manager instead of "eMMC" to flash my software components? I hope it flashes onto the NVMe drive only and everything works fine. Please confirm.

Oh, OK.
Thanks for the info. So it means we can update the mounted folders from the container terminal itself, and these updates will be reflected automatically in the corresponding folders in the file system outside the containers.
Basically, this can be used for the working folders of the CUDA developers: they can work inside the container terminal itself and have access to their code/files from the outside file system there. Please confirm if my understanding is correct.
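As a concrete sketch of that workflow (the host path and container image tag here are hypothetical, adjust them to your setup), a bind mount makes the same files visible on both sides:

```shell
# Bind-mount the host folder /home/trident/projects (hypothetical path) at
# /workspace inside the container. Both paths refer to the same files, so
# edits made on either side are immediately visible on the other.
sudo docker run -it --rm \
    -v /home/trident/projects:/workspace \
    nvcr.io/nvidia/l4t-base:r35.1.0 \
    bash -c 'echo "written from inside the container" > /workspace/note.txt'

# Back on the host, the file created inside the container is already there:
cat /home/trident/projects/note.txt
```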

ok fine. Thanks for the information.

Uninstalling via apt commands would be done from the target. As there are many software components installed, uninstalling all of them individually through apt may not work out properly. Some dependency files might remain behind in various folders, which could have side effects on application functionality later on.

As I will be installing many components, all related to the deep learning package, can we uninstall all of them fully via SDK Manager from the host PC? I just wanted to know.
Anyhow, I might open a new thread separately for SDK Manager installation queries soon.

Thanks.

You would have to look at their Dockerfiles and the base images they use; I don't think there is a realistic way to determine it otherwise. If you are concerned about disk space, it is probably best to just pull all the containers you need and see how much space they actually take.
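That said, once the images are pulled, docker can report the overlap itself. A rough sketch (the image names and tags below are just examples):

```shell
# Images are stacks of content-addressed layers; listing the layer digests
# of two images and intersecting them shows which layers they share.
docker image inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' \
    nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3 > pytorch-layers.txt
docker image inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' \
    nvcr.io/nvidia/l4t-ml:r35.2.1-py3 > ml-layers.txt
comm -12 <(sort pytorch-layers.txt) <(sort ml-layers.txt)  # layers in both

# Summarizes per-image sizes and how much disk space is shared vs. unique.
docker system df -v
```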

I believe so, although I haven't flashed the OS to NVMe before (I tend to flash the OS to eMMC or SD card and use NVMe for projects, data, containers, and code). If you encounter problems flashing to NVMe, I recommend you open a new topic about SDK Manager, as you alluded to later in your post.

Then I suggest, after your initial development, if you end up using containers and don't need the JetPack packages installed on the device's native filesystem (outside the container), just re-flash your device and this time don't have SDK Manager install them.

@dusty_nv

I was able to successfully install NVIDIA JetPack on my device.
Later, while trying to install PyTorch, executing the step below gives the error shown:
Kindly let us know how to resolve this.

root@trident-desktop:/home/trident# pip3 install Cython<3
bash: 3: No such file or directory

I am following the steps below to install, as shared by you at the beginning of the thread:

> Python 3.6

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
pip3 install Cython<3
pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl

@nagesh_accord try pip3 install 'Cython<3' instead
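The underlying reason: without quotes, the shell parses <3 as an input redirection from a file named 3 before pip ever runs. A small demonstration of the same parsing, with no pip involved:

```shell
# Unquoted, the shell splits the command at `<` and tries to redirect stdin
# from a file literally named "3"; that file does not exist, so the command
# never runs and the shell prints the "No such file" error seen above.
unquoted_error=$( { echo Cython<3; } 2>&1 ) || true
echo "shell error: $unquoted_error"

# Quoted, the whole requirement specifier survives as a single argument,
# which is what pip needs in order to see the version constraint.
quoted='Cython<3'
echo "pip3 install $quoted"
```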

Thanks. It worked for me.

Finally, I was able to completely install the JetPack 5.1.2 components (with the command “sudo apt-get install nvidia-jetpack”) plus PyTorch and OpenCV built with CUDA on my Jetson AGX Xavier.
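If it's useful to anyone following along, a quick sanity check that the installed PyTorch and OpenCV builds actually see CUDA (run on the Jetson itself; assumes the default python3 environment):

```shell
# Should print the torch version followed by True if CUDA is usable.
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Should show CUDA-related lines (e.g. "NVIDIA CUDA: YES") in the build info.
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda
```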

Next, in case of any further queries or issues from the customer while executing sample programs for CUDA, OpenCV, PyTorch, ONNX, TensorRT, etc., I will ask here.

Thanks a lot again for the support @dusty_nv

OK great, glad you got it working! For new issues please feel free to open another topic, and consider this one resolved for now.


Sure. Thanks a lot for your help and guidance. I learnt a lot from you; in particular, the Docker image and container concepts were very new to me.


I tried running this command on the target to force the unit into recovery mode,
but it is throwing an error saying:

reboot: invalid option -- 'o'

Any idea why?

@nagesh_accord it looks like it needs --force instead of -force
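For reference, the full command would then be (I believe this is the correct long-option form on JetPack, but please double-check before running, since it reboots the unit into recovery mode):

```shell
# Reboot the Jetson into force-recovery mode; note the double dash.
sudo reboot --force forced-recovery
```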

Yes, it worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.