How can I install PyTorch?

To install PyTorch, I followed the PyTorch for Jetson thread on the NVIDIA Developer Forums.
When I run import torch in python3, the following error occurs:

```
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/esal/.local/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module>
    from torch._C import *
ImportError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory
```

How can I solve this problem?

Hi lee2h, which version of JetPack are you using? Those PyTorch wheels should be installed on JetPack 4.2 or newer.

ImportError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory

libnvToolsExt.so should be installed by the CUDA toolkit under /usr/local/cuda/lib64 - so either SDK Manager did not install the CUDA toolkit to your TX2, or you should add these lines to the end of your ~/.bashrc file:

```
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
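
After reloading the shell, it's worth confirming that the library actually exists and that the import now succeeds. A quick check might look like this (a sketch, assuming CUDA 10.0 is installed under /usr/local/cuda-10.0 as in the exports above):

```
# Reload the updated environment in the current shell
source ~/.bashrc

# Confirm the CUDA toolkit actually ships libnvToolsExt
ls -l /usr/local/cuda-10.0/lib64/libnvToolsExt.so*

# Re-try the import that failed above
python3 -c "import torch; print(torch.__version__)"
```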

I ran into the same issue after following the same process, then realized these libraries are not included in the nvcr.io/nvidia/l4t-base:r32.3.1 image.

What is the recommended way to include these objects into an image?

Hi willcbaker, I believe those libraries/packages should be pulled or mapped from your Jetson root. But if you try the deepstream-l4t container, does it work then?

https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t
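
For context, when a container is started with the NVIDIA runtime, CUDA/cuDNN/TensorRT are mounted into it from the Jetson host rather than baked into the image, so they won't show up in the image contents themselves. A minimal sketch with the base image mentioned above (assuming the default CSV mount configuration on the host):

```
# Launch l4t-base with the NVIDIA runtime; CUDA, cuDNN and TensorRT are
# mounted in from the Jetson host, driven by the CSV files under
# /etc/nvidia-container-runtime/host-files-for-container.d/ on the host
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3.1
```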

I've tried all these images on a Jetson NX (NVIDIA L4T PyTorch | NVIDIA NGC) and I'm getting this error message:

```
root@6d1b568bfc54:/home/detector# python3
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 81, in <module>
    from torch._C import *
ImportError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory
```

I've tried reinstalling PyTorch, but the error is the same. All the other libraries are working well (TensorRT, NumPy, etc.), but PyTorch isn't.

Hi @adriano.santos, are you running the container with --runtime nvidia option?

Inside the container, do you see the file /usr/local/cuda/libnvToolsExt.so.1?

What version of JetPack-L4T are you using?
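
For anyone checking the same things, a quick sketch of the commands involved (paths and package names taken from this thread):

```
# Inside the container: look for the library that fails to load
find /usr/local/cuda* -name 'libnvToolsExt.so*'

# On the host: report the installed JetPack / L4T version
apt-cache show nvidia-jetpack | head -n 5
cat /etc/nv_tegra_release
```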

JetPack:

```
Package: nvidia-jetpack
Version: 4.4-b144
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 195
```

When I add --runtime=nvidia, I got this:

```
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\n\\\"\"": unknown.
```

JetPack 4.4-b144 is the Developer Preview (L4T R32.4.2), so you should try the containers with the r32.4.2 tag, like the following:

nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3

You should be able to run it like so:

```
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3
```
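
Once the container is up, a quick sanity check from inside it might be (a sketch):

```
# Verify that PyTorch loads and can see the GPU
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```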

Same issue for me with TensorFlow. It works outside of the container but not inside; it's impossible to import tensorflow.

Specs:
NVIDIA Jetson TX2
L4T 32.2.1 [ JetPack 4.2.2 ]
CUDA 10.0.326
CUDNN: 7.5.0.56

@dusty_nv, I've solved the problem. The issue was between the Docker image and the host. Maybe this process could be useful for you as well, @alexandremg0jh.

a) Reinstall the NVIDIA Docker dependencies (see the apt sketch after these steps):

  • libnvidia-container0_0.9.0_beta.1_arm64
  • libnvidia-container-tools_0.9.0_beta.1_arm64
  • nvidia-container-csv-cuda_10.2.89-1_arm64
  • nvidia-container-csv-cudnn_8.0.0.180-1+cuda10.2_arm64
  • nvidia-container-csv-tensorrt_7.1.3.0-1+cuda10.2_arm64
  • nvidia-container-csv-visionworks_1.6.0.501_arm64
  • nvidia-container-runtime_3.1.0-1_arm64
  • nvidia-container-toolkit_1.0.1-1_arm64
  • nvidia-docker2_2.2.0-1_all

b) Add the key "default-runtime": "nvidia" to the /etc/docker/daemon.json file (an example is shown after these steps) and restart the Docker service;
c) Run the command:

```
sudo docker run -it nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3 /bin/bash
```

d) Run python3;
e) Import the library (or libraries).
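
For step (a), a possible way to reinstall those packages from the JetPack apt repository is sketched below (package names taken from the list above; the exact versions depend on your JetPack release):

```
# Reinstall the NVIDIA container stack from the JetPack apt repository
sudo apt-get install --reinstall \
    libnvidia-container0 libnvidia-container-tools \
    nvidia-container-csv-cuda nvidia-container-csv-cudnn \
    nvidia-container-csv-tensorrt nvidia-container-csv-visionworks \
    nvidia-container-runtime nvidia-container-toolkit nvidia-docker2
```

For step (b), a typical /etc/docker/daemon.json on Jetson ends up looking roughly like this (your existing file may already contain the runtimes entry; only the default-runtime key is being added):

```
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After editing the file, restart Docker with sudo systemctl restart docker.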

That's it.