Failed to run PyTorch NGC docker on Jetson Nano

I need to run pytorch model on Jetson nano, so I choose to Torch-TensorRT to convert from pytorch model to TensorRT, but I cannot run Pytorch docker on Jetson

sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/pytorch:22.05-py3
[sudo] password for loc:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: mount operation failed: /usr/src/tensorrt: no such file or directory: unknown.

Here is the Docker version I'm using:

 sudo nvidia-docker version
[sudo] password for loc:
NVIDIA Docker: 2.10.0
Client: Docker Engine - Community
 Version:           20.10.16
 API version:       1.41
 Go version:        go1.17.10
 Git commit:        aa7e414
 Built:             Thu May 12 09:16:54 2022
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.16
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.10
  Git commit:       f756502
  Built:            Thu May 12 09:15:20 2022
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.4
  GitCommit:        212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc:
  Version:          1.1.1
  GitCommit:        v1.1.1-0-g52de29d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Could anyone suggest how to run this container?
If I cannot install Torch-TensorRT, how else can I deploy a saved PyTorch model on Jetson Nano?

Hi @vision-hobbist1995, please use the l4t-pytorch container on Jetson instead:

https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
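
For reference, pulling and starting that container looks something like the following. The tag shown here is an assumption; pick the tag that matches your JetPack/L4T release from the NGC page:

```shell
# Pull the l4t-pytorch image (the r32.7.1-pth1.10-py3 tag is an assumption;
# choose the tag matching your JetPack/L4T version on the NGC page)
sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3

# Run it with the same NVIDIA runtime flags as the failing command above
sudo docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
```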

It has PyTorch, torchvision, torchaudio, and TensorRT installed, but not Torch-TensorRT, so you would need to install that yourself.


Could you share how to install Torch-TensorRT?
It seems that not many people use this tool on Jetson Nano for PyTorch models.

I haven’t built it myself, but here are the instructions: https://github.com/pytorch/TensorRT#compiling-torch-tensorrt

There’s also torch2trt which may be easier for you to install: https://github.com/NVIDIA-AI-IOT/torch2trt
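
For context, torch2trt's basic usage follows the pattern in its README, roughly as sketched below. This only runs on the device itself, since it needs CUDA and TensorRT, and the AlexNet model here is just an example stand-in:

```python
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# Load a model in eval mode on the GPU (torch2trt converts live modules)
model = alexnet(pretrained=True).eval().cuda()

# Example input fixing the shape the converted engine will expect
x = torch.ones((1, 3, 224, 224)).cuda()

# Convert to a TensorRT-backed module; it is called like the original model
model_trt = torch2trt(model, [x])
y_trt = model_trt(x)
```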


Thank you, dusty.
As I understand, there are several ways to run a PyTorch model on Jetson:

  • convert to ONNX
  • Torch-TensorRT
  • torch2trt

Do you know which way is most suitable for the Jetson Nano?

Personally I use the convert-to-ONNX way when possible, and then run the model through the TensorRT Python API (or through a library of mine called jetson-inference). This way typically doesn't depend on PyTorch at runtime for inference, and saves memory by not needing to import PyTorch at all. However, the PyTorch model needs to be compatible with ONNX, and you need to implement any pre/post-processing the model requires outside of PyTorch.

Otherwise, torch2trt seems easy to use when the model works with it. It’s easier to integrate with your existing PyTorch code, but depends on PyTorch at runtime. torch-tensorrt is newer and is under active development.

Thank you for your kind advice.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.