Torch-TensorRT installation failed

I am trying to install Torch-TensorRT in the nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3 container on JetPack 5.1.0, following this link: Installation — Torch-TensorRT v2.5.0.dev0+0ef880d documentation, and compiling torch_tensorrt from source. I cloned the torch_tensorrt repository and followed all the instructions in the Dockerfile here: jetson-containers/packages/pytorch/torch_tensorrt/Dockerfile at master · dusty-nv/jetson-containers · GitHub
I used torch_tensorrt 1.4.0 for the build. It compiles and I get a wheel file, but when I run pip3 install --no-cache-dir --verbose /opt/torch_tensorrt*.whl it fetches a new version of torch, which is the CPU-only build, and then I cannot import torch_tensorrt. It gives me an error that libtorch_cuda.so is missing. Can you tell me what I am doing wrong and recommend another method to install torch_tensorrt on Jetson Orin?

Hi @user163682, try running pip3 install --index-url http://jetson.webredirect.org/jp5/cu114 --trusted-host jetson.webredirect.org /opt/torch_tensorrt*.whl - that should have it pull the CUDA-enabled PyTorch wheel instead of the CPU-only one. Alternatively, you can reinstall the correct PyTorch wheel afterwards, or use --no-deps with pip and manually install whatever dependencies torch_tensorrt needs.
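For example, something along these lines should work (the local PyTorch wheel filename below is just a placeholder - use whichever CUDA-enabled wheel matches your JetPack version):

```bash
# Option 1: resolve dependencies against the Jetson wheel index so pip picks
# the CUDA-enabled torch build rather than the CPU-only one from PyPI
pip3 install --index-url http://jetson.webredirect.org/jp5/cu114 \
    --trusted-host jetson.webredirect.org /opt/torch_tensorrt*.whl

# Option 2: install the torch_tensorrt wheel without touching dependencies,
# then reinstall your CUDA-enabled PyTorch wheel if pip already replaced it
pip3 install --no-deps /opt/torch_tensorrt*.whl
pip3 install /opt/torch-2.0.0-cp38-cp38-linux_aarch64.whl  # placeholder wheel name
```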

Alternatively, you could try using these torch_tensorrt containers (which include PyTorch and torchvision) -

https://hub.docker.com/r/dustynv/torch_tensorrt/tags
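For example (the tag is a placeholder - pick the one matching your L4T/JetPack release from the tags page above):

```bash
# pull the prebuilt torch_tensorrt container for your L4T release (tag is a placeholder)
docker pull dustynv/torch_tensorrt:<l4t-tag>

# run it with the NVIDIA runtime so the GPU, CUDA, and TensorRT are available inside
docker run -it --rm --runtime nvidia --network host dustynv/torch_tensorrt:<l4t-tag>
```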

And just in case you continue having issues, there is also the torch2trt project and container, which tend to be easier to install.
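If you go the torch2trt route, installing it from source is typically just the following (a rough sketch, assuming PyTorch and TensorRT are already present in your environment):

```bash
# clone and install torch2trt from source
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
python3 setup.py install
```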
