Jetson Orin, TensorRT, CUDA 11.8 for PyTorch 2.0.0

Am I going crazy here, or is it impossible to get this combination to work?

I need CUDA 11.8 to get my PyTorch build to run, but TensorRT on Jetson is only supported with CUDA 11.4, which my PyTorch build is not?
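To make the mismatch concrete, here's a minimal sketch. It assumes the Jetson rule that TensorRT and PyTorch must both link against the same JetPack CUDA toolkit, so their CUDA major.minor versions have to agree:

```python
def cuda_match(pytorch_cuda: str, tensorrt_cuda: str) -> bool:
    """True if both libraries were built against the same CUDA major.minor."""
    return pytorch_cuda.split(".")[:2] == tensorrt_cuda.split(".")[:2]

print(cuda_match("11.8", "11.4"))  # False -- the combination in this post
print(cuda_match("11.4", "11.4"))  # True  -- a matching pair would work
```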

I’m probably missing something, as I’m no expert in this, but managing all these specific dependencies, plus availability for aarch64 as a non-standard platform, is a right pain in the…

Oh, and I’m also on JP 5.1, because although JP 5.1.1 has support for PyTorch, there isn’t a version of torchvision that works with it.

There’s probably a knowledge bomb coming where I’ve missed something obvious, but I’m really going round in circles with this one. I think I’ve flashed my Orin about 15 times now.

I have the standard NVIDIA Orin developer kit board, the one with the USB ports, DisplayPort, PCIe slot, etc.

Thanks in advance for your help. Hoping JP6 solves all my problems :)

p.s. I’m just looking to take a YOLO model that currently runs slowly under PyTorch, export it, and run it with TensorRT (Ultralytics promises it can be 5x/6x faster with TensorRT).
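For reference, the export flow I’m attempting looks like this. This is a sketch using the documented Ultralytics API; `yolov8n.pt` is just an example checkpoint name, and the guard only means the snippet degrades gracefully on a machine where `ultralytics` isn’t installed:

```python
def export_to_tensorrt(weights: str = "yolov8n.pt"):
    """Export a PyTorch YOLO checkpoint to a TensorRT engine and reload it."""
    from ultralytics import YOLO  # requires the ultralytics package
    model = YOLO(weights)                         # load the PyTorch checkpoint
    engine_path = model.export(format="engine")   # builds a TensorRT .engine on-device
    return YOLO(engine_path)                      # reload the engine for fast inference

try:
    trt_model = export_to_tensorrt()
except ImportError:
    print("ultralytics is not installed; `pip install ultralytics` first")
```

Building the engine has to happen on the Jetson itself, since TensorRT engines are specific to the GPU they were built on.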

p.p.s. I’m running VS Code with a virtual environment managed by conda.


Please use our prebuilt PyTorch wheel, which supports CUDA 11.4 directly.
The installation guide can be found in the topic you shared above.

Do you build TorchVision from source?
Could you share more info/logs about the TorchVision issue with us?

We also have a container with PyTorch and TorchVision pre-installed.
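Pulling and starting that container looks roughly like this (a sketch: the tag must match your L4T/JetPack release, and `r35.2.1-pth2.0-py3` is only an assumed example for JP 5.1; check NGC for the tag that matches your system):

```shell
# Run NVIDIA's l4t-pytorch container (PyTorch + TorchVision preinstalled).
# --runtime nvidia exposes the Jetson GPU inside the container.
sudo docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3
```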


Thanks for the reply.

I have CUDA 11.8, so indeed I can’t use TensorRT?


No, since we only provide TensorRT built with CUDA 11.4 for Jetson.
But there is an upcoming JetPack 6 release with the newer compute libraries.
