Am I going crazy here, or is it impossible to get this combination to work?
I need CUDA 11.8 to get PyTorch running, but TensorRT is only supported on CUDA 11.4, which PyTorch isn't built for?
I’m probably missing something, as I’m no expert in this, but managing all these specific dependencies, plus the patchy availability of non-standard builds for aarch64, is a right pain in the…
Oh, and also JetPack 5.1, because although JetPack 5.1.1 has support for PyTorch, there isn’t a version of torchvision that works with it.
There’s probably a knowledge bomb coming where I’ve missed something obvious, but I’m really going round in circles with this one. I think I’ve flashed my Orin about 15 times now.
I have a standard NVIDIA Orin developer kit board, the one with the USB ports, DisplayPort, PCIe slot, etc.
Thanks in advance for your help. Hoping JP6 solves all my problems :)
P.S. I’m just looking to run a YOLO model that currently runs slowly on PyTorch, export it, and run it with TensorRT (because Ultralytics promises it can be 5-6x faster with TensorRT).
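
For reference, this is roughly what I’m trying to do, a minimal sketch using the Ultralytics export API (the model and image filenames here are just placeholders):

```python
from ultralytics import YOLO

# Load the PyTorch weights (placeholder filename)
model = YOLO("yolov8n.pt")

# Export to a TensorRT engine -- this is the step where
# TensorRT and CUDA need to line up on the Orin
model.export(format="engine", device=0, half=True)

# Run inference with the exported engine
trt_model = YOLO("yolov8n.engine")
results = trt_model("test.jpg")
```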