PyTorch and TorchVision for JetPack 6.2

Couldn’t find any working combination for JetPack 6.2.

My CUDA version is 12.6. Please help me.

Thanks

Could I use this command to install on the sm_87 AGX Orin?

pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
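
Once it installs, a quick way to sanity-check any wheel is to confirm that the build actually sees the Orin’s GPU. A minimal sketch, assuming the wheel installed cleanly:

import torch

print(torch.__version__, torch.version.cuda)   # wheel version and the CUDA version it was built against
print(torch.cuda.is_available())               # should print True on a working build
print(torch.cuda.get_device_capability(0))     # expect (8, 7) on an sm_87 AGX Orin
x = torch.randn(2, 2, device="cuda")
print(x @ x)                                   # a tiny kernel launch to confirm CUDA actually runs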

Bumping this post for more visibility.

I also can’t find a working version of any of these packages for the AGX Orin with JetPack 6.2.

PyTorch, TorchVision, TensorRT-LLM, TensorRT-LLM Edge: all fail to build on this device.


These don’t work? https://pypi.jetson-ai-lab.io/jp6/cu126
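
If those wheels match your JetPack, installing from that index should look something like this (a sketch; the exact versions hosted there may differ, and you may need PyPI as an extra index for dependencies):

pip install torch torchvision torchaudio --index-url https://pypi.jetson-ai-lab.io/jp6/cu126 --extra-index-url https://pypi.org/simple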


Hi,

Please find the packages for JetPack 6.2 in the link below:

Thanks.


I can confirm that both the 3.14 and 3.16 wheels for llama-cpp-python have the same corruption error as the official llama-cpp-python library. So, at least with my AGX, there is no currently supported version that has the correct fix. If the KV cache fills too quickly, it overloads and starts outputting only ‘#’ and/or another random token.
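
For anyone who wants to reproduce it, this is roughly the failure pattern (the model path is a placeholder; any 7B GGUF triggers it for me):

from llama_cpp import Llama

# placeholder model path; substitute your own GGUF
llm = Llama(model_path="/models/7b-q4_k_m.gguf", n_ctx=2048, n_gpu_layers=-1)

# push a long generation so the KV cache fills
out = llm("Write a very long story about a robot.", max_tokens=1500)
print(out["choices"][0]["text"])  # output degenerates into runs of '#' once the cache fills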

I have been testing versions and patterns for over a week now: new libraries, old libraries, every wheel I can find. It seems to be an issue with the CUDA drivers on the device and some kind of inconsistent buffer overflow on the CUDA cores.

The only fix I could find was running llama.cpp itself (the C++ binary) in a hidden terminal, then using a Python wrapper that reads its output through a terminal command to get semi-reliable results. This reaches 300+ tokens a second with a 7B model, but only delays the corruption until about 1,000 total KV-cache tokens.
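
Roughly, the wrapper is just a subprocess call around the llama.cpp binary. A sketch with placeholder paths (the -m/-p/-n/-c flags are llama.cpp’s standard model, prompt, prediction-length, and context-size options):

import subprocess

LLAMA_BIN = "/opt/llama.cpp/build/bin/llama-cli"  # placeholder path to the C++ binary
MODEL = "/models/7b-q4_k_m.gguf"                  # placeholder model path

def generate(prompt: str, n_predict: int = 256) -> str:
    # keep the context small (-c 1024) to delay the corruption described above
    result = subprocess.run(
        [LLAMA_BIN, "-m", MODEL, "-p", prompt, "-n", str(n_predict), "-c", "1024"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(generate("Summarize the AGX Orin in one sentence."))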

I’m not entirely sure whether it’s just my machine or my JetPack 6.2 install on the AGX, or whether this is a known problem. Either way, it’s making agentic systems development difficult, so I hope these kinds of things can be ironed out soon.

Next I will go back to trying the TensorRT libraries. I did not have a lot of luck with them before, but I’ll start fresh and see whether things can be resolved there.

Is this how you installed the package? If not, try it:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
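
If pip grabs a cached CPU-only wheel, the CMAKE_ARGS never take effect, so it can be worth forcing a fresh source build and then checking that the CUDA backend was actually compiled in (the check assumes a llama-cpp-python version that exposes llama_supports_gpu_offload):

CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python

python -c "import llama_cpp; print(llama_cpp.llama_supports_gpu_offload())"  # True means CUDA was compiled in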

I uninstalled everything and freshly reset all of my libraries. I upgraded to CUDA 12.9, ran the proper wheel install for that version, and tried the method you just gave again. Still the same problem.