The installation was successful and PyTorch loads correctly. torch.cuda.is_available() returns True and the GPU name is detected.
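For context, this is roughly the check I ran to confirm the GPU is visible (a minimal sketch; the printed device name will differ per board):

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())       # returns True on this board
print(torch.cuda.get_device_name(0))   # GPU name is detected correctly
```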
Problem
When running any YOLO / Ultralytics model or even importing PyTorch functional ops, I receive this error:
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.2.6 as it may crash.
To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0.
UserWarning: Failed to initialize NumPy:
RuntimeError: NumPy is not available
This causes inference to crash.
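A minimal repro without Ultralytics looks like this (a sketch; my assumption is that the RuntimeError surfaces at the tensor-to-NumPy conversion, which is the same path inference hits):

```python
import torch  # importing torch already emits the "compiled using NumPy 1.x" UserWarning

t = torch.zeros(3)
# With NumPy 2.x installed and a wheel built against NumPy 1.x, this raises
# RuntimeError: Numpy is not available
print(t.numpy())
```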
My Understanding
It appears that:
The PyTorch wheel was built against NumPy 1.x, but
JetPack 6 images come with NumPy 2.x, causing an ABI mismatch.
Any guidance or an official compatibility matrix would be extremely helpful.
Thank you!
The issue I’m facing is that when I downgrade NumPy to <2 so that PyTorch works, my OpenCV CUDA build stops working.
The reason seems to be:
PyTorch wheels for JetPack 6.1 require NumPy 1.x.
OpenCV with CUDA on JetPack was compiled against NumPy 2.x.
So when I downgrade NumPy, the OpenCV CUDA bindings break.
If I keep NumPy 2.x, PyTorch fails instead.
So effectively PyTorch and OpenCV CUDA end up requiring different NumPy ABI versions under JetPack 6.1.
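In case it helps, this is the quick check I run after every NumPy change to see which side breaks (a sketch; the cv2.cuda calls assume the CUDA-enabled OpenCV build is the one on the Python path):

```python
import numpy as np
import torch
import cv2

print("numpy :", np.__version__)
print("torch :", torch.__version__, "| CUDA:", torch.cuda.is_available())
print("opencv:", cv2.__version__, "| CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())

# torch -> NumPy bridge: fails with NumPy 2.x when the wheel was built against 1.x
print("torch->numpy:", torch.arange(4).numpy())

# NumPy -> OpenCV CUDA bridge: fails when cv2 was built against the other NumPy ABI
img = np.zeros((64, 64, 3), dtype=np.uint8)
gpu = cv2.cuda_GpuMat()
gpu.upload(img)
print("cv2 CUDA round-trip:", gpu.download().shape)
```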
My question:
Is there an official way to install both PyTorch and OpenCV CUDA that are built against the same NumPy version, so that both can work together?
Generally, prebuilt packages expect a specific version of each dependency; as you saw, the fact that something installs doesn't mean it will be compatible. Since you have custom versions, it would be better to build PyTorch from source on your Jetson board. To do so, follow the instructions described in this repo (the Linux instructions only). It recommends using a Conda virtual environment; while not strictly necessary, doing so is a good idea. Be sure to specify the correct CUDA version (12.6) when executing the install_magma_conda.sh script.
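If you want to see what each installed wheel declares before rebuilding, a rough sketch like this can help (it only reports declared requirements from the package metadata, not the NumPy ABI the binary was actually compiled against; the package names are assumptions, so adjust them to whatever your OpenCV CUDA build registers itself as):

```python
from importlib.metadata import requires, version, PackageNotFoundError

# Package names are assumptions; a custom OpenCV CUDA build may register under a different name.
for pkg in ("torch", "opencv-python", "numpy", "ultralytics"):
    try:
        print(pkg, version(pkg))
        for req in (requires(pkg) or []):
            if "numpy" in req.lower():
                print("  declares:", req)
    except PackageNotFoundError:
        print(pkg, "not installed")
```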