PyTorch with GPU on Drive AGX Orin

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
1.9.2.10884
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi,

Running a PyTorch script (Mask R-CNN inference) on a DRIVE AGX Orin works, but it does not detect or take advantage of the GPU.
torch.cuda.is_available() returns False and inference is really slow.

It seems CUDA / cuDNN are not detected.

We have CUDA 11.4 and torch 2.0.1 in Python.
Does anyone know of a set of versions that works well together?
Does it require recompiling PyTorch for the board?
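
For reference, a quick check like the following (standard PyTorch API only, nothing board-specific) distinguishes a CPU-only wheel from a CUDA mismatch; if torch.version.cuda prints None, the installed package was built without CUDA support:

```python
import torch

# Print what the installed PyTorch wheel was built against. An aarch64 wheel
# installed from PyPI is typically CPU-only, in which case torch.version.cuda
# is None and torch.cuda.is_available() returns False regardless of drivers.
print("torch version:  ", torch.__version__)
print("built with CUDA:", torch.version.cuda)              # None => CPU-only build
print("cuDNN version:  ", torch.backends.cudnn.version())  # None if cuDNN unavailable
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```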

Thanks for any hint,

Regarding your query about the PyTorch script not taking advantage of the GPU and CUDA/cuDNN not being detected: have you tried exporting your PyTorch model to ONNX format? The DRIVE OS TensorRT developer guide (NVIDIA TensorRT 8.5.10 Developer Guide for DRIVE OS :: NVIDIA TensorRT for DRIVE OS) recommends exporting to ONNX before running inference on DRIVE AGX Orin.
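
As an illustrative sketch only (the weights, input size, and opset below are assumptions to adapt to your own model and to the opset supported by your TensorRT 8.5 build), an export of a torchvision Mask R-CNN could look like this:

```python
import torch
import torchvision

# Load a torchvision Mask R-CNN and put it in eval mode for export.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Detection models take a list of 3D (C, H, W) tensors rather than a 4D batch.
dummy = [torch.randn(3, 800, 800)]

# Export to ONNX; dynamic axes are omitted here for brevity.
torch.onnx.export(
    model,
    (dummy,),
    "maskrcnn.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["boxes", "labels", "scores", "masks"],
)
```

On the target, the exported file can then be fed to trtexec (e.g. trtexec --onnx=maskrcnn.onnx) to try building and timing a TensorRT engine, which avoids depending on a CUDA-enabled PyTorch build altogether.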

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.