Description
After replacing an NVIDIA RTX A2000 12GB with an NVIDIA A2 16GB Ampere card, the 550.163.01 driver fails to initialize the GPU at boot. The kernel module reports "This PCI I/O region assigned to your NVIDIA device is invalid", the probe fails with error -1, and no NVIDIA devices are initialized.
Environment
TensorRT Version:
GPU Type: NVIDIA A2 16GB Ampere AI Graphics Card
Nvidia Driver Version: 550.163.01
CUDA Version: 12.4
CUDNN Version:
Operating System + Version: Debian 13 (Linux 6.12.73+deb13-amd64 on x86_64)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
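
The blank fields above were not captured. If needed, they can be collected with commands like the following (a suggestion, not output from the affected machine; package names depend on how the stack was installed):

# Installed driver module/package version (modinfo works even when the module failed to load)
modinfo nvidia | grep -E '^(filename|version)'
dpkg -l | grep -E 'nvidia-driver|cudnn|tensorrt'

# CUDA toolkit version, if nvcc is installed
nvcc --version

# Python-side versions, if applicable
python3 -c "import tensorrt; print(tensorrt.__version__)"
python3 -c "import torch; print(torch.__version__)"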
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
The system was originally set up with an NVIDIA RTX A2000 12GB; the goal is to use an NVIDIA A2 16GB Ampere AI Graphics Card instead.
- Replaced the RTX A2000 with the A2
nvidia-detect
Checking card: NVIDIA Corporation GA107GL [A2 / A16] (rev a1)
Your card is supported by all driver versions.
Your card is also supported by the NVIDIA Linux Open GPU Kernel Module.
Your card is also supported by the Tesla 535 drivers series.
It is recommended to install the nvidia-driver package.
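
A useful follow-up check (suggested here, not run as part of the original report) is to inspect the PCI regions the firmware assigned to the card, using the bus address 0000:01:00.0 reported in the dmesg output below:

# Show the memory BARs/regions assigned to the GPU
lspci -vv -s 0000:01:00.0 | grep -E 'Memory at|Region'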
dmesg | grep -i nvidia
[43.917717] nvidia-nvlink: Unregistered Nvlink Core, major device number 237
[46.337551] nvidia-nvlink: Nvlink Core is being initialized, major device number 237
[46.337559] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
[46.338591] nvidia 0000:01:00.0: probe with driver nvidia failed with error -1
[46.338620] NVRM: The NVIDIA probe routine failed for 1 device(s).
[46.338621] NVRM: None of the NVIDIA devices were initialized.
[46.338828] nvidia-nvlink: Unregistered Nvlink Core, major device number 237
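
In general, the NVRM "PCI I/O region ... is invalid" message indicates that the kernel/firmware did not assign a usable memory region (BAR) to the GPU; on some platforms this can happen when the firmware cannot map a card's larger BAR (for example, with "Above 4G Decoding"/64-bit MMIO disabled). That reading is an assumption based on the error text, not a confirmed diagnosis for this system. The kernel log can be checked for BAR assignment failures:

# Look for PCI resource/BAR assignment messages for the GPU (bus address 0000:01:00.0 from the probe failure above)
dmesg | grep -iE '0000:01:00\.0|BAR [0-9]'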