Environment:
- TensorRT Version: 8.6.1
- GPU: NVIDIA RTX 3060
- NVIDIA Driver Version: 535
- CUDA Version: 12.2
- Operating System & Version: Ubuntu 22.04 (Docker image: nvcr.io/nvidia/tritonserver:23.11-py3)
Output of `nvcc --version` inside the container:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda compilation tools, release 12.3, V12.3.52
Build cuda_12.3.r12.3/compiler.33281558_0
```
I am trying to deploy the NVIDIA Optical Character Detection and Recognition Solution with Triton Server in a Docker container (nvcr.io/nvidia/tritonserver:23.11-py3). However, when starting inference on an RTX 3060, I encounter the following error:

“Failed: the provided PTX was compiled with an unsupported toolchain.”
Interestingly, this issue does not occur on an RTX A4000, where the deployment works as expected.
Could you please provide guidance on resolving this error for the RTX 3060? Any insights would be greatly appreciated.
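One thing I noticed while gathering the versions above: the host driver (535, which corresponds to CUDA 12.2) is older than the CUDA toolkit inside the 23.11 container (12.3, per `nvcc --version`), and this kind of mismatch is a common cause of the "unsupported toolchain" PTX JIT error. A minimal sketch of the version comparison, using the values from this post (on a live system they would come from `nvidia-smi` on the host and `nvcc --version` in the container):

```shell
#!/bin/sh
# Hypothetical check: is the container's CUDA toolkit newer than what the
# host driver supports? Values are hard-coded from this report.
driver_max_cuda="12.2"   # max CUDA version supported by the R535 host driver
container_cuda="12.3"    # toolkit version reported by nvcc in the container

# sort -V orders version strings; if the container toolkit sorts last and
# differs from the driver's maximum, a PTX JIT failure is plausible.
newest="$(printf '%s\n%s\n' "$driver_max_cuda" "$container_cuda" | sort -V | tail -n1)"
if [ "$newest" != "$driver_max_cuda" ]; then
    echo "driver too old for container toolkit"
else
    echo "driver supports container toolkit"
fi
```

With the values above this prints "driver too old for container toolkit". I am not certain this is the root cause here, since the RTX A4000 machine works, but it seemed worth flagging.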
Thank you!