Description
I am following the Building TensorRT-OSS instructions from GitHub - NVIDIA/TensorRT (NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs; the repository contains the open source components of TensorRT) on SLES 15. During the cmake step I get errors like:
CUDA_ARCHITECTURES is empty for target "nvinfer_plugin"
Is that something I need to address? If so, what should I do? I tried passing -DCUDA_ARCHITECTURES, but it didn't work.
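For reference, my understanding from the CMake documentation is that the cache variable is spelled CMAKE_CUDA_ARCHITECTURES (with the CMAKE_ prefix), and the TensorRT-OSS CMakeLists also exposes a GPU_ARCHS option. A sketch of the invocations I would expect to work, assuming the A2 is compute capability 8.6 (SM 86):

```shell
# Option 1: standard CMake cache variable (note the CMAKE_ prefix):
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out \
         -DCMAKE_CUDA_ARCHITECTURES=86

# Option 2: the GPU_ARCHS option from the TensorRT-OSS build docs:
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out \
         -DGPU_ARCHS="86"
```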
Environment
TensorRT Version: 10.0.1
GPU Type: A2
Nvidia Driver Version:
CUDA Version: 12.2.0
CUDNN Version: 8.9
Operating System + Version: SLES 15 (kernel 5.14.21-150500.55.7-default)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal
Steps To Reproduce
Following the Building TensorRT-OSS instructions from GitHub - NVIDIA/TensorRT.
Example: Linux (x86-64) build with default cuda-12.5*
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
Getting CMake Warnings like this:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC, empty CUDA_ARCHITECTURES not allowed…
CUDA_ARCHITECTURES is empty for target "nvinfer_plugin".
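For context, CMP0104 is the CMake policy (introduced in CMake 3.18) under which an empty CMAKE_CUDA_ARCHITECTURES becomes an error once the CUDA language is enabled. A minimal stand-alone sketch of the pattern, as I understand it — this is a hypothetical example, not TensorRT's actual CMakeLists, and SM 8.6 for the A2 is an assumption:

```cmake
cmake_minimum_required(VERSION 3.18)  # CMP0104 was introduced in 3.18
project(demo LANGUAGES CXX)

# Setting the architectures before enabling CUDA avoids the
# "CUDA_ARCHITECTURES is empty for target ..." error under CMP0104.
set(CMAKE_CUDA_ARCHITECTURES 86)  # assumption: SM 8.6 for the A2
enable_language(CUDA)

add_library(plugin_demo SHARED plugin.cu)  # hypothetical target/source
```

The same effect should be achievable from the command line with -DCMAKE_CUDA_ARCHITECTURES=86, without editing the project files.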