Building for TensorRT version: 7.2.3, library version: 7
CMake Error at /usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.21/Modules/CMakeDetermineCUDACompiler.cmake:212 (message):
Couldn't find CUDA library root.
Call Stack (most recent call first):
CMakeLists.txt:46 (project)
-- Configuring incomplete, errors occurred!
See also "/app/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/app/TensorRT/build/CMakeFiles/CMakeError.log".
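This "Couldn't find CUDA library root" error from CMakeDetermineCUDACompiler.cmake usually means CMake could not locate nvcc or the CUDA toolkit. A minimal sketch of pointing CMake at the toolkit explicitly (paths assume the default `/usr/local/cuda` install inside the container; adjust to your actual toolkit location and TensorRT checkout):

```shell
# Make sure nvcc is visible to CMake (assumed default install path)
export PATH=/usr/local/cuda/bin:$PATH
export CUDACXX=/usr/local/cuda/bin/nvcc

# From the TensorRT OSS build directory, pass the locations explicitly.
# TRT_LIB_DIR is the TensorRT OSS cmake variable; the lib path here is an
# assumption for an x86_64 Ubuntu container.
cmake .. \
  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
  -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
```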
Below is the relevant portion of CMakeError.log:

Checking whether the CUDA compiler is NVIDIA using "" did not match "nvcc: NVIDIA \(R\) Cuda compiler driver":
Checking whether the CUDA compiler is Clang using "" did not match "(clang version)":
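The empty string in those `using ""` checks suggests nvcc produced no output at all, i.e. it was never found or executed. A quick way to confirm whether nvcc is actually present and on the PATH inside the container (plain diagnostic commands, nothing specific to this setup):

```shell
# Is nvcc on the PATH?
which nvcc || echo "nvcc not on PATH"

# Is a CUDA toolkit installed anywhere under /usr/local?
ls -d /usr/local/cuda*/bin/nvcc 2>/dev/null || echo "no toolkit found under /usr/local"
```

If nothing is found, the DeepStream container likely ships only the CUDA runtime, not the full toolkit, which would explain why the T4 machine builds fine while this one fails.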
The other machine has a T4 (I was able to build successfully there as well).
The current machine has a V100; only the DeepStream SDK Docker image is available on it, so the whole implementation has to run inside Docker.
I was able to run it on the older machine using that config file, the generated "libnvds_infercustomparser_tlt.so", and the newly generated TRT engine.
If you scroll up you can see the config file.
Solution: build TensorRT on the old machine (T4), but with -DGPU_ARCHS=70 (70 is the compute capability of the V100), then move the resulting libnvinfer_plugin.so to the new machine (V100). It worked.
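The workaround above can be sketched as follows. This is a hedged outline, not the exact commands used: the checkout path, library directory, and copy destination are assumptions you should match to your own TensorRT 7.2.3 OSS tree and container layout.

```shell
# On the T4 machine: cross-compile the plugin for V100 (SM 70).
cd /app/TensorRT
mkdir -p build && cd build

# GPU_ARCHS=70 targets the V100 even though this box has a T4 (SM 75);
# the other paths are assumed defaults for an x86_64 Ubuntu setup.
cmake .. \
  -DGPU_ARCHS=70 \
  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
  -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
make -j"$(nproc)" nvinfer_plugin

# Then copy the built libnvinfer_plugin.so into the V100 container,
# replacing the stock one (destination path is an assumption):
# scp libnvinfer_plugin.so* v100-host:/usr/lib/x86_64-linux-gnu/
```

This works because the plugin library only needs device code compiled for the target GPU's architecture; the host it was built on does not matter.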