Error while building TensorRT OSS 8.0.1

python-libnvinfer is related to Python 2.7, which is no longer supported, so it can be removed: sudo apt-get purge python-libnvinfer

If using Python 3.x: sudo apt-get install python3-libnvinfer-dev
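
As a quick sanity check (assuming the packages above installed cleanly), the Python 3 bindings can be verified with:

python3 -c "import tensorrt; print(tensorrt.__version__)"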

Could you please share the latest complete error log?

https://portahnos-my.sharepoint.com/:f:/g/personal/echambouleyron_porta_com_ar/EnOMK3MIofZBjBrrLZKfFRYBln4_9IZDOpwTJH6ihvbX9w?e=njL35q

I have just realized I was using the incorrect GPU_ARCHS: 75 instead of 61. Here you can find the logs after that correction.
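
To double-check the value, the compute capability can be queried directly (this assumes a driver recent enough to support the compute_cap query field); 6.1 corresponds to GPU_ARCHS=61:

nvidia-smi --query-gpu=name,compute_cap --format=csv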

I tried setting BUILD_ONNX_PYTHON:BOOL=ON, and after running cmake the following warning pops up:

Warning

CMake Warning at parsers/onnx/third_party/onnx/CMakeLists.txt:394 (find_package):
By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "pybind11",
but CMake did not find one.

Could not find a package configuration file provided by "pybind11"
(requested version 2.2) with any of the following names:

pybind11Config.cmake
pybind11-config.cmake

Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set
"pybind11_DIR" to a directory containing one of the above files. If
"pybind11" provides a separate development package or SDK, be sure it has
been installed.

Steps to reproduce

cmake … -DGPU_ARCHS="61" -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUDA_VERSION=11.3.1 -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so -DCUDART_LIB=usr/local/cuda/

It seems that the GitHub repo of TensorRT OSS does not include a CMake config file for pybind11.
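
As a possible workaround (untested on my side), a pip-installed pybind11 ships the pybind11Config.cmake that the warning is asking for:

python3 -m pip install pybind11
python3 -m pybind11 --cmakedir

The second command prints the directory containing pybind11Config.cmake, which can then be passed to cmake as -Dpybind11_DIR=<that directory> or added to CMAKE_PREFIX_PATH.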

Here is the latest complete error log.

Steps to reproduce:

cmake … -DGPU_ARCHS="61" -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUDA_VERSION=11.3.1 -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so -DCUDART_LIB=usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109

BUILD_ONNX_PYTHON:BOOL=OFF

Looks like the leading / is missing from the CUDART path in your cmake command. It should be -DCUDART_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109
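
To confirm the corrected path exists before re-running cmake, a quick check:

ls -l /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so*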

No change; I am still getting an error, as you can see here:

nvcc fatal : Unsupported gpu architecture 'compute_"62"'
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/build.make:466: plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSInference.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 18%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
nvcc fatal : Unsupported gpu architecture 'compute_"62"'
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/build.make:479: plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:315: plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2

I have a GeForce GTX 1050; is it not supported by TensorRT OSS?
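
For reference, the GTX 1050 is compute capability 6.1, so GPU_ARCHS=61 should be the right value for it. The quote characters embedded in compute_"62" make me suspect the quotes in the cmake command are reaching nvcc literally; one thing worth trying (an assumption on my part, not verified) is clearing the CMake cache and re-configuring with the value unquoted and the corrected CUDART path:

rm -rf CMakeCache.txt CMakeFiles/
cmake .. -DGPU_ARCHS=61 -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUDA_VERSION=11.3.1 -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so -DCUDART_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109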

@emiliochambu,

We recommend you post your concern on the TRT-OSS GitHub issues page to get better help with this.

Thank you.

Thanks.
