Error while building TensorRT OSS 8.0.1

@emiliochambu,

Have you tried the above, and were you able to resolve the issue?

I have not tried it yet. I am still struggling with building TensorRT OSS.

@emiliochambu,

Could you please share the complete error logs from your TensorRT OSS build?

Here you can find the complete outputs: https://portahnos-my.sharepoint.com/:f:/g/personal/echambouleyron_porta_com_ar/EpNALdo9XNpBn2eH7s-TSx0BeXkuOTTryjtqQYiyhXIBdA?e=338QJY

I tried setting the parsers off as suggested here, with no success. Here you can find the output and error logs.

This seems to be the problem:
Output

/bin/sh: 1: python: not found
make[2]: *** [parsers/onnx/third_party/onnx/CMakeFiles/gen_onnx_proto.dir/build.make:89: parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto] Error 127
make[1]: *** [CMakeFiles/Makefile2:1986: parsers/onnx/third_party/onnx/CMakeFiles/gen_onnx_proto.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs…

Hi @emiliochambu,

Thanks for sharing the logs; we are analyzing them. This looks like a dependency issue: the build is not able to find python, as the error you mentioned above shows.

Have you aliased python to python3 in your environment? Also, please let us know if you are building inside a Docker container.
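For reference, here is one way to check and fix this on an Ubuntu-like system (a sketch; the package name assumes an apt-based distribution, and sudo access is assumed):

```shell
# Two common ways to make `python` resolve to Python 3 on Ubuntu
# (shown as comments because they need sudo):
#
#   sudo apt-get install -y python-is-python3        # Ubuntu 20.04+
#   sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
#
# Note: a plain shell alias is NOT enough. Aliases are not expanded in the
# non-interactive /bin/sh shells that make spawns, which is why the build can
# still report "python: not found" even with an alias set. Verify with:
sh -c 'command -v python || echo "python not visible to /bin/sh"'
```

If the last line prints "python not visible to /bin/sh", the build's subprocesses cannot see your alias, and one of the commented fixes above is needed.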

Please make sure you have installed all the dependencies listed in the TensorRT OSS README (GitHub - NVIDIA/TensorRT).

If you still face this issue, we also suggest downgrading Python from 3.8 to 3.7.

Thank you.

Many thanks for answering.
I am not building on Docker, and I did alias python to python3 in my environment.
I've realized that python-libnvinfer and python-libnvinfer-dev are not installed on my PC. Could this be related to the python problem?

@spolisetty

I've installed python3-libnvinfer and python3-libnvinfer-dev using sudo apt-get install python3-libnvinfer-dev. The files are now located at:

  • /usr/share/doc/python3-libnvinfer
  • /usr/share/doc/python3-libnvinfer-dev

But I am still getting “python not found” with this final output:
[ 54%] Linking CXX shared library …/libnvinfer_plugin.so
[ 54%] Built target nvinfer_plugin
make: *** [Makefile:172: all] Error 2

python-libnvinfer relates to Python 2.7; it is no longer supported and can be removed: sudo apt-get purge python-libnvinfer

If using Python 3.x: sudo apt-get install python3-libnvinfer-dev

Could you please share the latest complete error log?

https://portahnos-my.sharepoint.com/:f:/g/personal/echambouleyron_porta_com_ar/EnOMK3MIofZBjBrrLZKfFRYBln4_9IZDOpwTJH6ihvbX9w?e=njL35q

I have just realized I was using the incorrect GPU_ARCHS: 75 instead of 61. Here you can find the logs after that correction.

I tried setting BUILD_ONNX_PYTHON:BOOL=ON and after running cmake a warning pops up:

Warning

CMake Warning at parsers/onnx/third_party/onnx/CMakeLists.txt:394 (find_package):
  By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "pybind11",
  but CMake did not find one.

  Could not find a package configuration file provided by "pybind11"
  (requested version 2.2) with any of the following names:

    pybind11Config.cmake
    pybind11-config.cmake

  Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set
  "pybind11_DIR" to a directory containing one of the above files. If
  "pybind11" provides a separate development package or SDK, be sure it has
  been installed.

Steps to reproduce

cmake .. -DGPU_ARCHS=“61” -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUDA_VERSION=11.3.1 -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so -DCUDART_LIB=usr/local/cuda/

It seems that the GitHub repo of TensorRT OSS does not include a CMake config file for pybind11.

Here is the latest complete error log.

Steps to reproduce:

cmake .. -DGPU_ARCHS=“61” -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUDA_VERSION=11.3.1 -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so -DCUDART_LIB=usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109

BUILD_ONNX_PYTHON:BOOL=OFF

It looks like the leading / is missing from the cudart path in your cmake command. It should be -DCUDART_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109
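For clarity, the full invocation with the leading slash restored might look like the following (paths copied from your command above; note also the straight ASCII quotes, since the typographic quotes visible in the pasted command are passed through to CMake as literal characters):

```shell
cmake .. \
  -DGPU_ARCHS="61" \
  -DTRT_LIB_DIR=/home/deep2/TensorRT-8.0.1.6/lib \
  -DCMAKE_C_COMPILER=/usr/bin/gcc \
  -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc \
  -DCUDA_VERSION=11.3.1 \
  -DCUBLASLT_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so \
  -DCUBLAS_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcublas.so \
  -DCUDART_LIB=/usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.3.109
```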

No changes. I am still getting an error as you can see here. Look at this:

nvcc fatal : Unsupported gpu architecture ‘compute_“62”’
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/build.make:466: plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSInference.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs…
[ 18%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
nvcc fatal : Unsupported gpu architecture ‘compute_“62”’
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/build.make:479: plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:315: plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2

I have a GeForce GTX 1050; is it not supported by TensorRT OSS?
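The GTX 1050 is a Pascal card with compute capability 6.1, so GPU_ARCHS=61 is appropriate and the hardware itself is supported. The nvcc error above (Unsupported gpu architecture 'compute_“62”') suggests the real problem is that typographic (curly) quotes from a copy-pasted command ended up inside the architecture value. A small demonstration of the difference (hypothetical flag values, plain POSIX shell):

```shell
# The shell strips straight double quotes before cmake ever sees the value,
# but curly quotes are ordinary characters and pass through unchanged,
# so nvcc ends up being asked for an architecture like compute_“62”.
for arg in -DGPU_ARCHS="61" -DGPU_ARCHS=“61”; do
  printf '%s\n' "$arg"
done
# prints:
#   -DGPU_ARCHS=61
#   -DGPU_ARCHS=“61”
```

Retyping the command with plain ASCII quotes (or no quotes at all, e.g. -DGPU_ARCHS=61) should avoid this.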

@emiliochambu,

We recommend posting your concern on the TRT-OSS GitHub issues page to get better help with this.

Thank you.