TAO-converter error

I'm trying to build TensorRT from source. My environment setup:

export PATH=/usr/local/cuda-11.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH
export CPATH=/usr/local/cuda-11.1/include:$CPATH
export LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LIBRARY_PATH
export TRT_LIBPATH=/app/TensorRT
export TENSORRT_LIBRARY_INFER=/usr/lib/x86_64-linux-gnu/libnvinfer.so.7
export TENSORRT_LIBRARY_INFER_PLUGIN=/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7
export TRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/

cmake .. -DCUDA_VERSION=11.1 -DGPU_ARCHS=70 \
  -DCUDNN_LIB=/usr/local/cuda-11.1/lib64 \
  -DTRT_INC_DIR=$TRT_LIBPATH/include/ -DTRT_LIB_DIR=$TRT_LIBPATH/lib/ \
  -DTRT_OUT_DIR=`pwd`/out \
  -DTENSORRT_LIBRARY_INFER=/usr/lib/x86_64-linux-gnu/ \
  -DTENSORRT_LIBRARY_INFER_PLUGIN=/usr/lib/x86_64-linux-gnu/ \
  -DTENSORRT_LIBRARY_MYELIN=/usr/lib/x86_64-linux-gnu/ \
  -DCMAKE_CUDA_ARCHITECTURES=70 \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda-11.1/bin/nvcc

but I get the error below:


Building for TensorRT version: 7.2.3, library version: 7
CMake Error at /usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.21/Modules/CMakeDetermineCUDACompiler.cmake:212 (message):
  Couldn't find CUDA library root.
Call Stack (most recent call first):
  CMakeLists.txt:46 (project)


-- Configuring incomplete, errors occurred!
See also "/app/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/app/TensorRT/build/CMakeFiles/CMakeError.log".

Below is the log:
Checking whether the CUDA compiler is NVIDIA using "" did not match "nvcc: NVIDIA \(R\) Cuda compiler driver":

Checking whether the CUDA compiler is Clang using "" did not match "(clang version)":
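
As a side note, this CMake message typically indicates an incomplete CUDA toolkit installation (runtime libraries present but nvcc or the toolkit's library layout missing). A quick sanity check, assuming the CUDA 11.1 paths from the exports above:

which nvcc                          # should print /usr/local/cuda-11.1/bin/nvcc
nvcc --version                      # should report "release 11.1"
ls /usr/local/cuda-11.1/bin/nvcc    # confirms the compiler binary actually exists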

You can refer to the steps in the YOLOv4 — TAO Toolkit 3.22.05 documentation.

But please note that there is one small issue in one step.

/usr/local/bin/cmake .. -DGPU_ARCHS=xy  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

should be

/usr/local/bin/cmake .. -DGPU_ARCHS=xy  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
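
Here xy stands for the GPU's CUDA compute capability with the dot removed, for example:

# Tesla T4:   compute capability 7.5 -> -DGPU_ARCHS=75
# Tesla V100: compute capability 7.0 -> -DGPU_ARCHS=70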

I still get the error stating "Couldn't find CUDA library root" on build. Could you help with this?

root@64877619f02d:/app/new/TensorRT/build# /usr/local/bin/cmake .. -DGPU_ARCHS=70  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
Building for TensorRT version: 7.2.2, library version: 7
-- The CXX compiler identification is GNU 7.5.0
CMake Error at /usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.21/Modules/CMakeDetermineCUDACompiler.cmake:212 (message):
  Couldn't find CUDA library root.
Call Stack (most recent call first):
  CMakeLists.txt:46 (project)


-- Configuring incomplete, errors occurred!
See also "/app/new/TensorRT/build/CMakeFiles/CMakeOutput.log".

Did you install CUDA on your dGPU device?

Is there a command to check?

$ dpkg -l | grep cuda

root@64877619f02d:/app# dpkg -l |grep cuda
ii  cuda-compat-11-1                     455.45.01-1                         amd64        CUDA Compatibility Platform
ii  cuda-cudart-11-1                     11.1.74-1                           amd64        CUDA Runtime native Libraries
ii  cuda-libraries-11-1                  11.1.1-1                            amd64        CUDA Libraries 11.1 meta-package
ii  cuda-nvrtc-11-1                      11.1.105-1                          amd64        NVRTC native runtime libraries
ii  cuda-nvtx-11-1                       11.1.74-1                           amd64        NVIDIA Tools Extension
ii  graphsurgeon-tf                      7.2.2-1+cuda11.1                    amd64        GraphSurgeon for TensorRT package
ii  libcudnn8                            8.0.5.39-1+cuda11.1                 amd64        cuDNN runtime libraries
hi  libnccl2                             2.8.3-1+cuda11.1                    amd64        NVIDIA Collective Communication Library (NCCL) Runtime
ii  libnvinfer-plugin7                   7.2.2-1+cuda11.1                    amd64        TensorRT plugin libraries
ii  libnvinfer7                          7.2.2-1+cuda11.1                    amd64        TensorRT runtime libraries
ii  libnvonnxparsers7                    7.2.2-1+cuda11.1                    amd64        TensorRT ONNX libraries
ii  libnvparsers7                        7.2.2-1+cuda11.1                    amd64        TensorRT parsers libraries
ii  python-libnvinfer                    7.2.2-1+cuda11.1                    amd64        Python bindings for TensorRT
ii  python3-libnvinfer                   7.2.2-1+cuda11.1                    amd64        Python 3 bindings for TensorRT
ii  uff-converter-tf                     7.2.2-1+cuda11.1                    amd64        UFF converter for TensorRT package
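
Note that this list only shows CUDA runtime packages; there is no cuda-nvcc-11-1 or cuda-toolkit-11-1 entry, so nvcc itself is likely missing from the container, which would explain why CMake cannot determine the CUDA compiler. Two quick checks (assuming the standard /usr/local install prefix):

dpkg -l | grep nvcc                           # empty output means the CUDA compiler package was never installed
which nvcc || ls /usr/local/cuda*/bin/nvcc    # tries to locate an nvcc binary on disk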

You are running inside a Docker container, right? Which one? Can you try outside of Docker?

Which dGPU is in the other machine?
And which dGPU is in your current machine?

The other machine has a T4 (I was able to build there successfully as well).
The current machine has a V100 (only the DeepStream SDK Docker container is available there, so all the work has to happen inside Docker).

So, can you generate a TRT engine and run inference on the T4?

I was able to. I used the generated "libnvds_infercustomparser_tlt.so" together with the newly generated TRT engine, and I was able to run it on the older machine. If you scroll up you can see the config file:

custom-lib-path=/app/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so
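
For context, the relevant part of such a config looks roughly like this; the property keys are standard Gst-nvinfer ones, while the engine path and parser function name below are illustrative placeholders:

[property]
# placeholder path to the newly generated TRT engine
model-engine-file=/app/models/yolov4_resnet18.engine
# bbox parser function exported by the custom library (illustrative name)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/app/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so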

Thank you so much. It worked.

Solution: build TensorRT OSS on the old machine (T4), but with -DGPU_ARCHS=70 (the compute capability of the V100), then copy the resulting libnvinfer_plugin.so to the new machine (V100). It worked.
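
In concrete terms, the workaround looks roughly like this (the remote host name and the exact .so version are illustrative):

# On the T4 machine: build the OSS plugin for SM 70 (the V100 architecture)
cd TensorRT/build
/usr/local/bin/cmake .. -DGPU_ARCHS=70 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

# Copy the rebuilt plugin to the V100 machine and swap it in
scp out/libnvinfer_plugin.so.7.2.2 v100-box:/tmp/
# On the V100: back up the stock library, replace it, and refresh the linker cache
cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2.bak
cp /tmp/libnvinfer_plugin.so.7.2.2 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2
ldconfig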

