Error installing x86 TensorRT OSS Plugin

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): 2 x GeForce RTX 2080 Ti
• DeepStream Version: 5.1
• TensorRT Version: TensorRT 7.2.1 for Linux and CUDA 11.1
• NVIDIA GPU Driver Version (valid for GPU only): 460.73.01
• Issue Type (questions, new requirements, bugs): question

I’ve run deepstream-test1 to verify that I installed DS 5.1 correctly.

I followed this guide to install the x86 TensorRT OSS plugin, but I got the following error:

$HOME/install/bin/cmake .. -DGPU_ARCHS=75  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

Building for TensorRT version: 7.0.0.1, library version: 7.0.0
-- Targeting TRT Platform: x86_64
-- GPU_ARCHS defined as 75. Generating CUDA code for SM 75
-- CUDA version set to 10.2
-- cuDNN version set to 7.6
-- Protobuf version set to 3.0.0
-- Using libprotobuf /home/minh/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /home/minh/TensorRT/build/parsers/caffe
Generated: /home/minh/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /home/minh/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
-- 
-- ******** Summary ********
--   CMake version         : 3.19.4
--   CMake command         : /home/minh/install/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 7.5.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/home/minh/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/local
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.6.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /home/minh/TensorRT/include
-- Find TensorRT libs at /usr/lib/x86_64-linux-gnu/libnvinfer.so;/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so;/usr/lib/x86_64-linux-gnu/libmyelin.so
-- Adding new sample: sample_char_rnn
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Configuring done
  CMake Warning (dev) in plugin/CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "nvinfer_plugin".
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) in plugin/CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "nvinfer_plugin_static".
  This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done
-- Build files have been written to: /home/minh/TensorRT/build
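(As a side note, I assume the CMP0104 warnings above just mean this newer CMake wants CMAKE_CUDA_ARCHITECTURES set explicitly; re-running the configure step with that variable added, for example as below, should silence them, but I don't think they are related to the actual failure.)

$HOME/install/bin/cmake .. -DGPU_ARCHS=75 -DCMAKE_CUDA_ARCHITECTURES=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out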

$make nvinfer_plugin -j$(nproc)

[  6%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cu.o
In file included from /usr/local/cuda/include/cub/util_arch.cuh:36:0,
                 from /usr/local/cuda/include/cub/config.cuh:35,
                 from /usr/local/cuda/include/cub/cub.cuh:37,
                 from /home/minh/TensorRT/plugin/common/bertCommon.h:27,
                 from /home/minh/TensorRT/plugin/fcPlugin/fcPlugin.h:20,
                 from /home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu:18:
/usr/local/cuda/include/cub/util_cpp_dialect.cuh:129:13: warning: CUB requires C++14. Please pass -std=c++14 to your compiler. Define CUB_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.
   CUB_COMPILER_DEPRECATION(C++14, pass -std=c++14 to your compiler);
             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                                                                         
In file included from /usr/local/cuda/include/thrust/detail/config/config.h:27:0,
                 from /usr/local/cuda/include/thrust/detail/config.h:23,
                 from /usr/local/cuda/include/thrust/system/cuda/detail/core/triple_chevron_launch.h:29,
                 from /usr/local/cuda/include/cub/device/dispatch/dispatch_histogram.cuh:48,
                 from /usr/local/cuda/include/cub/device/device_histogram.cuh:41,
                 from /usr/local/cuda/include/cub/cub.cuh:52,
                 from /home/minh/TensorRT/plugin/common/bertCommon.h:27,
                 from /home/minh/TensorRT/plugin/fcPlugin/fcPlugin.h:20,
                 from /home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu:18:
/usr/local/cuda/include/thrust/detail/config/cpp_dialect.h:104:13: warning: Thrust requires C++14. Please pass -std=c++14 to your compiler. Define THRUST_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.
   THRUST_COMPILER_DEPRECATION(C++14, pass -std=c++14 to your compiler);
             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                                                                            
/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu(165): error: argument of type "cudaDataType_t" is incompatible with parameter of type "cublasComputeType_t"

/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu(165): error: too few arguments in function call

/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu(177): error: argument of type "cudaDataType_t" is incompatible with parameter of type "cublasComputeType_t"

/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu(193): error: argument of type "cudaDataType_t" is incompatible with parameter of type "cublasComputeType_t"

/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.h(230): error: argument of type "cudaDataType_t" is incompatible with parameter of type "cublasComputeType_t"

/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.h(230): error: too few arguments in function call

6 errors detected in the compilation of "/home/minh/TensorRT/plugin/fcPlugin/fcPlugin.cu".
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:731: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cu.o' failed
make[3]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cu.o] Error 1
CMakeFiles/Makefile2:1383: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/all' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
CMakeFiles/Makefile2:1390: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/rule' failed
make[1]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/rule] Error 2
Makefile:244: recipe for target 'nvinfer_plugin' failed
make: *** [nvinfer_plugin] Error 2
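My guess (and it is only a guess) is that the compiler errors come from a CUDA version mismatch: the release/7.0 plugin code calls the cuBLASLt API with the CUDA 10.x signature, while /usr/local/cuda on this machine appears to be a CUDA 11.x toolkit, where cublasLtMatmulDescCreate gained an extra cublasComputeType_t parameter. A minimal sketch of the difference, assuming the CUDA 11 headers:

// Hypothetical illustration of the signature change I suspect is behind the
// "too few arguments" / "cudaDataType_t is incompatible with
// cublasComputeType_t" errors in fcPlugin.cu.
#include <cublasLt.h>

void createMatmulDesc(cublasLtMatmulDesc_t* desc)
{
    // CUDA 10.2 style call (what I believe the 7.0 branch still uses):
    //   cublasLtMatmulDescCreate(desc, CUDA_R_32F);
    //
    // CUDA 11.x signature: the compute type comes first as a cublasComputeType_t,
    // followed by the scale type, so the old two-argument call no longer compiles.
    cublasStatus_t status = cublasLtMatmulDescCreate(desc, CUBLAS_COMPUTE_32F, CUDA_R_32F);
    (void)status;
}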

Please advise on how to solve this error.

DS 5.1 uses TRT 7.2, so why do you need to build TRT 7.0?

@bcao

That’s what is shown in the TLT documentation.

When I was working with DS 5.0, even though DS 5.0 comes with TensorRT 7.1, the documentation used the TensorRT 7.0 OSS build to replace TensorRT 7.1; using the TensorRT 7.1 OSS build instead actually gave me an error when trying to build an engine from the yolov3 etlt file. That’s why I tried to build TRT 7.0 OSS here.
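(By “replace” I mean swapping the freshly built plugin library over the stock one, roughly as below; the exact file names and version suffixes differ per install, so treat this as a sketch rather than the exact steps from the guide.)

$ sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak   # back up the stock plugin
$ sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y     # overwrite with the OSS build
$ sudo ldconfig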

There is a known issue with yolov3/yolov4 on DS 5.1/TRT 7.2, so you can either wait for the new yolov3/yolov4 support in the upcoming TLT release or use DS 5.0/TRT 7.0.


Is this still the case?