CMake error when installing TensorRT OSS for tao-converter

Hi guys, I am a newbie with TAO. I am using a Jetson Nano with JetPack 4.6, CUDA 10.2, and DeepStream 6.0. I am trying the sample app Pre-trained models - License Plate Detection (LPDNet) and Recognition (LPRNet). In the section Convert the Models to TRT Engine, I downloaded tao-converter. Following the steps, I then downloaded CMake and TensorRT OSS from here, but in step 3 I got the error below.

I thought the problem was my TensorRT version, which is 8.0.1. My question: is JetPack 4.6 not compatible with TensorRT 7.2, and how do I solve this problem?

Thank you so much.

nano@nano-desktop:~/cmake-3.13.5/TensorRT/build$ /usr/local/bin/cmake .. -DGPU_ARCHS=53  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
Building for TensorRT version: 7.2.2, library version: 7
-- Targeting TRT Platform: x86_64
-- CUDA version set to 11.1
-- cuDNN version set to 8.0
-- Protobuf version set to 3.0.0
-- Setting up another Protobuf build for cross compilation targeting aarch64-Linux
-- Using libprotobuf /home/nano/cmake-3.13.5/TensorRT/build/third_party.protobuf_aarch64/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/aarch64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/aarch64-linux-gnu/libnvparsers.so
-- ==========================================================================================
-- GPU_ARCHS defined as 53. Generating CUDA code for SM 53
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /home/nano/cmake-3.13.5/TensorRT/build/parsers/caffe
Generated: /home/nano/cmake-3.13.5/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /home/nano/cmake-3.13.5/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
-- 
-- ******** Summary ********
--   CMake version         : 3.13.5
--   CMake command         : /usr/local/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 7.5.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/home/nano/cmake-3.13.5/TensorRT/build/third_party.protobuf;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/lib/aarch64-linux-gnu/..
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.6.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /home/nano/cmake-3.13.5/TensorRT/include
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so;TENSORRT_LIBRARY_MYELIN-NOTFOUND
-- Could NOT find TENSORRT (missing: TENSORRT_LIBRARY) 
ERRORCannot find TensorRT library.
-- Adding new sample: sample_algorithm_selector
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_char_rnn
--     - Parsers Used: uff;caffe;onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist_coord_conv_ac
--     - Parsers Used: onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
TENSORRT_LIBRARY_MYELIN
    linked by target "nvonnxparser_static" in directory /home/nano/cmake-3.13.5/TensorRT/parsers/onnx
    linked by target "nvonnxparser" in directory /home/nano/cmake-3.13.5/TensorRT/parsers/onnx

-- Configuring incomplete, errors occurred!
See also "/home/nano/cmake-3.13.5/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/home/nano/cmake-3.13.5/TensorRT/build/CMakeFiles/CMakeError.log".

Can you try the release/8.0 branch when you download TRT OSS?
$ git clone -b release/8.0 https://github.com/nvidia/TensorRT TensorRT
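A minimal sketch of the steps after the clone, assuming the usual TensorRT OSS layout (the submodules are needed before running cmake):

$ cd TensorRT
$ git submodule update --init --recursive    # pulls in the onnx parser and protobuf submodules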

I downloaded TensorRT OSS as you said. The error was gone, and the output started like this:

Building for TensorRT version: 7.2.2, library version: 7 …

When I ran
/usr/local/bin/cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
everything was successful.
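The step between the configuration above and replacing the plugin is the actual build; a minimal sketch, assuming the make target from the TensorRT OSS instructions (run inside the build directory):

make nvinfer_plugin -j$(nproc)    # builds the open-source plugin library into `pwd`/out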

After that, I followed this section of the guide:

sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak   // backup original libnvinfer_plugin.so.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n  /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig

I changed libnvinfer_plugin.so.7.x.y to libnvinfer_plugin.so.7.2.2.
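To double-check that the replacement was picked up, a quick look (illustrative commands, not from the guide):

ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
ldconfig -p | grep libnvinfer_plugin    # shows which plugin libraries the linker resolves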
Then, when I ran this command:

./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine

I got this error:
./tao-converter: error while loading shared libraries: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory
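One generic way to see which shared libraries the binary expects and which are missing (not specific to tao-converter):

ldd ./tao-converter | grep nvinfer    # unresolved entries show up as 'not found'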

Why does it always try to build TensorRT 7.2.2? Didn't we download TensorRT 8.0?
Thank you.

Please download the corresponding version of tao-converter.

More,

  1. For running LPDNet or LPRNet, it is not necessary to build TensorRT OSS to build libnvinfer_plugin.so.
  2. I’m a little confused. Previously you said "my TensorRT version which is 8.0.1", but your device seems to have TRT 7 installed.

I downloaded tao-converter from here for JetPack 4.6. Is that wrong?

  1. I followed this project, so I did not know that TensorRT OSS is not necessary to build libnvinfer_plugin.so. I just followed the steps there, and that is how I got here.
    Basically, my steps were:
    1 - DeepStream LPD & LPR Sample Project
    2 - Convert the Models to TRT Engine
    3 - Installing TAO Converter
    4 - TensorRT OSS on Jetson

So how do I run this project without TensorRT OSS?

For question 2: I checked my TensorRT version and it is indeed 8.0.1. The TensorRT GitHub page says: NOTE: The latest JetPack SDK v4.6 only supports TensorRT 8.0.1.
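For reference, the usual way to check this on JetPack (standard package names; adjust if yours differ):

dpkg -l | grep -i tensorrt
dpkg -l | grep nvinfer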

So I do not have TensorRT 7.2.2, yet the build still targets version 7.2.2… I am so confused, sorry :(

Thank you so much.

So, your Jetson Nano has JetPack 4.6 installed, and its TensorRT version is 8.0.1.
Make sure you download the correct version of tao-converter: https://developer.nvidia.com/tao-converter-jp4.6
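A rough sketch of preparing it once downloaded, assuming the download is a zip archive containing the tao-converter binary (file names here are illustrative):

unzip tao-converter-jp4.6.zip    # illustrative archive name
chmod +x tao-converter
./tao-converter -h               # quick sanity check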

Also, for running LPD and LPR, the TRT OSS plugin (libnvinfer_plugin.so) is not needed.

For running them, please take a look at the LPRNet — TAO Toolkit 3.22.05 documentation, then follow GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.
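A minimal sketch of getting the sample app, using the repository linked above and assuming its README build steps (model download is also described in that README):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
cd deepstream_lpr_app
# download the LPD/LPR models as described in the README, then build
make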
