How to "install" tensorRT when build from source

Description

I need to build TensorRT with custom plugins. I followed the steps described in GitHub - NVIDIA/TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.

After I run the last command in the instructions:
make -j$(nproc)
the out folder contains the following files:

libnvcaffeparser.so        libnvinfer_plugin.so.7.1.3  sample_char_rnn         sample_mlp            sample_onnx_mnist                sample_uff_maskRCNN
libnvcaffeparser.so.7      libnvinfer_plugin_static.a  sample_dynamic_reshape  sample_mnist          sample_onnx_mnist_coord_conv_ac  sample_uff_mnist
libnvcaffeparser.so.7.1.3  libnvonnxparser.so          sample_fasterRCNN       sample_mnist_api      sample_plugin                    sample_uff_plugin_v2_ext
libnvcaffeparser_static.a  libnvonnxparser.so.7        sample_googlenet        sample_movielens      sample_reformat_free_io          sample_uff_ssd
libnvinfer_plugin.so       libnvonnxparser.so.7.0.0    sample_int8             sample_movielens_mps  sample_ssd                       trtexec
libnvinfer_plugin.so.7     sample_algorithm_selector   sample_int8_api         sample_nmt            sample_uff_fasterRCNN

And I do not know what to do with these files next.

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request you to try the ONNX parser instead.
Please check the link below for the same.

Thanks!

Maybe my question was unclear, sorry for that. I mean that I can install TensorRT for Python by following the official instructions:

version="8.x.x-1+cudax.x"
sudo apt-get install libnvinfer8=${version} libnvonnxparsers8=${version} libnvparsers8=${version} libnvinfer-plugin8=${version} libnvinfer-dev=${version} libnvonnxparsers-dev=${version} libnvparsers-dev=${version} libnvinfer-plugin-dev=${version} python3-libnvinfer=${version}
sudo apt-mark hold libnvinfer8 libnvonnxparsers8 libnvparsers8 libnvinfer-plugin8 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python3-libnvinfer

But I want to be able to work with TensorRT in Python together with my custom plugin.
So my question is: how do I do this?

It’s quite easy to “install” a custom plugin if you have registered it. The steps are the following:

  1. Install TensorRT
sudo apt-get update && \
   sudo apt-get install -y libnvinfer7=7.1.3-1+cuda10.2 libnvonnxparsers7=7.1.3-1+cuda10.2 libnvparsers7=7.1.3-1+cuda10.2 libnvinfer-plugin7=7.1.3-1+cuda10.2 libnvinfer-dev=7.1.3-1+cuda10.2 libnvonnxparsers-dev=7.1.3-1+cuda10.2 libnvparsers-dev=7.1.3-1+cuda10.2 libnvinfer-plugin-dev=7.1.3-1+cuda10.2 python3-libnvinfer=7.1.3-1+cuda10.2 && \
sudo apt-mark hold libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python3-libnvinfer

Note: I installed TensorRT v7.1.3 with CUDA 10.2. If you want to install another version, change the package versions accordingly, but be careful that the TensorRT and CUDA versions match: not every TensorRT version is built against every CUDA version.
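To see which package versions are actually available for your configured CUDA repository, you can query apt before installing (a quick check, assuming the NVIDIA apt repositories are already set up on your machine):

apt-cache policy libnvinfer7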

  2. Build the library libnvinfer_plugin.so.x.x.x with your custom plugin as described in GitHub - NVIDIA/TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators. A sketch of the build commands follows the note below.

Note: x.x.x is the library version; in my case it is 7.1.3.
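For reference, a minimal sketch of the out-of-source build, assuming the TensorRT 7.1 OSS sources and that $TRT_LIBPATH points at your installed TensorRT libraries (the branch name and cmake flags follow the repository's README for that release; adjust them to your setup):

git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git && cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)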

  3. Delete the existing libraries in /usr/lib/x86_64-linux-gnu if you have the x86_64 architecture, or in /usr/lib/aarch64-linux-gnu for arm64:
    libnvinfer_plugin.so.7.1.3
    libnvinfer_plugin.so.7
    libnvinfer_plugin.so

Again, the file names depend on the TensorRT version.
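For example, on x86_64 with TensorRT 7.1.3 (the path and version numbers here are assumptions; adjust them to your system):

cd /usr/lib/x86_64-linux-gnu
sudo rm -f libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7 libnvinfer_plugin.so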

  4. Copy the library libnvinfer_plugin.so.7.1.3 to /usr/lib/x86_64-linux-gnu if you have the x86_64 architecture, or to /usr/lib/aarch64-linux-gnu for arm64.
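Assuming the build from step 2 placed its output in build/out (here <TensorRT-OSS> is a placeholder for wherever you cloned the repository):

sudo cp <TensorRT-OSS>/build/out/libnvinfer_plugin.so.7.1.3 /usr/lib/x86_64-linux-gnu/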

  5. Make symlinks for the libraries (run these inside the target directory, e.g. /usr/lib/x86_64-linux-gnu):

sudo ln -s libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7
sudo ln -s libnvinfer_plugin.so.7 libnvinfer_plugin.so
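After refreshing the linker cache with sudo ldconfig, you can check from Python that the rebuilt plugin library is being picked up. A minimal sketch, assuming the TensorRT 7.1 Python bindings (python3-libnvinfer) are installed; the name your custom plugin shows up under depends on what its creator registered:

python3 -c "import tensorrt as trt; trt.init_libnvinfer_plugins(trt.Logger(trt.Logger.WARNING), ''); print([c.name for c in trt.get_plugin_registry().plugin_creator_list])"

If your plugin appears in the printed list, it is available from Python, both to the ONNX parser and via network.add_plugin_v2.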