Understanding TensorRT compilation process

This is a general question and will not require any hardware/environment specifications.

The process I used to run a sample TensorRT application (e.g., the FasterRCNN sample) is as follows:

  • Install TensorRT from the .deb package, along with dependencies such as CUDA/cuDNN
  • Clone TensorRT repo
  • Download and untar the TensorRT binaries (the .tar file) from the same site that provides the .deb package.
  • Compile the samples against these binaries using the instructions provided in the GitHub repo.

I don’t understand the role of the resources used in the first three steps, i.e., the .deb package, the repo, and the .tar file. Why are all of them required?
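For reference, the steps above look roughly like this on the host (the filenames, version numbers, and the cmake variable below are illustrative placeholders, not the exact ones I used):

```shell
# 1. Install TensorRT and its dependencies from the .deb local-repo package
#    (filename is a placeholder for the actual release you download)
sudo dpkg -i nv-tensorrt-repo-ubuntuxx04-cudax.x-trt6.x.x.x-ga-yyyymmdd_1-1_amd64.deb
sudo apt-get update
sudo apt-get install tensorrt

# 2. Clone the TensorRT OSS repo
git clone https://github.com/NVIDIA/TensorRT.git

# 3. Download and untar the GA binaries from the same download site
tar -xzvf TensorRT-6.x.x.x.Ubuntu-xx.04.x86_64-gnu.cuda-x.x.cudnn-x.x.tar.gz

# 4. Build the OSS components against the untarred binaries, per the
#    repo README (the exact cmake variable name may vary by branch)
cd TensorRT && mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$(pwd)/../../TensorRT-6.x.x.x/lib
make -j$(nproc)
```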

Hi,

Please note that if you are using a Jetson device, the TensorRT package is installed as one of the JetPack SDK components.
You don’t need to install it manually.

Thanks.

I am asking in reference to host devices.

Hi,

Usually, you can install TensorRT on the host with a Debian package directly.
Please check the detailed instructions in our document:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian

If you have further questions about host TensorRT, please file a new topic on the board below for better help:

Thanks.

Thanks for the response, but that’s not my query; let me emphasize it:

I hope I am being clear here.

Hi,

If you install it through the .deb package, only step 1 is required.

Thanks.

The GitHub README asks for cloning the repo as well, for the sample applications.

Hi,

The OSS GitHub contains up-to-date plugin support.
If a sample requires the latest plugin implementation, you will need to update the library manually.

However, FasterRCNN was added a while ago.
The required plugin implementation should be available in the default TensorRT package already.
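(For reference, if a sample ever does need a newer plugin than the installed package ships, the usual procedure per the OSS README is to build libnvinfer_plugin from the repo and swap it in. The paths and the .so version suffix below are illustrative and depend on your TensorRT release:)

```shell
# Build just the plugin library from the OSS repo
# (assumes cmake was already configured as described in the README)
cd TensorRT/build
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin library and install the rebuilt one
# (the exact .so version suffix depends on your TensorRT release)
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.6.x.x \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.6.x.x.bak
sudo cp out/libnvinfer_plugin.so.6.x.x /usr/lib/x86_64-linux-gnu/
sudo ldconfig
```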

Thanks.

So, I can compile and run the samples just by downloading and installing the TensorRT .deb package? No need for the GitHub repo or the .tar file?

Hi,

For Jetson, the package is included in JetPack.
Usually, you can find it when setting up Xavier with SDK Manager.
Or just run the following command:

$ sudo apt update
$ sudo apt install nvidia-tensorrt

After the installation, you can find the example in the folder below:

/usr/src/tensorrt/samples/sampleUffFasterRCNN/
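The bundled samples are typically built in place with make; a rough sketch (the exact executable name and data directory may differ by release):

```shell
# Build the sample in place (sudo because /usr/src is root-owned)
cd /usr/src/tensorrt/samples/sampleUffFasterRCNN
sudo make

# Built binaries land in /usr/src/tensorrt/bin by default;
# binary name and data path here are assumptions, check your release
/usr/src/tensorrt/bin/sample_uff_fasterRCNN -d /usr/src/tensorrt/data
```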

Thanks.

I have version 6 or 5 of TensorRT due to my old JetPack version, and it is missing this sample. Can I update TensorRT without having to reflash the Jetson again?

The nvidia-tensorrt package is not found.

Instead, I tried tensorrt:

$ sudo apt-get install tensorrt
Reading package lists... Done
Building dependency tree       
Reading state information... Done
tensorrt is already the newest version (6.0.1.10-1+cuda10.0).
The following packages were automatically installed and are no longer required:
  libcharls1 libgdcm2.8 libgl2ps1.4 libhdf5-openmpi-100 liblept5 libllvm9
  libnetcdf-c++4 libsocket++1 libtesseract4 libvtk6.3
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 337 not upgraded.

On checking apt-cache policy,

$ apt-cache policy tensorrt
tensorrt:
  Installed: 6.0.1.10-1+cuda10.0
  Candidate: 6.0.1.10-1+cuda10.0
  Version table:
 *** 6.0.1.10-1+cuda10.0 500
        500 https://repo.download.nvidia.com/jetson/t194 r32/main arm64 Packages
        100 /var/lib/dpkg/status

I am still unable to update TensorRT as a Debian package.

Hi,

Would you mind upgrading your device to r32.6 first?
You can find our OTA feature and the detailed commands in the document below:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/updating_jetson_and_host.html#

See the section “To update to a new point release”.
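The OTA update described in that document amounts to pointing apt at the newer L4T release repo and doing a full upgrade. A rough sketch (the release tag and board identifier below are assumptions; check the doc for the exact values for your device):

```shell
# Edit /etc/apt/sources.list.d/nvidia-l4t-apt-source.list so its entries
# point at the target release, e.g. (t194 = Xavier; adjust for your board):
#   deb https://repo.download.nvidia.com/jetson/common r32.6 main
#   deb https://repo.download.nvidia.com/jetson/t194 r32.6 main

# Then pull the newer L4T/TensorRT packages over the air
sudo apt update
sudo apt dist-upgrade
```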

Thanks.