Understanding TensorRT compilation process

This is a general question and will not require any hardware/environment specifications.

The process I used to run a sample TensorRT application (e.g. the FasterRCNN sample) is as follows:

  • Install TensorRT from the .deb package, along with dependencies such as CUDA/cuDNN
  • Clone the TensorRT GitHub repo
  • Download and untar the TensorRT binaries from the same site that provides the .deb
  • Build the repo's sources against these binaries (from the .tar file) using the instructions provided in the GitHub repo
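For reference, the four steps above can be sketched as shell commands. This is a sketch only: the package filenames, version numbers, and paths below are assumptions and will differ depending on the TensorRT release and platform you download.

```shell
# 1. Install the TensorRT .deb package and its dependencies (CUDA/cuDNN).
#    The local-repo filename is an example; use the one you actually downloaded.
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
sudo apt-get update
sudo apt-get install tensorrt

# 2. Clone the open-source TensorRT repo (samples, plugins, parsers).
#    The branch name here is an assumed example matching the installed version.
git clone -b release/8.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive

# 3. Untar the TensorRT binary archive (prebuilt libraries and headers).
tar -xzvf TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.tar.gz

# 4. Build the open-source components against the untarred libraries,
#    following the repo's README (paths below are placeholders).
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$HOME/TensorRT-8.0.1.6/lib -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
```

The built samples (including the FasterRCNN sample) then end up under the build output directory.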

I don’t understand the roles of the resources used in the first three steps, i.e., the .deb package, the repo, and the .tar file. Why are all of them required?

Actually, this is not a forum topic for TLT; it is a TensorRT topic.
Please seek help in the TensorRT forum or its user guide:
https://docs.nvidia.com/deeplearning/tensorrt/archives/index.html
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-801/install-guide/index.html