onnx-tensorrt, TensorRT, and TensorRT OSS

There are three components:

  1. The proprietary TensorRT library
  2. onnx/onnx-tensorrt on GitHub: "ONNX-TensorRT: TensorRT backend for ONNX"
  3. NVIDIA/TensorRT on GitHub (TensorRT OSS): "TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators."

If I have an ONNX model and want to optimize it with TensorRT, am I supposed to run onnx-tensorrt's `onnx2trt model.onnx -o model.plan` and then run inference on that model.plan?
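
For reference, a minimal sketch of that second step: deserializing model.plan and running inference through the TensorRT Python runtime. The input/output shapes and dtypes below are placeholders, and the API names follow TensorRT 8.x (newer releases changed the binding API):

```python
# Sketch: deserialize model.plan and run inference with the TensorRT
# Python runtime. Assumes a single input and a single output; the
# shapes and dtypes here are placeholders, not from any real model.
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

with open("model.plan", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Placeholder host buffers; match these to your model's actual I/O.
h_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
h_output = np.empty((1, 1000), dtype=np.float32)

# Allocate device memory and copy the input to the GPU.
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
cuda.memcpy_htod(d_input, h_input)

# Bindings are device pointers in engine binding order (input, output here).
context.execute_v2([int(d_input), int(d_output)])

# Copy the result back to the host.
cuda.memcpy_dtoh(h_output, d_output)
print(h_output.argmax())
```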

I am a little confused: what is the purpose of this OSS part of TensorRT, then? Where is it used?

Hi,

Please refer to the following documentation for more details on TensorRT OSS.

NVIDIA is open sourcing parsers and plugins in TensorRT so that the deep learning community can customize and extend these components to take advantage of powerful TensorRT optimizations for your apps.
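
Concretely, the OSS repository holds the sources for those parsers (the ONNX parser is built from onnx/onnx-tensorrt) and the open-source plugins, which link against the proprietary core library. So when you build an engine through the standard TensorRT API rather than the onnx2trt tool, the parser you invoke is the OSS one. A rough sketch of that path, assuming the TensorRT 8.x Python API:

```python
# Sketch: build an engine from an ONNX file via the TensorRT API.
# The OnnxParser used here is built from the open-source onnx-tensorrt
# sources shipped in TensorRT OSS. API names follow TensorRT 8.x.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse model.onnx")

config = builder.create_builder_config()
# 1 GiB of workspace for tactic selection; tune for your GPU.
config.max_workspace_size = 1 << 30

serialized = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized)
```

The resulting model.plan can then be deserialized for inference just like the output of onnx2trt.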

Thank you.