Help with C++ API in TensorRT


A clear and concise description of the bug or issue.
I am able to compile and run all the C++ API samples provided by default in TensorRT, but I can't figure out how to get my own C++ program to run in the TensorRT container. I have created a Git repo and cloned it into the TensorRT folder (inside the container).

I am a beginner at C++, but I have been assigned to do the inference using the C++ API rather than Python.

A brief overview of my code:
The model is trained on the CIFAR-10 dataset and converted to ONNX.
Now I want to parse the ONNX model to build a TensorRT engine for inference.
How do I create and compile the C++ files, pass in the data and the ONNX model, and get the output?
Kindly guide me on how I can get this done.

GitHub link for the code:



Environment

TensorRT Version: 7.0
GPU Type: NVIDIA K80
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @yashkhokarale,

You can pick a better-suited example from the list of C++ samples here.
Each sample has an associated GitHub link that you can follow step by step to get started.
You can place your custom files in the same sample directory and run them the same way you are running the sample files.
They will hopefully answer all your questions.
You can also refer to the link below on using the C++ API in TensorRT:
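As a rough starting point, a minimal sketch of parsing an ONNX model and building an engine with the TensorRT 7 C++ API could look like the following (the model path is a placeholder, error handling is kept minimal, and the workspace size is just an example value):

```cpp
#include <iostream>

#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT API
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // ONNX models require an explicit-batch network definition
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);

    // Parse the ONNX file into the network definition
    auto parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("model.onnx",  // placeholder path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    // Build the engine; the workspace size is a tunable upper bound
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB
    auto engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine) {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    // The engine is now ready: create an IExecutionContext, copy your
    // input to a device buffer, and call executeV2() for inference.
    engine->destroy();
    config->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

Inside the container you can typically compile this with something like `g++ main.cpp -o trt_infer -lnvinfer -lnvonnxparser` (exact include/library paths depend on the image).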


Hi @AakankshaS,
Thank you for your valuable feedback. I found the root of the problem: it's my weak understanding of C++ concepts and file structures like CMakeLists.txt. Nonetheless, I am able to work through it slowly.
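For anyone stuck on the same CMakeLists.txt point, a minimal sketch for building a TensorRT C++ program might look like this (the target name, source file, and include path are assumptions; the include path shown is where `NvInfer.h` often lives in the NGC container):

```cmake
cmake_minimum_required(VERSION 3.10)
project(trt_onnx_inference LANGUAGES CXX)

# CUDA is needed for the runtime libraries TensorRT links against
find_package(CUDA REQUIRED)

add_executable(trt_infer main.cpp)
target_include_directories(trt_infer PRIVATE
    ${CUDA_INCLUDE_DIRS}
    /usr/include/x86_64-linux-gnu)  # adjust to your NvInfer.h location
target_link_libraries(trt_infer
    nvinfer nvonnxparser ${CUDA_LIBRARIES})
set_property(TARGET trt_infer PROPERTY CXX_STANDARD 11)
```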