I am able to compile and run all the C++ API samples provided by default with TensorRT, but I can't figure out how to build and run my own custom C++ program inside the TensorRT container. I have created a Git repo and cloned it into the TensorRT folder inside the container.
I am a beginner at C++, but I have been assigned to do the inference using the C++ API rather than Python.
A brief overview of my code:
The model is trained on the CIFAR-10 dataset and converted to ONNX.
Now I want to parse the ONNX model to build a TensorRT engine for inference.
How do I create and compile the C++ files, pass in the input data and the ONNX model, and get the output?
Kindly guide me on how to get this done.
GitHub link for the code.
TensorRT Version: 7.0
GPU Type: NVIDIA K80
NVIDIA Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered