TensorRT on SDK 5.2.0


From a forum discussion I learned that Python TensorRT is not supported, and that we need to cross-compile TensorRT applications using the C++ API. Is this the approach to follow?

With trtexec, is it possible to feed an input to the network and dump the output tensor to a file?
I have tried with random input, but the output is only printed to the console. Is it possible to supply an input (FP32) and dump the output tensor to a file?
Also, my network expects FP32 input. If I pass a raw 32-bit buffer, will it be handled internally?
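For reference, newer trtexec builds expose flags to load input data from a file and export the output tensors; whether they are available depends on your TensorRT version, so treat the following as a sketch (the binding name `input` and the file names are placeholders you would replace with your own):

```shell
# Feed a raw FP32 buffer as network input and dump the output tensor.
# input.bin must be a raw FP32 file whose size matches the input binding,
# and "input" must match the network's input tensor name.
trtexec --onnx=model.onnx \
        --loadInputs=input:input.bin \
        --dumpOutput \
        --exportOutput=output.json
```

`--dumpOutput` prints the output values to the console, while `--exportOutput` writes them to a JSON file; if your trtexec build does not recognize these flags, the fallback is a small C++ harness built on the TensorRT API.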

Also, I am trying to modify the sampleONNXMNIST example for my requirement. Is that a correct approach?


TensorRT Version: 6.3
GPU Type:
Nvidia Driver Version: 5.2.0
CUDA Version: 10.2
CUDNN Version: 7
Operating System + Version: Linux
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Please refer to the link below for the sample guide.
Refer to the installation steps from the link in case you are missing anything.
However, the suggested approach is to use TRT NGC containers to avoid any system-dependency-related issues.

To run the Python samples, make sure the TRT Python packages are installed when using the NGC container.

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.


Yes, we are using our custom network.
Please let us know whether there is any tool or sample application available in C++/Python to validate a network using TRT.


I have modified the sampleONNXMNIST example to make it work for our network, and it is working.
I will create a new ticket in case of any issues. Thanks.