Description
From a forum discussion I came to know that Python TensorRT is not supported and that we need to cross-compile using the TensorRT C++ API. Is this the approach to be followed?
With trtexec, is it possible to feed an input to the network and dump the output tensor to a file?
I have tried with random input, but the output is only printed to the console. Is it possible to provide my own FP32 input and have the output tensor written to a file?
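Something like the following invocation is what I am hoping for. (The --loadInputs and --exportOutput flags are taken from newer trtexec builds; I am not sure whether the TensorRT 6.3 trtexec supports them, so please correct me if not. model.onnx, input, and the file names are placeholders.)

    # Hypothetical invocation -- check `trtexec --help` on TensorRT 6.3 first.
    trtexec --onnx=model.onnx \
            --loadInputs=input:input_fp32.bin \
            --exportOutput=output.json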
Also, my network takes FP32 input. If I pass a raw 32-bit float buffer as input, will it be handled internally?
I am also trying to modify the sampleOnnxMNIST example for my requirement. Is that a correct approach?
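To make the questions concrete, here is a minimal C++ sketch of what I mean by passing a raw FP32 buffer and dumping the output. It assumes an already-deserialized explicit-batch engine with one FP32 input binding (index 0) and one FP32 output binding (index 1); inferFp32, the sizes, and the output path are placeholders of mine, not from the sample:

    #include <NvInfer.h>
    #include <cuda_runtime_api.h>
    #include <fstream>
    #include <vector>

    // Minimal sketch: run one FP32 inference and dump the output tensor to a file.
    // Assumes the engine is already deserialized and has exactly two bindings:
    // binding 0 = FP32 input, binding 1 = FP32 output (verify with getBindingIndex).
    void inferFp32(nvinfer1::ICudaEngine* engine,
                   const std::vector<float>& hostInput,
                   size_t outputCount,
                   const char* outPath)
    {
        nvinfer1::IExecutionContext* context = engine->createExecutionContext();

        void* bindings[2]{};
        cudaMalloc(&bindings[0], hostInput.size() * sizeof(float));
        cudaMalloc(&bindings[1], outputCount * sizeof(float));

        // The raw FP32 host buffer is copied to the device as-is; TensorRT does
        // not convert it, so it must already match the binding's dtype and shape.
        cudaMemcpy(bindings[0], hostInput.data(),
                   hostInput.size() * sizeof(float), cudaMemcpyHostToDevice);

        cudaStream_t stream;
        cudaStreamCreate(&stream);
        context->enqueueV2(bindings, stream, nullptr);  // explicit-batch inference
        cudaStreamSynchronize(stream);

        std::vector<float> hostOutput(outputCount);
        cudaMemcpy(hostOutput.data(), bindings[1],
                   outputCount * sizeof(float), cudaMemcpyDeviceToHost);

        // Dump the output tensor to a raw binary file.
        std::ofstream out(outPath, std::ios::binary);
        out.write(reinterpret_cast<const char*>(hostOutput.data()),
                  hostOutput.size() * sizeof(float));

        cudaStreamDestroy(stream);
        cudaFree(bindings[0]);
        cudaFree(bindings[1]);
        context->destroy();
    }

If this is roughly right, my plan is to replace the random MNIST input in sampleOnnxMNIST's buffer setup with an FP32 buffer read from a file, which is why I am asking whether modifying that sample is the correct approach.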
Environment
TensorRT Version: 6.3
GPU Type:
Nvidia Driver Version: 5.2.0
CUDA Version: 10.2
CUDNN Version: 7
Operating System + Version: Linux
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered