How to convert TensorFlow to TensorRT with two outputs


I followed
Convert our model from TensorFlow onnx to TensorRT.
But at inference time it shows this error:
[TensorRT] ERROR: …/rtSafe/cuda/ (1234) - Cuda Error in executeMemcpy: 1 (invalid argument)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
I suspect the two outputs are causing the memory error.
The shapes are:
shape: (1, 256, 512, 3)
shape: (1, 256, 512)
shape: (1, 256, 512, 4)
Does TensorRT support two outputs?
How do I modify the code for two outputs?
Or is it some other problem?


TensorRT Version:
GPU Type: RTX 2080
Nvidia Driver Version:
CUDA Version: 10.2.152
CUDNN Version:
Operating System + Version: Ubuntu18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.4
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @Denny.hsu

The links below may help you resolve the issue.


When the model executes, bindings must be set for all inputs and outputs.
The original code has only one output, so it sets a single output in the bindings;
but our model has two outputs, so both outputs must be set.
Otherwise it crashes with:
[TensorRT] ERROR: …/rtSafe/cuda/ (1234) - Cuda Error in executeMemcpy: 1 (invalid argument).
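A minimal sketch of what "set bindings for all inputs and outputs" means. The binding names, the `binding_sizes` helper, and the pycuda allocation shown in the comments are assumptions for illustration, not the exact code from the original script:

```python
import numpy as np

# Hypothetical helper: compute the flat element count of every binding so
# that one device buffer can be allocated per input AND per output.
def binding_sizes(binding_shapes):
    return {name: int(np.prod(shape)) for name, shape in binding_shapes.items()}

# One input and two outputs, matching the shapes in this thread:
sizes = binding_sizes({
    "input":    (1, 256, 512, 3),
    "output_a": (1, 256, 512),
    "output_b": (1, 256, 512, 4),
})

# With pycuda, the bindings list passed to the execution context must then
# contain one device pointer per binding, in engine binding order, e.g.:
#
#   import pycuda.driver as cuda
#   bindings = []
#   for name, n in sizes.items():
#       device_mem = cuda.mem_alloc(n * np.float32().nbytes)
#       bindings.append(int(device_mem))
#   context.execute_v2(bindings)
#
# Passing only one output pointer while the engine has two outputs leaves a
# binding slot invalid, which surfaces as the executeMemcpy
# "invalid argument" error above.
```

The key point is that the length of the bindings list must equal the engine's total binding count (inputs plus outputs), not just inputs plus one output.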
