Network with 1 input and 9 outputs: after converting to TRT, 1 output is fine but 8 outputs are empty


I have a network with 1 input and 9 outputs. After converting the ONNX model to TensorRT with trtexec, I get one correct output at inference time, but the other 8 outputs are empty. Any solutions?
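For context, a typical trtexec conversion on TensorRT 7 looks like the command below. This is purely illustrative — the poster did not share the exact command or flags used:

```shell
# Illustrative only -- not the poster's actual command.
# Convert the ONNX model to a serialized TensorRT engine.
trtexec --onnx=3.onnx --saveEngine=3.trt --workspace=2048 --verbose
```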



TensorRT Version: 7.1
GPU Type: jetson xavier agx
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @hurryli36,

Could you please share scripts and the ONNX model that reproduce the issue? We also request that you provide more details so we can assist you better.

Thank you.

I uploaded the ONNX model to Dropbox; here is the link: Dropbox - 3.onnx

and the inference code is as follows:

stream = self.stream
context = self.context
engine = self.engine
host_inputs = self.host_inputs
cuda_inputs = self.cuda_inputs
host_outputs = self.host_outputs
cuda_outputs = self.cuda_outputs
bindings = self.bindings
input_image = images

# Copy the input to the device and launch inference asynchronously
np.copyto(host_inputs[0], input_image.ravel())
cuda.memcpy_htod_async(cuda_inputs[0], host_inputs[0], stream)
context.execute_async(bindings=bindings, stream_handle=stream.handle)

# Copy the result back to the host (note: only the first output binding)
cuda.memcpy_dtoh_async(host_outputs[0], cuda_outputs[0], stream)
output = host_outputs[0]

I get one correct output, but the other 8 outputs are empty, as shown in the image above.
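One thing stands out in the snippet above: `cuda.memcpy_dtoh_async` is called only for binding 0, and the stream is never synchronized, so the remaining eight host buffers never receive any data from the device. Below is a minimal sketch of the pattern that copies back every output. The function name is hypothetical, and the PyCUDA copy function is passed in as an argument purely so the loop can be exercised without a GPU (in real code, pass `pycuda.driver.memcpy_dtoh_async`):

```python
def fetch_all_outputs(context, bindings, stream, host_outputs, cuda_outputs,
                      memcpy_dtoh_async):
    """Run inference, then copy EVERY output binding back to the host."""
    # Launch inference on the stream (same call as in the snippet above).
    context.execute_async(bindings=bindings, stream_handle=stream.handle)
    # Queue a device-to-host copy for each output, not just output 0.
    for host_out, cuda_out in zip(host_outputs, cuda_outputs):
        memcpy_dtoh_async(host_out, cuda_out, stream)
    # Block until the inference and all queued copies have actually finished.
    stream.synchronize()
    return host_outputs
```

With the buffers from the snippet above, this would be called as `fetch_all_outputs(self.context, self.bindings, self.stream, self.host_outputs, self.cuda_outputs, cuda.memcpy_dtoh_async)`; without the loop and the synchronize, only `host_outputs[0]` is ever populated.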

Thanks for your help.

Hi @hurryli36,

Sorry for the delayed response. We request that you provide a runnable inference script and let us know the method you followed for generating the ONNX model.
Please share the required files with us and let us know the steps to reproduce the issue.

Thank you.