I followed https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorflow-onnx-and-tensorrt/ to convert our model from TensorFlow to ONNX to TensorRT.
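For reference, the TensorFlow-to-ONNX step can be done with tf2onnx roughly as below; the file and tensor names (frozen_model.pb, input:0, mask:0, probs:0) are placeholders for our model's real names. Note that both output tensors have to be listed under --outputs:

```
# Placeholder file and tensor names -- substitute the real ones.
python -m tf2onnx.convert \
    --graphdef frozen_model.pb \
    --inputs input:0 \
    --outputs mask:0,probs:0 \
    --output model.onnx \
    --opset 11
```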
But at inference time, it shows this error:
[TensorRT] ERROR: …/rtSafe/cuda/genericReformat.cu (1234) - Cuda Error in executeMemcpy: 1 (invalid argument)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
I guess the two outputs are leading to the memory error.
Input:
shape: (1, 256, 512, 3)
Outputs:
shape: (1, 256, 512)
shape: (1, 256, 512, 4)
Does TensorRT support two outputs?
How do I modify the code for two outputs?
Or is it some other problem?
Environment
TensorRT Version: 6.2.0.3
GPU Type: RTX 2080
Nvidia Driver Version:
CUDA Version: 10.2.152
CUDNN Version: 7.6.6.106
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.4
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
When the model executes, bindings must be set for all inputs and outputs.
The original code has only one output, so it sets a single output binding,
but our model has two outputs, so both must be set.
Otherwise it crashes with:
[TensorRT] ERROR: …/rtSafe/cuda/genericReformat.cu (1234) - Cuda Error in executeMemcpy: 1 (invalid argument)
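Here is a minimal sketch of the fix, assuming the TensorRT 6 Python API with pycuda and an already-deserialized engine; the function and variable names are illustrative, and the input is assumed to be float32. The key point is that the loop allocates a device buffer for every binding, so both output bindings end up in the bindings list:

```python
# A minimal sketch, assuming TensorRT 6 Python API + pycuda and an
# already-deserialized engine; names here are illustrative only.
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

def infer(engine, image):
    """Run one image through an engine with one input and two outputs."""
    stream = cuda.Stream()
    bindings, inputs, outputs = [], [], []
    # Allocate a host/device buffer pair for EVERY binding. If the two
    # output bindings are missing, execution fails with
    # "Cuda Error in executeMemcpy: 1 (invalid argument)".
    for i in range(engine.num_bindings):
        dtype = trt.nptype(engine.get_binding_dtype(i))
        size = trt.volume(engine.get_binding_shape(i))
        host = cuda.pagelocked_empty(size, dtype)
        dev = cuda.mem_alloc(host.nbytes)
        bindings.append(int(dev))
        (inputs if engine.binding_is_input(i) else outputs).append((host, dev))

    with engine.create_execution_context() as context:
        # Copy the (1, 256, 512, 3) input to the device and run.
        np.copyto(inputs[0][0], image.astype(np.float32).ravel())
        cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
        context.execute_async(batch_size=1, bindings=bindings,
                              stream_handle=stream.handle)
        # Copy BOTH outputs back, e.g. (1, 256, 512) and (1, 256, 512, 4).
        for host, dev in outputs:
            cuda.memcpy_dtoh_async(host, dev, stream)
        stream.synchronize()
    # Host buffers are flat; reshape them to the binding shapes as needed.
    return [host for host, _ in outputs]
```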