Description
In Keras my input looks like this:
It's a list of two tensors.
And my outputs:
I'm working with the "simpleOnnx" example and I'm wondering how to set the input tensor and output tensors.
(I based my code on https://developer.nvidia.com/blog/speed-up-inference-tensorrt/)
For one inputTensor I used this code:
// Query input binding 0 and pin the batch dimension explicitly
Dims dims_i{engine->getBindingDimensions(0)};
Dims3 inputDims{batchSize, dims_i.d[1], dims_i.d[2]};
context->setBindingDimensions(0, inputDims);
launchInference(context.get(), stream, inputTensor, outputTensor, bindings, batchSize);
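With an explicit-batch engine, every input binding must have its dimensions set before inference, not just binding 0. A rough sketch of the loop I have in mind, assuming the `engine`/`context`/`batchSize` names from the blog-post sample (binding order and names are assumptions, so check them with `engine->bindingIsInput(i)` or `engine->getBindingIndex(name)`):

```cpp
// Sketch only -- set dimensions on *every* input binding, whatever its index.
for (int i = 0; i < engine->getNbBindings(); ++i)
{
    if (!engine->bindingIsInput(i))
        continue;  // outputs get their shapes from the inputs
    Dims dims{engine->getBindingDimensions(i)};
    dims.d[0] = batchSize;  // replace the -1 wildcard batch dimension
    context->setBindingDimensions(i, dims);
}
// The context is only ready once all input bindings are specified:
assert(context->allInputDimensionsSpecified());
```

The `bindings` array passed to `enqueueV2` then needs one device pointer per binding index, inputs and outputs alike.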
Any suggestions? Or maybe an example for multiple inputs and outputs?
Environment
TensorRT Version : 7.0.0-1
GPU Type : RTX
Nvidia Driver Version :
CUDA Version : 11.0
CUDNN Version :
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) : 3.6
TensorFlow Version (if applicable) : 2.3
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Update:
I've tried changing
Dims3 inputDims{batchSize, dims_i.d[1], dims_i.d[2]}
to
Dims4 inputDims{batchSize, 2, dims_i.d[1], dims_i.d[2]}
But I’m getting:
size outputTensor : 4
size inputTensor: 495
0 0 0 0 ERROR: Parameter check failed at: engine.cpp::resolveSlots::1092, condition: allInputDimensionsSpecified(routine)