I created a TensorRT engine file “tf_mnist.engine” as in the official example (https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html).
I am wondering whether giexec can be run on this engine, i.e., on engines created via the Python TensorFlow→UFF conversion. From experience I know that giexec works for engines converted from Caffe models.
When I tried
giexec --engine=tf_mnist.engine --output=fc2/Relu --batch=1
the following error occurred:
engine: tf_mnist.engine
output: fc2/Relu
batch: 1
name=data, bindingIndex=-1, buffers.size()=2
giexec: giexec.cpp:201: void createMemory(const nvinfer1::ICudaEngine&, std::vector<void*>&, const string&): Assertion `bindingIndex < buffers.size()' failed.
705 abort (core dumped)  ~/TensorRT-3.0.2/bin/giexec --engine=tf_mnist.engine --output=fc2/Relu
The `bindingIndex=-1` line suggests giexec looked up a binding named "data" (the Caffe-style default input name) and did not find it in this engine. How can I set bindingIndex, or tell giexec the correct binding names?
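For reference, here is a minimal sketch (untested, assuming the TensorRT 3.x C++ API and the engine file name from above) that deserializes the engine and prints its binding names, so the actual input/output tensor names of the UFF-converted network can be seen:

```cpp
// Sketch: list the binding names of a serialized TensorRT engine.
// Assumes TensorRT 3.x C++ API; the file name "tf_mnist.engine" is from the post.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include "NvInfer.h"

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
};

int main() {
    // Read the serialized engine into memory.
    std::ifstream file("tf_mnist.engine", std::ios::binary);
    std::stringstream buffer;
    buffer << file.rdbuf();
    std::string blob = buffer.str();

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);

    // giexec's "bindingIndex=-1" means the name it looked up ("data")
    // is not among the bindings printed here.
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        std::cout << i << ": " << engine->getBindingName(i)
                  << (engine->bindingIsInput(i) ? " (input)" : " (output)")
                  << std::endl;
    }

    engine->destroy();
    runtime->destroy();
    return 0;
}
```

If the input binding turns out to have a TensorFlow-style name (e.g. the placeholder name used during UFF conversion), that name, not "data", is what the tool would need to look up.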