How to convert a TensorFlow model with multiple outputs to UFF

I have a TensorFlow model with multiple outputs. How do I convert it to UFF?
How should I write OUTPUT_NAMES in uff_model = uff.from_tensorflow(frozen_graph, OUTPUT_NAMES), and how should I call parser.register_output()?

Hi,

Please check our sample which is located at:
/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/tf_to_trt.py

Thanks.

Hi AastaLL:

The example only shows how to convert a TensorFlow model to UFF and parse it with a single output. I want an example with multiple outputs: I want to register multiple outputs using parser.register_output(), but I don't know how to do it.
Thanks

Hi,

For example,

uff_model = uff.from_tensorflow(tf_model, ['out1','out2','out3'])
...
parser.register_output("out1")
parser.register_output("out2")
parser.register_output("out3")
...

Thanks.
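The key point in the snippet above is that the same output names must appear in both calls. Here is a runnable sketch of that pattern; the UFF parser is replaced by a stub class (my own stand-in, since neither uff nor tensorrt is assumed to be installed), and the real calls it mimics are uff.from_tensorflow(tf_model, OUTPUT_NAMES) and parser.register_output(name):

```python
# The names passed to uff.from_tensorflow and to parser.register_output must
# match the graph's output node names exactly, or the parser will not find
# the tensors. The parser is stubbed here so the pattern runs anywhere.

OUTPUT_NAMES = ["out1", "out2", "out3"]

class StubUffParser:
    """Stand-in for TensorRT's UFF parser; just records registered names."""
    def __init__(self):
        self.registered = []

    def register_output(self, name):
        self.registered.append(name)

# Real code: uff_model = uff.from_tensorflow(tf_model, OUTPUT_NAMES)
parser = StubUffParser()
for name in OUTPUT_NAMES:
    parser.register_output(name)

print(parser.registered)  # -> ['out1', 'out2', 'out3']
```

Keeping the names in one list and looping over it avoids the typo-in-one-place mistake that makes the parser silently drop an output.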

Hi,

Thanks for your help. I can now convert the TF model with multiple outputs to UFF or a plan file.

However, I still have another problem: extracting the outputs during inference in TensorRT when there are multiple outputs.

In the example code, the output is obtained like this:

int inputBindingIndex, outputBindingIndex;
inputBindingIndex = engine->getBindingIndex(testConfig.inputNodeName.c_str());
outputBindingIndex = engine->getBindingIndex(testConfig.outputNodeName.c_str());

// allocate memory on host / device for input / output
// (allocation of the host input buffer `input` is omitted in this excerpt)
float *output;
float *inputDevice;
float *outputDevice;
size_t inputSize = testConfig.InputHeight() * testConfig.InputWidth() * 3 * sizeof(float);

cudaHostAlloc(&output, testConfig.NumOutputCategories() * sizeof(float), cudaHostAllocMapped);

if (testConfig.UseMappedMemory())
{
  cudaHostGetDevicePointer(&inputDevice, input, 0);
  cudaHostGetDevicePointer(&outputDevice, output, 0);
}
else
{
  cudaMalloc(&inputDevice, inputSize);
  cudaMalloc(&outputDevice, testConfig.NumOutputCategories() * sizeof(float));
}

auto tl1 = chrono::steady_clock::now();
double difftl = MS_PER_SEC * (tl1 - tl0).count();

// one binding slot per engine tensor: here 1 input + 1 output
float *bindings[2];
bindings[inputBindingIndex] = inputDevice;
bindings[outputBindingIndex] = outputDevice;

How can I bind the different outputs when there are multiple outputs? Should we use a separate binding index for each one, like:

output2BindingIndex = engine->getBindingIndex(testConfig.output2NodeName.c_str());
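That is the right idea: one getBindingIndex lookup per tensor, and a bindings array sized to the total number of inputs plus outputs. A minimal sketch of the indexing logic, with a stub engine class (my own stand-in for the real ICudaEngine, whose getBindingIndex the C++ snippet above calls) and placeholder strings where the real code would put cudaMalloc'd device pointers:

```python
# The engine assigns each named tensor a binding index; the bindings array
# must be filled by those indices, not by an assumed [input, output] order.
# StubEngine stands in for the TensorRT engine so this runs anywhere.

class StubEngine:
    """Maps binding (node) names to binding indices, like getBindingIndex."""
    def __init__(self, binding_names):
        self._names = binding_names

    def get_binding_index(self, name):
        return self._names.index(name)

# Hypothetical node names; the engine decides the index order, not us.
engine = StubEngine(["input", "out1", "out2"])

# One slot per binding: 1 input + 2 outputs => 3 entries, not 2.
bindings = [None] * 3
bindings[engine.get_binding_index("input")] = "input_device_ptr"
bindings[engine.get_binding_index("out1")] = "out1_device_ptr"
bindings[engine.get_binding_index("out2")] = "out2_device_ptr"

print(bindings)  # -> ['input_device_ptr', 'out1_device_ptr', 'out2_device_ptr']
```

In the C++ code this corresponds to growing `float *bindings[2]` to `float *bindings[3]` and adding `bindings[output2BindingIndex] = output2Device;` with its own device allocation sized to that output.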

Hi,

We have registered multiple output tensors in our ChatBot sample.
Please check it for more information:
https://github.com/AastaNV/ChatBot/blob/master/src/tensorNet.cpp#L124

Thanks.

Great work!

I’ll check it! Thanks so much!

Are you also working on converting the TensorFlow Object Detection API to TensorRT? I'm planning to work on that in the next stage. I hope I can cooperate with you if you're interested. My GitHub name is foreverYoungGitHub.

Thanks again for your help!

Best,

Hi,

Here is another tutorial for TensorFlow to TensorRT on image classification.

Thanks.

How to convert a TensorFlow model with multiple outputs to UFF using TensorRT 4 or TensorRT 5?

parser.register_output("out1")
parser.register_output("out2")

seems not to work.

I have TRT v7.x, so I couldn't locate the following file in this version. How do I find the corresponding file in TRT v7.x?

/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/tf_to_trt.py

Hi samjith888,

Please open a new topic for your issue. Thanks.

Please look into this thread.