How to distinguish multiple outputs of a custom operator in a UFF file?

Since TensorRT doesn't support the split operator for TensorFlow models, I wrote a custom split operator by extending the IPluginV2 class. The problem is that IPluginV2 doesn't provide any interface to register names for the plugin's multiple outputs. This makes the conversion from a TensorFlow .pb model to a TensorRT UFF model fail, because the converter can't recognize multi-output nodes, which TensorFlow distinguishes by a suffix appended to the operator name, like "operatorname:2".
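For reference, TensorFlow's multi-output naming convention can be captured by a tiny helper (illustration only; `split_tensor_name` is my name, not part of any TensorFlow or TensorRT API):

```python
def split_tensor_name(name):
    """Split a TensorFlow tensor name like "op:2" into (node_name, output_index).

    A bare node name with no ":N" suffix refers to output 0.
    """
    if ":" in name:
        node, idx = name.rsplit(":", 1)
        return node, int(idx)
    return name, 0
```

So "split_op:2" means the third output of node "split_op", and "split_op" alone means its first output — which is why only the suffixed references break when the node is replaced.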

Hello,

Per engineering: if the plugin has 2 outputs, then the implementation of getNbOutputs should return 2. Can you share how you are converting? Did you preprocess with graphsurgeon?

First, I wrote my plugin code as follows:

int SplitPlugin::getNbOutputs() const
{
    return mSplitNum;
}

Dims SplitPlugin::getOutputDimensions(int index, const Dims* inputs, int nbInputDims)
{
    // Validate input arguments
    assert(nbInputDims == 1);
    Dims outputs = inputs[0];
    outputs.d[mAxis] /= mSplitNum;
    return outputs;
}
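The dimension logic above just divides the split axis evenly among the outputs; as a quick sanity check, the same computation in Python (a sketch with made-up dimensions; `split_output_dims` is my name):

```python
def split_output_dims(input_dims, axis, num_splits):
    """Mimic SplitPlugin::getOutputDimensions: every output gets the input
    shape with the split axis divided by the number of splits."""
    assert input_dims[axis] % num_splits == 0, "split axis must divide evenly"
    out = list(input_dims)
    out[axis] //= num_splits
    return out
```

For example, splitting a [6, 32, 32] input into 3 along axis 0 gives each output shape [2, 32, 32].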

Second, I loaded my plugin and converted my TensorFlow model to a UFF model according to https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#uff_custom_plugin. But I failed to run the new model, with the error "can't find node oldSplitOperatorName:1" reported by the UFF parser. I then opened the newModel.pbtxt file and found that the inputs of the node downstream of oldSplitOperator still read "oldSplitOperatorName:1".

Third, I modified the Python script "uff\converters\tensorflow\converter.py" to solve the problem. As a result, the inputs of the node downstream of oldSplitOperator were successfully changed to "newSplitOperatorName:1" in the newModel.pbtxt file.
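The rename applied in converter.py amounts to rewriting every ":N"-suffixed input that points at the old node. A standalone sketch of that rewrite (plain dicts stand in for GraphDef nodes, and `rename_split_inputs` is my name, not a converter.py function):

```python
def rename_split_inputs(nodes, old_name, new_name):
    """Rewrite inputs like "old:1" to "new:1" across a list of node dicts.

    Each node is a dict with "name" and "input" keys, standing in for a
    tensorflow NodeDef. Bare inputs with no ":N" suffix are handled too.
    """
    for node in nodes:
        new_inputs = []
        for inp in node["input"]:
            name, _, idx = inp.partition(":")
            if name == old_name:
                inp = new_name + (":" + idx if idx else "")
            new_inputs.append(inp)
        node["input"] = new_inputs
    return nodes
```

Note this only fixes the references in the graph text; the UFF parser still has to resolve the ":1" output of the plugin node at parse time, which is where my conversion keeps failing.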

But I still fail to run the new model, with the error "can't find node newSplitOperatorName:1" reported by the UFF parser.

My graphsurgeon preprocessing code:

def model_to_uff(model_path):
    # Transform graph using graphsurgeon to map unsupported TensorFlow
    # operations to appropriate TensorRT custom layer plugins
    dynamic_graph = gs.DynamicGraph(model_path)
    dynamic_graph.collapse_namespaces(prepare_namespace_plugin_map(model_path))
    # Save resulting graph to UFF file
    output_uff_path = model_path_to_uff_path(model_path)
    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        [OUTPUT_NAME],
        output_filename=output_uff_path,
        text=True,
        write_preprocessed=True,
        debug_mode=True
    )
    return output_uff_path
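For context, prepare_namespace_plugin_map (referenced above but not shown) builds a mapping from the namespace of the unsupported op to a replacement plugin node. A self-contained sketch of that mapping, with a plain dict standing in for a gs.create_plugin_node call (the namespace, node name, and attribute names here are illustrative assumptions, not my actual model's values):

```python
def prepare_namespace_plugin_map_sketch(split_namespace="model/split",
                                        split_num=3, axis=0):
    # The plugin node carries the custom op name plus the attributes the
    # SplitPlugin needs at parse time (split count and split axis).
    plugin_node = {
        "name": "newSplitOperatorName",
        "op": "SplitPlugin",
        "attrs": {"mSplitNum": split_num, "mAxis": axis},
    }
    # collapse_namespaces() replaces every node under split_namespace
    # with this single plugin node.
    return {split_namespace: plugin_node}
```

The collapse replaces the whole namespace with one single-named node, which is exactly why downstream references to its second output ("...:1") end up dangling.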

Hello,

To help us debug, can you share a minimal repro of the graphsurgeon step that includes the model?