Is this code behaving like the graphsurgeon API?

Hi

In this code: tf_trt_models/graph_utils.py at master · NVIDIA-AI-IOT/tf_trt_models · GitHub

Relu6 is removed/replaced in the classification models. I guess this is done to make the graph compatible with TensorRT and to avoid execution falling back to TensorFlow.

Is it fair to say that this code is doing a kind of “graph surgery”? The repository is a bit old, so maybe the graphsurgeon API did not exist yet. I am not sure.

Hi,

Based on the code, it just tries to make sure a const6 node exists alongside the Relu6 layer.

I guess there were some rules in the UFF parser that required this.
It should not be necessary anymore.

def replace_relu6(frozen_graph):
    return convert_relu6(frozen_graph)


def convert_relu6(graph_def, const6_name='const6'):
    # add a constant-6 node if the graph does not already contain one
    has_const6 = False
    for node in graph_def.node:
        if node.name == const6_name:
            has_const6 = True
    if not has_const6:
        # make_const6 is a helper defined in the same graph_utils.py
        const6_graph_def = make_const6(const6_name=const6_name)
        graph_def.node.extend(const6_graph_def.node)

    # swap every Relu6 node for the subgraph built by make_relu6
    # (also defined in graph_utils.py), wired to the same input/output names
    for node in graph_def.node:
        if node.op == 'Relu6':
            input_name = node.input[0]
            output_name = node.name
            relu6_graph_def = make_relu6(output_name, input_name, const6_name=const6_name)
            graph_def.node.remove(node)
            graph_def.node.extend(relu6_graph_def.node)

    return graph_def
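
For context, my understanding (hedged, I have not traced the helpers line by line) is that make_relu6 rebuilds Relu6 out of more basic ops that the UFF parser could handle at the time, e.g. min(Relu(x), 6) or Relu(x) - Relu(x - 6), with the shared const6 node supplying the 6. A quick numeric check of those equivalences (TF 2.x eager mode, just an illustration, not code from the repo):

import numpy as np
import tensorflow as tf

# Relu6(x) clamps activations to [0, 6]; both rewrites below reproduce it
# using only Relu, Minimum/Sub and a constant 6.
x = tf.constant([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0, 9.0])

relu6  = tf.nn.relu6(x)
as_min = tf.minimum(tf.nn.relu(x), 6.0)
as_sub = tf.nn.relu(x) - tf.nn.relu(x - 6.0)

assert np.allclose(relu6.numpy(), as_min.numpy())
assert np.allclose(relu6.numpy(), as_sub.numpy())
print(relu6.numpy())  # [0. 0. 1. 3. 5. 6. 6.]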

Thanks.

OK, thank you for the answer.
I thought it was done to make TensorFlow-TRT more performant in the benchmarks, i.e. to avoid the fallback from TensorRT to TensorFlow, which would be slower.

But nowadays it is better to use ONNX, right?
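
I mean something like the sketch below (the model, shapes and file names are placeholders I made up, not anything from this thread):

import tensorflow as tf
import tf2onnx

# Export a Keras classification model (MobileNetV2 as a stand-in) to ONNX.
model = tf.keras.applications.MobileNetV2(weights=None)
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="model.onnx")

# model.onnx can then be handed to TensorRT's ONNX parser, e.g.
#   trtexec --onnx=model.onnx --saveEngine=model.engine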