Relu6 is removed/replaced in the classification models. I guess this is to make the graph compatible with TensorRT and avoid execution falling back to TensorFlow (see the sketch below).
Is it fair to say that the code is doing a kind of "graph surgery"? The repo is a bit old, so maybe the graphsurgeon API did not exist yet at the time. I am not sure.
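For reference, a minimal sketch of one common workaround (an assumption on my part, not necessarily what this repo does): Relu6 can be rewritten in terms of ops that older TensorRT versions already support, since Relu6(x) == min(max(x, 0), 6).

```python
import tensorflow as tf

def relu6_substitute(x):
    # Hypothetical replacement for Relu6: min(max(x, 0), 6).
    # Minimum/Maximum are supported by TensorRT, so rewriting
    # the graph this way avoids falling back to TensorFlow
    # execution for the unsupported fused Relu6 op.
    return tf.minimum(tf.maximum(x, 0.0), 6.0)
```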
Ok, thank you for the answer.
I thought it was to make TensorFlow-TRT more performant in the benchmarks, i.e. to avoid the fallback from TensorRT to TensorFlow, which would be slower.