I have trained an EfficientDet TF2 model for 300 epochs and want to convert it to ONNX.
But when I convert the .tlt model to ONNX using export.py, the resulting ONNX model contains TensorRT layers.
TensorRT operators are not supported on my target device.
Could you please help me convert the .tlt model to an ONNX model without the TensorRT layers?
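For reference, this is how I checked that the exported model contains TensorRT ops (a rough sketch; the file name is just an example, and I am assuming the plugin layers show up as non-standard ONNX ops):

```python
# Minimal sketch: list any ops that are not standard ONNX,
# which is how TensorRT plugin layers typically appear in the exported graph.
import onnx

model = onnx.load("model.onnx")  # assumed output file from export.py
for node in model.graph.node:
    # TRT plugin ops usually end in "_TRT" or sit in a custom domain
    if node.op_type.endswith("_TRT") or node.domain not in ("", "ai.onnx"):
        print(node.name, node.op_type, node.domain)
```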
For the plugins in the ONNX model, you can use graphsurgeon to remove them. Then you can refer to the corresponding plugin implementations in TensorRT/plugin at release/8.6 · NVIDIA/TensorRT · GitHub to implement them yourself.
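For example, something along these lines with onnx-graphsurgeon (a rough sketch only; the plugin op names and file names below are assumptions and depend on how your model was exported):

```python
# Sketch: strip TensorRT plugin nodes from an exported ONNX model with onnx-graphsurgeon.
import onnx
import onnx_graphsurgeon as gs

# Assumption: these are the TRT NMS plugin ops commonly found in detection exports;
# adjust the set to match what you see in your own graph.
TRT_PLUGIN_OPS = {"EfficientNMS_TRT", "BatchedNMSDynamic_TRT"}

model = onnx.load("model.onnx")           # assumed input file name
graph = gs.import_onnx(model)

plugin_nodes = [n for n in graph.nodes if n.op in TRT_PLUGIN_OPS]

new_outputs = []
for node in plugin_nodes:
    # Expose the plugin's input tensors (e.g. raw boxes/scores) as graph outputs
    # so the rest of the network stays reachable once the plugin is dropped.
    new_outputs.extend(t for t in node.inputs if not isinstance(t, gs.Constant))
    node.outputs.clear()                  # disconnect so cleanup() removes the node

if new_outputs:
    graph.outputs = new_outputs

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_no_trt_plugins.onnx")
```

After this, the removed post-processing (e.g. NMS) has to be reimplemented on your target device, which is where the plugin sources in the TensorRT repository can serve as a reference.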
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks