I currently use TensorFlow to serve deep models on a Jetson, and I optimize the saved models with TF-TRT.
I have some prototypes in PyTorch that use operations not available in TF (group convolutions, etc.), and I would like to serve them through the same TF pipeline.
Is it possible to replace the engine inside a TRTEngineOp with another serialized TRT engine?
Following the TF-TRT documentation I am able to dump the serialized engine from my tensorflow model and deserialize it in TRT.
However, if I replace the `serialized_segment` attribute with another serialized TRT engine, write the graph back to disk, and load it in a new TF session, it doesn't work: inference still seems to use the previous model, which is fairly confusing.
Notably, replacing the segment with a random string crashes TensorFlow, so the attribute is clearly being read; replacing it with another valid serialized engine runs fine, but the outputs match the original model rather than the replacement.
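To be concrete, this is a sketch of the swap I'm attempting (file names are illustrative, and `swap_trt_engines` is just a helper I wrote for this experiment):

```python
# Sketch: replace the serialized TensorRT engine embedded in every
# TRTEngineOp node of a frozen GraphDef. Paths below are illustrative.
import tensorflow as tf

def swap_trt_engines(graph_def, new_engine):
    """Overwrite the serialized_segment attr of each TRTEngineOp node.

    Returns the number of nodes modified.
    """
    count = 0
    for node in graph_def.node:
        if node.op == "TRTEngineOp":
            # TF-TRT stores the serialized engine bytes in this attr.
            node.attr["serialized_segment"].s = new_engine
            count += 1
    return count

# Usage (paths are illustrative):
# graph_def = tf.compat.v1.GraphDef()
# with open("model_trt.pb", "rb") as f:
#     graph_def.ParseFromString(f.read())
# with open("other_engine.plan", "rb") as f:
#     swap_trt_engines(graph_def, f.read())
# with open("model_swapped.pb", "wb") as f:
#     f.write(graph_def.SerializeToString())
```

The modified `.pb` loads without errors, which is why the unchanged outputs surprise me.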