How to convert a TLT model into a TensorRT model with the .trt postfix?

Hi everyone.
I use a GPU and TLT version 2.
I have a trained .tlt model. I want to convert it to TensorRT, but not with tlt-export or tlt-converter, because I need a model with the .trt postfix, not the .etlt or .engine postfix.

What can I do to this end? Which steps should I take?

Any help will be appreciated.

You can still use tlt-export to generate an .etlt model, then use tlt-converter to generate a TRT engine.
The .trt postfix is the same as the .engine postfix. They are both TRT engines.
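
In other words, the extension is only a naming convention; the bytes inside the file are the same serialized engine. A minimal sketch (the file names here are illustrative), showing that renaming is all it takes to get the .trt postfix:

import os

# .engine and .trt hold the same serialized TensorRT engine,
# so renaming the file is enough to change the postfix.
os.rename("model.engine", "model.trt")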

Thanks @Morganh, but I have Python code that works with a .trt model when loading it. When I feed it a .engine model, it gives me an error like:

[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

though it works with model.trt

More detail: my model is an SSD created by tlt-converter, and my SSD .engine can't be loaded when I do this:

def load_engine(trt_runtime, engine_path):
    # Read the serialized engine from disk and deserialize it.
    with open(engine_path, "rb") as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)
    return engine
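
For context, here is a typical way to construct the trt_runtime passed into a function like this (a sketch; the logger severity and file name are illustrative):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt_runtime = trt.Runtime(TRT_LOGGER)

engine = load_engine(trt_runtime, "ssd.engine")  # a .trt file loads the same way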

The above error suggests you did not generate the TRT engine on the device where you want to run inference.
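
One quick way to compare the two environments is to check the TensorRT version and GPU on each machine (a sketch; it assumes pycuda is installed, which is usually the case alongside TensorRT):

import tensorrt as trt
import pycuda.driver as cuda

cuda.init()
# An engine must be deserialized with the same TensorRT version it was
# serialized with, on a GPU of the same compute architecture.
print("TensorRT version:", trt.__version__)
print("GPU:", cuda.Device(0).name())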


Thanks @Morganh. I found that I should create my TRT model with tlt-converter, whose -e parameter can take a file name with the .trt postfix instead of .engine. Then I can use that .trt model in my Python scripts.

No, I do not think so. The .engine file should work too.

Thanks @Morganh