Following https://github.com/NVIDIA-AI-IOT/torch2trt, we succeeded in converting the slim model's .pth file with torch2trt.
The test output is shown below.
The conversion itself seems to be fine, but the problem appears when I plug the converted file into the full program I want to run. (The slim model finds landmarks on a face: a face is first detected via object detection, and the slim model then locates the landmarks on it.) When this converted model is loaded and executed there, it takes a very long time and is eventually killed.
I want to shorten the execution time by importing the already-converted file. What should I do?
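For reference, this is roughly the convert-once / load-later pattern from the torch2trt README that I am trying to use. The model name, checkpoint filename, and input shape here are placeholders, not my actual values:

```python
import torch

TRT_PATH = 'slim_trt.pth'  # placeholder name for the saved converted checkpoint

def convert_and_save(model, path=TRT_PATH):
    """One-time conversion with torch2trt; save the result so it is
    not reconverted every run (reconversion is slow and memory-heavy)."""
    from torch2trt import torch2trt  # requires torch2trt + TensorRT installed
    model = model.eval().cuda()
    x = torch.ones((1, 3, 160, 160)).cuda()  # assumed slim-model input shape
    model_trt = torch2trt(model, [x])
    torch.save(model_trt.state_dict(), path)
    return model_trt

def load_slim_trt(path=TRT_PATH):
    """Load the already-converted engine via TRTModule instead of
    converting again inside the main program."""
    from torch2trt import TRTModule
    model_trt = TRTModule()
    model_trt.load_state_dict(torch.load(path))
    return model_trt

if __name__ == '__main__':
    if torch.cuda.is_available():
        model = load_slim_trt()
        x = torch.randn(1, 3, 160, 160).cuda()  # assumed input shape
        print(model(x).shape)
    else:
        # TensorRT engines need a GPU; nothing to run on CPU-only machines.
        print('CUDA not available')
```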