The model gets killed when I try to run a torch2trt-converted file

Hi,
We succeeded in converting the .pth file of the slim model with torch2trt by following https://github.com/NVIDIA-AI-IOT/torch2trt.
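For reference, a minimal sketch of that conversion step as the torch2trt README describes it. This assumes a CUDA device with TensorRT installed; `slim_model` and the 160x160 input shape are placeholders, since the real model and shape are not shown in the post. `fp16_mode=True` is worth trying here because it roughly halves the engine's weight storage, which matters on a 2 GB board:

```python
# Hedged sketch: convert the "slim" landmark model with torch2trt and
# save the engine, so it can be reloaded later without re-converting.
import torch

try:
    from torch2trt import torch2trt  # needs TensorRT (e.g. JetPack on the Nano)
except ImportError:
    torch2trt = None  # conversion below is skipped off-device

def convert_and_save(model, example_input, path="slim_trt.pth"):
    """Convert a PyTorch model to a TensorRT engine and save its state_dict.

    fp16_mode=True reduces the memory footprint of the engine.
    """
    model = model.eval().cuda()
    model_trt = torch2trt(model, [example_input], fp16_mode=True)
    torch.save(model_trt.state_dict(), path)
    return model_trt

if torch2trt is not None and torch.cuda.is_available():
    slim_model = ...  # the face-landmark "slim" model (placeholder)
    x = torch.ones((1, 3, 160, 160)).cuda()  # assumed input shape
    convert_and_save(slim_model, x)
```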

The conversion itself appears to succeed when tested.

There seems to be no problem with the conversion itself, but the problem occurs when I load the converted file into the full pipeline I want to run. (The slim model finds landmarks on the face: after a face is found through object detection, it locates the landmarks.) When this model is loaded and executed, it takes a very long time and is eventually killed.

I want to shorten the execution time by importing the converted file. What should I do?
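One pattern from the torch2trt README that usually helps with startup time is to save the converted engine once and reload it with `TRTModule` at runtime, instead of re-running the conversion (and keeping the original PyTorch weights resident) every time the pipeline starts. A hedged sketch; the file name and input shape are placeholders:

```python
# Hedged sketch: reload a previously saved torch2trt engine directly,
# skipping the slow torch2trt() conversion step at startup.
import torch

try:
    from torch2trt import TRTModule
except ImportError:
    TRTModule = None  # requires TensorRT (e.g. JetPack on the Nano)

def load_trt(path="slim_trt.pth"):
    """Instantiate an empty TRTModule and load the serialized engine."""
    model_trt = TRTModule()
    model_trt.load_state_dict(torch.load(path))
    return model_trt

if TRTModule is not None and torch.cuda.is_available():
    model_trt = load_trt()
    x = torch.ones((1, 3, 160, 160)).cuda()  # assumed input shape
    landmarks = model_trt(x)  # runs the TensorRT engine directly
```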

Checking with jtop, running only the converted slim model already takes 1.9 GB of RAM. Since the Jetson Nano I am currently using has 2 GB, it seems that adding object detection and face detection on top does not fit, and the process is killed. Is there a way to run both models in this case?
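If the two models together genuinely exceed 2 GB, one common workaround on the Nano 2GB is to add a disk-backed swap file so the process is not OOM-killed (at the cost of speed whenever swapping occurs). A sketch of the usual commands; the 4 GB size and the path are arbitrary choices, not from the post:

```shell
# Create and enable a 4 GB swap file (size/path are example values).
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap
```

Note that swap on an SD card is slow, so reducing the models' footprint (e.g. FP16 conversion) is preferable where possible; swap mainly prevents the hard kill.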

Hi,

We recommend that you post your concern on Issues · NVIDIA-AI-IOT/torch2trt · GitHub to get better help.

Thank you.