How do I save my converted TensorRT Plan

Following the tutorials, I think I understand how to convert caffemodels to a TensorRT plan, but I'm not sure how to save it as an external file I can find on my drive for later use. All the tutorials seem to use the plans immediately.

Thanks

Hi jpeiwang99, the jetson-inference tutorial saves the TensorRT plans automatically, and caches them for the next run so it doesn’t need to perform the optimization phase each time.

You can see here in the code where it saves the plan:
https://github.com/dusty-nv/jetson-inference/blob/d2bb14ba4b60bbd8fb26bc952857daa20624fa97/tensorNet.cpp#L244
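For anyone who wants to do the same thing in their own code, the pattern is simple: serialize the engine to a byte buffer, write that buffer to disk, and on the next run read the file back instead of rebuilding. Below is a minimal sketch of just the file-caching half; the helper names `saveCache`/`loadCache` are made up for illustration, and the buffer stands in for what `ICudaEngine::serialize()` would give you (and what you would hand to `IRuntime::deserializeCudaEngine()` on reload).

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: write a serialized engine to disk.
// In real code the bytes would come from ICudaEngine::serialize(),
// which returns an nvinfer1::IHostMemory* (use its data() and size()).
bool saveCache(const std::string& path, const std::vector<char>& data)
{
    std::ofstream out(path, std::ios::binary);
    if (!out)
        return false;
    out.write(data.data(), static_cast<std::streamsize>(data.size()));
    return out.good();
}

// Hypothetical helper: read a cached plan back into memory.
// Returns false if the cache doesn't exist yet, in which case the
// caller falls back to building (and then saving) the engine.
// The loaded bytes would be passed to IRuntime::deserializeCudaEngine().
bool loadCache(const std::string& path, std::vector<char>& data)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in)
        return false;
    const std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);
    data.resize(static_cast<std::size_t>(size));
    return static_cast<bool>(in.read(data.data(), size));
}
```

On startup you try `loadCache()` first, and only run the (slow) optimization/build step when it fails, followed by `saveCache()` so the next run is fast. That's essentially what the linked tensorNet.cpp code does with its `.tensorcache` files.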

Note that the serialized plans are not portable to different GPUs: if you generated the plan on a TX2, it can only be run on other TX2s. That's because TensorRT performs architecture-specific profiling to get the best performance out of each GPU.

Thanks for the reply.
Is there no plan file then? Or is that just the tensorcache file at the cache location?

Never mind, I found another forum post that answered my question. Thank you.

Just for the sake of others reading the thread: yes, they are the same thing. In that repo the serialized plan is saved with a .tensorcache extension.