Can we save a GIE model in TensorRT?

As the title says, can we save a GIE model?

Instead of converting the Caffe model to a GIE model every time.

Thanks!!

Hi r02525028, yes, you can serialize the TensorRT CUDA engine after it has been imported on Jetson TX2, thus saving time when loading it in the future. See the ICudaEngine::serialize() function. You can see a code sample of it being used here:

https://github.com/dusty-nv/jetson-inference/blob/e12e6e64365fed83e255800382e593bf7e1b1b1a/tensorNet.cpp
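For reference, a minimal sketch of writing the serialized plan out to a file looks something like this. It assumes a TensorRT version where ICudaEngine::serialize() returns an nvinfer1::IHostMemory* (TensorRT 2.x and later); saveEngine is just an illustrative helper name, not part of the TensorRT API:

```cpp
// Minimal sketch (not copied from the linked sample): write the serialized
// engine plan to disk so the Caffe-to-GIE conversion only runs once.
#include <fstream>
#include "NvInfer.h"

// saveEngine is a hypothetical helper name, not part of TensorRT.
bool saveEngine(nvinfer1::ICudaEngine* engine, const char* path)
{
    // serialize() produces a host-memory blob containing the engine plan.
    nvinfer1::IHostMemory* plan = engine->serialize();
    if (!plan)
        return false;

    std::ofstream file(path, std::ios::binary);
    bool ok = static_cast<bool>(file);

    if (ok)
        file.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy();  // release the blob once it has been written out
    return ok;
}
```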

Hi dusty,

However, can we save the GIE model as a “portable file”?

Like in Caffe, where we can save it as a “.caffemodel”,

or in TensorFlow, where we can save it as a “.pb”.

It is portable between systems with the same type of GPU; for example, you can copy a GIE model between Jetson TX2’s. Since TensorRT performs GPU-specific profiling and tuning optimizations, you will want to re-run the engine build for different GPUs (for example, if moving to a Jetson TX1, you would want to re-run it there).
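And for completeness, a minimal sketch of loading the saved plan back on a machine with the same GPU and TensorRT version. loadEngine and gLogger are hypothetical names; the three-argument deserializeCudaEngine() signature shown matches TensorRT 2.x-7.x (TensorRT 8 drops the plugin-factory argument):

```cpp
// Minimal sketch: read the plan file and rebuild the CUDA engine from it,
// skipping the Caffe parsing and optimization step entirely.
#include <cstdio>
#include <fstream>
#include <vector>
#include "NvInfer.h"

// TensorRT requires a logger; this one just prints warnings and errors.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            printf("[TRT] %s\n", msg);
    }
} gLogger;

nvinfer1::ICudaEngine* loadEngine(const char* path)
{
    // Read the whole plan file into memory.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        return nullptr;

    const std::size_t size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> plan(size);
    if (!file.read(plan.data(), size))
        return nullptr;

    // Rebuild the engine from the plan (runtime cleanup omitted for brevity).
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(plan.data(), size, nullptr);
}
```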