As the title says, can we save the GIE model, instead of converting the Caffe model to a GIE model every time?
Thanks!!
Hi r02525028, yes you can serialize the TensorRT CUDA engine after it has been imported on Jetson TX2, thus saving time when loading it in the future. See the ICudaEngine::serialize() function. You can see a code sample of it being used here:
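For reference, here is a minimal sketch of saving the engine to disk with serialize(); exact signatures vary a bit by TensorRT version, and the engine pointer and file path are placeholders:

```cpp
#include <fstream>
#include "NvInfer.h"

// Assumes an nvinfer1::ICudaEngine* has already been built from the Caffe model.
void saveEngine(nvinfer1::ICudaEngine* engine, const char* path)
{
    // serialize() returns the engine as a binary blob in host memory
    nvinfer1::IHostMemory* blob = engine->serialize();

    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());

    blob->destroy();  // release the serialized blob (older TensorRT API)
}
```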
Hi dusty,
However, can we save the GIE model as a “portable file”?
Like in Caffe we can save it as “.caffemodel”,
and in TensorFlow we can save it as “.pb”.
It is portable between systems with the same type of GPU; for example, you can copy a GIE model between Jetson TX2’s. Since TensorRT performs GPU-specific profiling and tuning optimizations, you will want to re-run the optimization for different GPUs (for example, if moving to Jetson TX1 you would want to re-run it there).
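On the target board with the same GPU, the saved file can then be loaded without re-importing the Caffe model. A rough sketch, assuming an existing nvinfer1::ILogger implementation (the path is a placeholder, and the deserialize signature shown is the older one that takes a plugin factory argument):

```cpp
#include <fstream>
#include <vector>
#include "NvInfer.h"

nvinfer1::ICudaEngine* loadEngine(const char* path, nvinfer1::ILogger& logger)
{
    // read the serialized engine back into host memory
    std::ifstream in(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    // rebuild the CUDA engine directly, skipping the Caffe import and optimization
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```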