We have a serialized TensorRT model that was exported using the following piece of code:
auto engine = builder->buildCudaEngine(*network);
(*modelStream) = engine->serialize();
// Save to file ...
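For completeness, the plan is then written to disk roughly like this; a minimal sketch, where the file name "model.plan" is arbitrary and modelStream is assumed to point at the nvinfer1::IHostMemory returned by serialize():

#include <fstream>

// Write the serialized engine (plan) to a file so it can be reloaded later.
std::ofstream planFile("model.plan", std::ios::binary);
planFile.write(static_cast<const char*>((*modelStream)->data()), (*modelStream)->size());
planFile.close();

// The engine and builder can be released once the plan has been saved.
engine->destroy();
builder->destroy();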
It’s possible to reload this model in TensorRT using the following code:
OurCustomPluginClass pluginFactory;
ICudaEngine* engine = runtime->deserializeCudaEngine(modelStream->data(), modelStream->size(), &pluginFactory);
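At load time the flow looks roughly like the sketch below; the gLogger instance and the "model.plan" path are assumptions borrowed from the TensorRT samples, not from our actual code:

#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

using namespace nvinfer1;

// Read the serialized plan back from disk.
std::ifstream planFile("model.plan", std::ios::binary);
std::vector<char> plan((std::istreambuf_iterator<char>(planFile)),
                       std::istreambuf_iterator<char>());

// Deserialize the engine, supplying our plugin factory for the custom layers.
OurCustomPluginClass pluginFactory;
IRuntime* runtime = createInferRuntime(gLogger);
ICudaEngine* engine = runtime->deserializeCudaEngine(plan.data(), plan.size(), &pluginFactory);
IExecutionContext* context = engine->createExecutionContext();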
In the DeepStream user guide I could find samples for loading a Caffe model or a UFF model for inference, but as far as I can see there is no mention of using a prebuilt TensorRT engine directly. Is this currently not supported? What is the best way to reuse this prebuilt model?