nvinfer1::ICudaEngine *engine = builder_->buildEngineWithConfig(*network_, *builder_config_);
I used my custom plugin to build the engine, but it took a long time: about 90 seconds on Orin, compared to only 10 seconds on x86.
nvinfer1::IHostMemory *serialized_model = engine->serialize();
So I want to solve this problem by serializing the engine and saving it in advance, but serialize() returns a null pointer on TensorRT 8.5.
JetPack 5.0.2 GA
Our model was trained in Caffe several years ago, and we use C++ to load the model and complete the deployment.
The following code works, but it takes a long time to build the engine.
Could you try the trtexec command with TensorRT 8?
There are lots of API changes between TensorRT 7 and TensorRT 8.
Testing it with trtexec can narrow down whether the issue comes from the implementation or from the library itself.
I use TensorRT 7 on an x86 machine and everything works well, but building the engine with TensorRT 8 on Orin takes a long time. I want to know why there is such a big gap between the two machines; could you tell me why?
Our model uses a lot of custom plugins, and we have adapted them to the IPluginV2 interface in TensorRT 8. We were able to generate the engine successfully, but the build time has increased too much. We have never used trtexec to generate an engine. Considering the time cost, can we solve our problem with the TensorRT API (engine->serialize())?
I tried it, but it kept reporting an error while loading the custom plugin. Although the plugin has already been adapted to TensorRT 8, trtexec cannot load it directly yet.
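On the plugin load error: trtexec can load external plugin libraries with the --plugins option, which may let it find your custom plugin creators. A sketch of the command, assuming a Caffe model; every path below is a placeholder for your own files:

```shell
# Build and time the engine with trtexec, loading the custom plugin .so
# explicitly so its creators are registered before parsing.
trtexec --deploy=model.prototxt \
        --model=model.caffemodel \
        --output=prob \
        --plugins=./libmy_plugins.so \
        --saveEngine=model.engine
```

--plugins can be repeated once per library. If the creators are still not found, check that the library registers them when loaded (for example via REGISTER_TENSORRT_PLUGIN), since trtexec relies on the plugin registry rather than any application-side registration code.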