TRT8 serialize() returns nullptr

nvinfer1::ICudaEngine *engine = builder_->buildEngineWithConfig(*network_, *builder_config_);
I used my custom plugin to build the engine, but it took a long time: about 90 seconds, compared to only 10 seconds on x86.

nvinfer1::IHostMemory *serialized_model = engine->serialize();
So I want to solve this problem by serializing the engine and saving it in advance, but serialize() returns a null pointer on TRT 8.5.

Please help.

Hi,

Is the engine successfully built?
If the engine is NULL, serialization might fail and return a null pointer.
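
As a quick sketch (reusing the builder_, network_, and builder_config_ objects from your code), you can check both pointers before writing anything to disk:

```cpp
#include "NvInfer.h"

// Sketch: check both the engine and the serialized blob.
// builder_, network_, and builder_config_ are the objects from the
// original post, assumed to be initialized elsewhere.
void buildAndCheck() {
  nvinfer1::ICudaEngine *engine =
      builder_->buildEngineWithConfig(*network_, *builder_config_);
  if (engine == nullptr) {
    // Build failed; the ILogger output should say why.
    return;
  }
  nvinfer1::IHostMemory *serialized_model = engine->serialize();
  if (serialized_model == nullptr) {
    // Serialization failed even though the engine was built.
    return;
  }
}
```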

Thanks.

I checked, and it was successfully built, so I want to know why serialize() still returns nullptr.

Hi,

Which JetPack version do you use?
For JetPack 5, the API should look like this:
https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_ONNX_model

Thanks.

JetPack 5.0.2 GA
Our model was trained in Caffe several years ago, and we use C++ to load the model and complete the deployment.
The following code works, but building the engine takes a lot of time.

  nvinfer1::ICudaEngine *engine =
      builder_->buildEngineWithConfig(*network_, *builder_config_);
  context_ = engine->createExecutionContext();

So I want to serialize the model and load it next time to skip the build stage:

  nvinfer1::ICudaEngine *engine =
      builder_->buildEngineWithConfig(*network_, *builder_config_);

  // generate custom.engine
  nvinfer1::IHostMemory *serialized_model(engine->serialize());
  std::ofstream f("custom.engine", std::ios::binary);
  f.write(reinterpret_cast<const char *>(serialized_model->data()),
          serialized_model->size());

I want to know: can this work without the trtexec tool?
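
The matching load path I have in mind is roughly this (a sketch; gLogger stands in for an ILogger implementation, and all custom plugin creators must be registered before deserialization):

```cpp
#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

// Sketch: read custom.engine back and deserialize it with the
// TensorRT 8 runtime API. gLogger is an assumed ILogger
// implementation supplied by the caller.
void loadEngine(nvinfer1::ILogger &gLogger) {
  std::ifstream f("custom.engine", std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                         std::istreambuf_iterator<char>());

  nvinfer1::IRuntime *runtime = nvinfer1::createInferRuntime(gLogger);
  nvinfer1::ICudaEngine *engine =
      runtime->deserializeCudaEngine(blob.data(), blob.size());
  nvinfer1::IExecutionContext *context = engine->createExecutionContext();
  (void)context;  // ready for inference
}
```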

Hi,

Did you try to serialize the model with the TensorRT API or with trtexec?
Does it work with the commands below?

$ /usr/src/tensorrt/bin/trtexec --saveEngine=[file] ...
$ /usr/src/tensorrt/bin/trtexec --loadEngine=[file] ...

Thanks.

  nvinfer1::IHostMemory *serialized_model(engine->serialize());

I have tried with the TensorRT 8.4 API, which returned nullptr, but TensorRT 7.4 works fine. Is this a bug in TRT 8?

Hi,

Could you try the trtexec command with TensorRT 8?

There are lots of API changes between TensorRT 7 and TensorRT 8.
Testing with trtexec can narrow down whether the issue comes from your implementation or from the library itself.

Thanks.

Hi,

Just want to confirm the environment first.

Which device are you using for the TensorRT 7?
We don’t have TensorRT 7 for Orin.

Thanks.

I use TensorRT 7 on an x86 machine and everything works well, but building the engine with TensorRT 8 on Orin takes a long time. I want to know why there is such a big gap between these two machines. Could you tell me why?

Is the API change the root cause of the TRT 8 serialization failure? Has there been a change in compatibility?

Our model uses a lot of custom plugins, and we have adapted them to IPluginV2 for TensorRT 8. We were able to successfully generate the engine, but the loading time has grown too much. We have never used trtexec to generate an engine. Considering the time cost, can we solve our problem by using the TRT API (engine->serialize())?

Hi,

Have you tried to serialize and deserialize with trtexec on Orin+TensorRT 8?
We would like to know if it is working first.

Thanks.

I tried, and it kept reporting an error loading the custom plugin. Although the plugin has already been adapted to TRT 8, trtexec cannot load it directly yet.
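
For context, our plugins are registered roughly like this (a sketch; MyPluginCreator is a placeholder for our IPluginCreator class, and gLogger is an assumed ILogger instance). As far as I know, trtexec can also pick up an external plugin library via its --plugins option, but that did not work for us yet:

```cpp
#include "NvInfer.h"
#include "NvInferPlugin.h"

// Sketch: make the custom plugin visible to the deserializer.
// MyPluginCreator is a placeholder name for our IPluginCreator class.
REGISTER_TENSORRT_PLUGIN(MyPluginCreator);

void initPlugins(nvinfer1::ILogger *gLogger) {
  // Also register TensorRT's built-in plugins once at startup.
  initLibNvInferPlugins(gLogger, "");
}
```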

Hi,

Thanks for the testing.

We need to reproduce this issue in our environment to gather more info.
Could you share a reproducible source and the model?

Thanks.
