Is it safe to deallocate nvinfer1::IRuntime after creating an nvinfer1::ICudaEngine but before running inference with that ICudaEngine?

I have structured my application such that I create an IRuntime as follows:

std::unique_ptr<IRuntime> runtime{createInferRuntime(m_logger)};

I then use the IRuntime to deserialize the model and create an ICudaEngine:

m_engine = std::unique_ptr<nvinfer1::ICudaEngine>(runtime->deserializeCudaEngine(buffer.data(), buffer.size()));

Due to the way my application is structured, the runtime variable falls out of scope and is deallocated before I have a chance to run inference with m_engine. Is this safe, or must the IRuntime instance remain alive until inference is complete?


No, this is not safe. The IRuntime object in TensorRT must not be deallocated or destroyed before you are finished using any engine that was deserialized with that runtime. Keep the IRuntime alive for at least as long as the ICudaEngine it produced.

Thank you.