TensorRT: Input buffers content preservation

Description

The documentation for IExecutionContext::enqueue() describes an optional CUDA event that is signaled once new data may be written into the input buffer. However, it is not clear whether the input buffer contents are preserved throughout inference, or whether they may be overwritten with layer activation data.

So the question is: can I reuse the same device input buffer for multiple networks (running either sequentially or in parallel)?
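To make the scenario concrete, here is a minimal C++ sketch of what "sharing one input buffer between two networks" would look like. This is an illustration of the question, not a confirmed-safe pattern; the engine pointers, buffer sizes, and binding order (binding 0 = input, binding 1 = output) are all hypothetical placeholders, and it assumes TensorRT 8.x where enqueueV2() takes the bindings array, a stream, and an optional input-consumed event:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Sketch only: engineA/engineB are assumed to be already-built engines,
// and the sizes and binding indices are hypothetical.
void runBothNetworks(nvinfer1::ICudaEngine* engineA,
                     nvinfer1::ICudaEngine* engineB,
                     size_t inputBytes, size_t outBytesA, size_t outBytesB)
{
    void* sharedInput = nullptr;  // one device buffer, fed to both networks
    void* outputA = nullptr;
    void* outputB = nullptr;
    cudaMalloc(&sharedInput, inputBytes);
    cudaMalloc(&outputA, outBytesA);
    cudaMalloc(&outputB, outBytesB);

    nvinfer1::IExecutionContext* ctxA = engineA->createExecutionContext();
    nvinfer1::IExecutionContext* ctxB = engineB->createExecutionContext();

    cudaStream_t streamA, streamB;
    cudaStreamCreate(&streamA);
    cudaStreamCreate(&streamB);

    // ... copy input data into sharedInput ...

    // Assumes binding 0 = input, binding 1 = output for both engines.
    void* bindingsA[] = {sharedInput, outputA};
    void* bindingsB[] = {sharedInput, outputB};

    // Both enqueues read from the same device input buffer; this is only
    // safe if TensorRT never writes to input bindings during inference,
    // which is exactly the guarantee being asked about.
    ctxA->enqueueV2(bindingsA, streamA, nullptr);
    ctxB->enqueueV2(bindingsB, streamB, nullptr);

    cudaStreamSynchronize(streamA);
    cudaStreamSynchronize(streamB);
    // ... cleanup of contexts, streams, and buffers omitted ...
}
```

If TensorRT treated input bindings as scratch space, the second enqueue (or a concurrent one) could read corrupted data, which is why the API-level guarantee matters here.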

Environment

TensorRT Version: 8.0

Hi @sergeev917,

Sorry for the delayed response.

Yes, the same device input buffer can be reused. TensorRT will not overwrite the input buffer contents during inference.

Thank you.

Hi @spolisetty

TRT will not overwrite the input buffer content.

Thank you! We’ve observed the buffers being preserved, but wanted to make sure that this is part of the API guarantees.
