TensorRT engine inference in an existing CUDA context

Description

Is it possible to run inference with a TensorRT engine inside an existing CUDA context (one created by the application before TensorRT is initialized), rather than letting TensorRT use or create its own?
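For reference, here is a minimal sketch of what I am trying to do. TensorRT is expected to operate in whatever CUDA context is current on the calling thread, so the idea is to make the application's pre-existing context current (via the driver API) before deserializing the engine and enqueueing inference. This is an illustrative sketch, not a verified working program; the engine path, buffer sizes, and the assumption that the context was created elsewhere with `cuCtxCreate` are placeholders.

```cpp
#include <cuda.h>              // CUDA driver API (cuCtxSetCurrent)
#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

// Hypothetical entry point: 'existingCtx' is a CUDA context the
// application created earlier (e.g. with cuCtxCreate) and owns.
void runInference(CUcontext existingCtx, const char* enginePath) {
    // Make the application's context current on this thread BEFORE
    // touching TensorRT, so all TensorRT allocations and kernels
    // land in that context.
    cuCtxSetCurrent(existingCtx);

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);

    // Load the serialized engine from disk (path is a placeholder).
    std::ifstream file(enginePath, std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> engineData(size);
    file.read(engineData.data(), size);

    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(engineData.data(), size);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Device buffers for each binding; sizes here are placeholders
    // and must match the engine's actual binding dimensions.
    void* bindings[2];
    cudaMalloc(&bindings[0], 1 << 20);  // input
    cudaMalloc(&bindings[1], 1 << 20);  // output

    // A stream created while 'existingCtx' is current belongs to it,
    // so inference runs entirely inside the existing context.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);

    // Cleanup (TensorRT 7.x uses destroy(); newer versions use delete).
    cudaStreamDestroy(stream);
    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
}
```

My understanding is that TensorRT does not create a context of its own here and simply uses the thread's current one, but I would like confirmation that this pattern is supported, and whether anything else (e.g. per-thread execution contexts) needs special handling.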

Environment

TensorRT Version: 7.1
CUDA Version: 11.0