Is creating the TensorRT execution context a must for running inference?

Hi,

I am working on a classification model. I first converted my TensorFlow model into a TensorRT inference graph with trt.create_inference_graph(). In the inference part, I did not create a TensorRT execution engine (e.g. engine.create_execution_context()); I just use something like:
    with tf.device('GPU'):
        tf.import_graph_def(inference_graph, input_map=input_map, return_elements=return_elements)
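
For context, my full pipeline looks roughly like the sketch below (assuming TensorFlow 1.x with the contrib TF-TRT module; the file name "frozen_model.pb" and the node names "input"/"scores" are just placeholders, not my real names):

    # Rough sketch of the pipeline (assumptions: TF 1.x, tensorflow.contrib.tensorrt,
    # placeholder file/node names "frozen_model.pb", "input", "scores").
    import numpy as np
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt

    # Load the frozen TensorFlow classification graph.
    frozen_graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        frozen_graph_def.ParseFromString(f.read())

    # Step 1: convert supported subgraphs into TF-TRT engine nodes.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=["scores"],
        max_batch_size=8,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP16")

    # Step 2: run inference through a normal TensorFlow session; no explicit
    # TensorRT engine or execution context is created in user code.
    with tf.Graph().as_default():
        with tf.device("/GPU:0"):
            outputs = tf.import_graph_def(
                trt_graph, return_elements=["scores:0"])
        with tf.Session() as sess:
            batch = np.random.rand(8, 224, 224, 3).astype(np.float32)
            scores = sess.run(outputs[0],
                              feed_dict={"import/input:0": batch})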

Does this mean that the inference is running on the normal TensorFlow engine? Are the inference results or the computation speed affected by the absence of a TensorRT execution engine?

Thank you for your help!!