how to generate a '.engine' file


I have successfully converted 'ssd_mobilenet_v2_coco.pb' to '.uff'. Now, when I run this Python script, it executes without any errors, but it does not generate any '.engine' file.

please help.

import uff
import tensorrt as trt
import graphsurgeon as gs

uff_model_path = "/home/tg002/Tensorflow-1/models/research/object_detection/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.uff"
engine_path = "/home/tg002/Tensorflow-1/models/research/object_detection/ssd_mobilenet_v2_coco_2018_03_29/ssd_mobilenet_v2_bs_1.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

trt_runtime = trt.Runtime(TRT_LOGGER)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 30
    builder.fp16_mode = True
    builder.max_batch_size = 1
    parser.register_input("Input", (3, 300, 300))
    parser.parse(uff_model_path, network)

    print("Building TensorRT engine, this may take a few minutes...")
    trt_engine = builder.build_cuda_engine(network)


You're just missing the extra step of serializing the engine and writing it to a file:

with open("model.engine", "wb") as f:
    f.write(trt_engine.serialize())
Then you can later use the engine by reading it in similarly with something like:

with open("model.engine", "rb") as f:
    engine = trt_runtime.deserialize_cuda_engine(f.read())

You can also see an example of this in tensorrt/samples/python/yolov3_onnx/ for reference.

NVIDIA Enterprise Support