Description
Building a TensorRT engine from an ONNX export of efficientnet_b2 fails: `build_cuda_engine` appears to return `None`, and calling `create_execution_context` on the result raises an `AttributeError`.
Environment
TensorRT Version: 7.2.2.3 (installed in virtualenv with nvidia-tensorrt)
GPU Type: 2070
Nvidia Driver Version: 460
CUDA Version: 11.0
CUDNN Version: 8.0.5.39-1
Operating System + Version: Ubuntu 18.04
Python Version: 3.8.8
PyTorch Version: 1.7.1
Relevant Files
I am attempting to export efficientnet_b2 from this repo and run it as a TRT model:
I export like:
```python
import torch

EXPORT = True
output = "my_tf_3.onnx"

if EXPORT:
    import geffnet
    import onnx

    model = geffnet.create_model(
        'efficientnet_b2',
        num_classes=3,
        in_chans=3,
        pretrained=False,
        checkpoint_path="/home/luke/projects/traffic_light_classifier/tf_3.pt",
        exportable=True)
    model.eval()
    DEVICE = torch.device("cuda")
    model.to(DEVICE)
    example_input = torch.randn(1, 3, 256, 256).to(DEVICE)
    model(example_input)
    input_names = ["input0"]
    output_names = ["output0"]
    torch.onnx.export(model, example_input, output, verbose=False,
                      input_names=input_names, output_names=output_names,
                      export_params=True)
    print("==> Loading and checking exported model from '{}'".format(output))
    onnx_model = onnx.load(output)
    onnx.checker.check_model(onnx_model)  # assuming this throws on error
    print("==> Passed")
```
A netron diagram of the exported ONNX: (image attached)
The exported ONNX file: my_tf_3.onnx (29.3 MB)
I try to run as a TRT engine:
```python
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import tensorrt as trt

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)
    builder.max_workspace_size = 1 << 30
    builder.max_batch_size = 1
    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')
    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")
    return engine, context

engine, context = build_engine("/home/luke/projects/traffic_light_classifier/my_tf_3.onnx")
```
And I get the error:
```
     30     print('Building an engine...')
     31     engine = builder.build_cuda_engine(network)
---> 32     context = engine.create_execution_context()
     33     print("Completed creating Engine")
     34
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
```
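If I'm reading the traceback right, `build_cuda_engine` returned `None` (i.e. the engine build itself failed), and the `AttributeError` is just Python complaining about the method call on `None`. A minimal reproduction of that last step:

```python
# build_cuda_engine returns None when the engine build fails;
# calling a method on the result then raises exactly the error above
engine = None
try:
    engine.create_execution_context()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'create_execution_context'
```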
I’m not really sure how to begin debugging this. Any ideas?
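One thing I was planning to try next, based on the TensorRT 7 Python API docs: check the parser's return value and error list, and create the network with the explicit-batch flag, which the ONNX parser expects in TRT 7. A rough, untested sketch of that version of `build_engine` (I'm not certain this is right):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_engine(onnx_file_path):
    builder = trt.Builder(TRT_LOGGER)
    # The ONNX parser requires an explicit-batch network in TensorRT 7
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    builder.max_workspace_size = 1 << 30

    with open(onnx_file_path, 'rb') as model:
        # parse() returns False on failure; surface the parser errors
        # instead of continuing with a half-built network
        if not parser.parse(model.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")

    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError("Engine build failed -- see log output above")
    return engine, engine.create_execution_context()
```

At least this way the failure should show up as a concrete parser error (or a build log message) rather than a `NoneType` attribute error downstream.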