TensorRT issues

Description

I tried running the code below with my ONNX model and got the following errors.

Code

import time

import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # initializes the CUDA context

model_path = "bottom.onnx"
input_size = 32

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(model_path):
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network() as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 20
        builder.max_batch_size = 1
        with open(model_path, "rb") as f:
            if not parser.parse(f.read()):
                # Surface parser errors instead of silently building a broken network
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

def alloc_buf(engine):
    h_in_size = trt.volume(engine.get_binding_shape(0))
    h_out_size = trt.volume(engine.get_binding_shape(1))
    h_in_dtype = trt.nptype(engine.get_binding_dtype(0))
    h_out_dtype = trt.nptype(engine.get_binding_dtype(1))
    in_cpu = cuda.pagelocked_empty(h_in_size, h_in_dtype)
    out_cpu = cuda.pagelocked_empty(h_out_size, h_out_dtype)
    in_gpu = cuda.mem_alloc(in_cpu.nbytes)
    out_gpu = cuda.mem_alloc(out_cpu.nbytes)
    stream = cuda.Stream()
    return in_cpu, out_cpu, in_gpu, out_gpu, stream
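For reference, `trt.volume` just computes the flat element count of a binding shape, so the host-buffer sizing in `alloc_buf` can be sketched with plain NumPy (the shape below is a hypothetical binding shape for a 1x3x32x32 input, not one read from the model):

```python
import numpy as np

shape = (1, 3, 32, 32)                        # hypothetical input binding shape
size = int(np.prod(shape))                    # what trt.volume(shape) computes: 3072
host_buf = np.empty(size, dtype=np.float32)   # stand-in for cuda.pagelocked_empty
print(host_buf.nbytes)                        # 3072 elements * 4 bytes = 12288
```

`cuda.mem_alloc(in_cpu.nbytes)` then reserves the same number of bytes on the device.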

def inference(engine, context, inputs, out_cpu, in_gpu, out_gpu, stream):
    cuda.memcpy_htod(in_gpu, inputs)
    context.execute(1, [int(in_gpu), int(out_gpu)])
    cuda.memcpy_dtoh(out_cpu, out_gpu)
    return out_cpu

if __name__ == "__main__":
    inputs = np.random.random((1, 3, input_size, input_size)).astype(np.float32)
    engine = build_engine(model_path)
    context = engine.create_execution_context()
    for _ in range(10):
        t1 = time.time()
        in_cpu, out_cpu, in_gpu, out_gpu, stream = alloc_buf(engine)
        res = inference(engine, context, inputs.reshape(-1), out_cpu, in_gpu, out_gpu, stream)
        print(res)
        print("cost time: ", time.time() - t1)

Issue

Unsupported ONNX data type: UINT8 (2)
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "onnx_trt.py", line 55, in <module>
    context = engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

Environment

TensorRT Version: 5.1.5
CUDA Version: 10.1
CUDNN Version: 8.0.3
Operating System + Version: Ubuntu 16.04

Hi @aquibalicool4,
Could you please share your ONNX model so that we can assist you better?

Thanks!

Here is the ONNX file

https://drive.google.com/file/d/1QThMeWxjm4WQSSflqzEEvLokz2qTVjRJ/view?usp=sharing

any update?

Hi @aquibalicool4,

This is a known issue, and the fix will be available in a future release.
In the meantime, I would still recommend upgrading to the latest TRT release and trying to parse your model again.