Inference of TensorRT model failed: Parameter check failed at: engine.cpp::enqueueV2::435, condition: !mEngine.getHasImplicitBatchDim()

I have an ONNX model and am trying to run TensorRT on a Jetson Nano with the ONNX parser. The model conversion completed and the engine was built, but when I run inference this error occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueueV2::435, condition: !mEngine.getHasImplicitBatchDim()
and the outputs are all zeros.

My JetPack is 4.3, with TensorRT 6.0.1.
When I move the same model and code to a PC with TensorRT 7.0.0, everything works fine.

My code is taken from the samples with small modifications, as follows:

import tensorrt as trt
import common as trt_common  # allocate_buffers / do_inference helpers from the TensorRT Python samples' common.py

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)
builder.max_workspace_size = 5 << 27
builder.max_batch_size = 1
builder.fp16_mode = False
with open(model_file, 'rb') as model:
    a = parser.parse(model.read())
print('getting engine')
engine = builder.build_cuda_engine(network)
with open(engine_path, 'wb') as f:
    f.write(engine.serialize())
with engine.create_execution_context() as context:
    inputs, outputs, bindings, stream = trt_common.allocate_buffers(engine)
    inputs[0].host = …
    trt_outputs = trt_common.do_inference_v2(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)

How can I fix this? Thank you.

Hi,

The error comes from how the batch dimension is defined when the network is created.
Please apply the following update and see if it helps:

flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flag)
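
In the posted build code, this means passing the explicit-batch flag before the parser is attached to the network. A minimal sketch, assuming the rest of the build flow stays as in the original post:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# EXPLICIT_BATCH makes the batch dimension part of the network definition itself,
# which is what enqueueV2 / execute_async_v2 expects at inference time.
flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flag)
parser = trt.OnnxParser(network, TRT_LOGGER)
# ... parse the ONNX file and build the engine as before ...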

Thanks.

Solved.
I should use do_inference instead of do_inference_v2.
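
For reference, the difference between the two helpers in the samples' common.py is the execution call: do_inference drives an implicit-batch engine through execute_async with a batch size, while do_inference_v2 drives an explicit-batch engine through execute_async_v2. A rough sketch of the relevant lines (the host/device copies and stream synchronization around them are the same):

# Implicit-batch engine (network created without the EXPLICIT_BATCH flag):
# do_inference passes a batch size to the context.
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)

# Explicit-batch engine (network created with the EXPLICIT_BATCH flag):
# do_inference_v2 uses the v2 call, which takes no batch size.
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)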

By the way, I hope JetPack supports TRT 7 soon; TRT 7 seems to support many more ONNX layers than 6. Thank you.

Hi,

TensorRT 7 will be available in our next JetPack release.
Stay tuned.

Hi,

TensorRT 7.1 is available in our latest JetPack 4.4 DP.

Thanks.