How to correctly set up bindings for execute_async_v3()?

There is an example, TensorRT/demo/BERT/inference.ipynb (release/10.0 branch of NVIDIA/TensorRT on GitHub), with this:

bindings = [int(d_inputs[i]) for i in range(3)] + [int(d_output)]
for i in range(engine.num_io_tensors):
    context.set_tensor_address(engine.get_tensor_name(i), bindings[i])

context.execute_async_v3(stream_handle=stream.handle)

But when I try to set up the bindings the same way, [int(d_input)] + [int(d_output)],
I get this error:

TypeError: set_tensor_address(): incompatible function arguments. The following argument types are supported:
1. (self: tensorrt.tensorrt.IExecutionContext, name: str, memory: int) -> bool

I used execute_async_v2() like this:

bindings = [int(d_input)] + [int(d_output)]
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)

and it worked. How can I make it work with execute_async_v3()?

This worked for me:

context.set_tensor_address(engine.get_tensor_name(0), int(d_input))
context.set_tensor_address(engine.get_tensor_name(1), int(d_output))

Now you have set up the input and output buffers separately.
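The same pattern generalizes to any number of I/O tensors: loop over engine.num_io_tensors and pass each tensor's name together with its device pointer cast to int. Here is a minimal sketch of that loop, runnable without a GPU by standing in pure-Python mocks for the TensorRT engine and context (the mock class names and placeholder pointer values are assumptions for illustration; in real code the addresses come from pycuda, e.g. int(cuda.mem_alloc(nbytes)), and you would finish with context.execute_async_v3(stream_handle=stream.handle)):

```python
class MockEngine:
    """Stands in for tensorrt.ICudaEngine (mock for illustration only)."""
    def __init__(self, names):
        self._names = names

    @property
    def num_io_tensors(self):
        return len(self._names)

    def get_tensor_name(self, i):
        return self._names[i]


class MockContext:
    """Stands in for tensorrt.IExecutionContext (mock for illustration only)."""
    def __init__(self):
        self.addresses = {}

    def set_tensor_address(self, name, memory):
        # set_tensor_address() requires name: str and memory: int
        # (a device pointer) -- passing anything else raises the
        # TypeError shown above.
        assert isinstance(name, str) and isinstance(memory, int)
        self.addresses[name] = memory
        return True


engine = MockEngine(["input", "output"])
context = MockContext()

# Key difference from execute_async_v2(): instead of a positional
# bindings list, each I/O tensor's address is registered by name.
d_input, d_output = 0x7000, 0x8000   # placeholder "device pointers"
buffers = [d_input, d_output]
for i in range(engine.num_io_tensors):
    context.set_tensor_address(engine.get_tensor_name(i), int(buffers[i]))
```

This is why the per-tensor calls in the answer above work: v3 looks up each buffer by tensor name at execution time, so the order of the calls no longer has to match a binding index.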
