How to feed multiple input images (a batch of images) to NVIDIA TensorRT for inference?

Hi all.

Environment

I use the TLT container version 2, which supports TensorRT and PyCUDA:
TensorRT version: 7.0.0-1+cuda10.0
GPU Type: 1080 Ti

I use resnet34_peoplenet_int8.trt, which is obtained from the PeopleNet model in TLT.
When I run inference for a single image, this part of the code works perfectly:

np.copyto(inputs[0].host, img.ravel())

but when I run it for multiple images in a list (batch_size > 1):

np.copyto(inputs[0].host, img_list.ravel())

it gives me an error.

I want to know how to feed multiple images into inputs[0].host in the inference part of TensorRT.
Which steps should be taken?

Any help will be appreciated.
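For what it's worth, the error most likely comes from calling `.ravel()` on a plain Python list, which has no such method. Below is a minimal sketch of flattening a batch before copying it into the host buffer; the shapes here are toy placeholders, not PeopleNet's real input dimensions:

```python
import numpy as np

# Placeholder input shape (CHW) and batch size; the real PeopleNet
# engine uses different dimensions.
C, H, W = 3, 4, 5
batch_size = 2

# A list of preprocessed images, each a (C, H, W) float32 array.
img_list = [np.random.rand(C, H, W).astype(np.float32) for _ in range(batch_size)]

# A plain Python list has no .ravel(); stack it into one array first.
batch = np.stack(img_list)   # shape (batch_size, C, H, W)
flat = batch.ravel()         # 1-D, length batch_size * C * H * W

# flat can then be copied into the pinned host buffer, e.g.:
# np.copyto(inputs[0].host[: flat.size], flat)
```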

Hi,
The links below might be useful for you:
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#thread-safety

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
For multi-threading/streaming, we suggest you use DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.

Thanks!

Thanks a lot @NVES, I saw that document before.
My question is about how to feed multiple input images to the TensorRT engine and run inference on a batch of images (>1) in Python.

I don't understand how to merge images into a batch as input to the model.
Is it correct to do the following?
Concatenate, for example, 2 images (batch_size = 2), like this:

concat_img = np.concatenate([img,img])

and then feed them to inputs[0].host as below:

inputs[0].host = concat_img.reshape(-1)
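That layout does work: with the batch dimension first, `np.concatenate` along axis 0 and `np.stack` produce the same flattened buffer. A quick sanity check (my own sketch, using a toy image shape):

```python
import numpy as np

img = np.arange(24, dtype=np.float32).reshape(3, 2, 4)  # toy (C, H, W) image

concat_img = np.concatenate([img, img])  # shape (6, 2, 4)
stacked = np.stack([img, img])           # shape (2, 3, 2, 4)

# Identical flat layout, so either form can fill inputs[0].host.
assert np.array_equal(concat_img.reshape(-1), stacked.ravel())
```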

I will be grateful if anyone can help me figure this out.

Hi @MediaJ,

You need to tell the TensorRT runtime the batch size of the input. For the Python API, it is
https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/ExecutionContext.html#tensorrt.IExecutionContext.execute.

And here is the Python sample: https://github.com/NVIDIA/TensorRT/blob/master/samples/python/common.py#L147.
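To tie the two links together: in the common.py pattern the pinned host buffer is allocated once for `engine.max_batch_size`, you copy your batch into the front of it, and then pass the actual batch size to the execute call. A rough sketch of the buffer arithmetic (the TensorRT/PyCUDA calls are left as comments since they need a GPU; all sizes here are made-up placeholders, not values read from a real engine):

```python
import numpy as np

# Placeholder engine parameters (assumptions):
#   max_batch_size ~ engine.max_batch_size
#   input_vol      ~ trt.volume(engine.get_binding_shape(0))
max_batch_size = 4
input_vol = 3 * 4 * 5

# allocate_buffers() in common.py sizes the pinned host buffer for the
# maximum batch, so a 2-image batch fills only part of it.
host_buffer = np.zeros(max_batch_size * input_vol, dtype=np.float32)

batch = np.stack([np.random.rand(3, 4, 5).astype(np.float32) for _ in range(2)])
flat = batch.ravel()
host_buffer[: flat.size] = flat  # the tail of the buffer stays unused

# The inference call then passes the batch size explicitly
# (implicit-batch engines, TensorRT 7 Python API):
#   context.execute_async(batch_size=2, bindings=bindings,
#                         stream_handle=stream.handle)
```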

Thank you.

hello @MediaJ ,
Did you find out how to do this? I am looking for an answer to the same question. Thanks!