How to feed multiple input images (a batch of images) to a TLT PeopleNet TensorRT model at inference?

Hi all.

Environment

I use the TLT container version 2, which supports TensorRT and PyCUDA:
TensorRT version: 7.0.0-1+cuda10.0
GPU type: 1080 Ti

I use resnet34_peoplenet_int8.trt, obtained by running tlt-converter on the PeopleNet model.
When I run inference on a single image, this part of my Python code works perfectly:

np.copyto(inputs[0].host, img.ravel())

but when I run it on a list of multiple images (batch_size > 1):

np.copyto(inputs[0].host, img_list.ravel())

it raises an error.

How should I feed multiple images into inputs[0].host at inference time?
Which steps should be taken?

Any help will be appreciated.

Please feed your image batches to “img”.


Thanks @Morganh, I misunderstood how to merge images into a batch as input to the model.
Is this the right way to do it?
Concatenate, for example, 2 images (batch_size = 2), like this:

concat_img = np.concatenate([img,img])

and then feed them to inputs[0].host as below:

inputs[0].host = concat_img.reshape(-1)

Can you try

concat_img = np.concatenate([img,img])
np.copyto(inputs[0].host, concat_img.ravel())
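For reference, here is a numpy-only sketch of what this suggestion amounts to. The 3 × 544 × 960 input shape is an assumption about PeopleNet; verify it against your own engine's binding shape:

```python
import numpy as np

# Assumed PeopleNet input shape (C x H x W) -- check your engine's binding shape.
C, H, W = 3, 544, 960
batch_size = 2

# Host buffer sized for the whole batch; allocate_buffers must create it this large.
host_buffer = np.empty(batch_size * C * H * W, dtype=np.float32)

# Two preprocessed images, already in CHW float32 layout.
img1 = np.random.rand(C, H, W).astype(np.float32)
img2 = np.random.rand(C, H, W).astype(np.float32)

# Stack along a new batch axis, then flatten into the host buffer.
batch = np.stack([img1, img2])            # shape (2, 3, 544, 960)
np.copyto(host_buffer, batch.ravel())

# Each image now occupies its own contiguous slice of the flat buffer.
assert np.array_equal(host_buffer[:C * H * W], img1.ravel())
assert np.array_equal(host_buffer[C * H * W:], img2.ravel())
```

The key point is that the host buffer must already be allocated for the full batch; `np.copyto` never grows the destination.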


I did this and it works, but the value of inputs[0].host is the same for 1 image and for 2 different images.
For a single img1:

" img1 shape = " (1566720,)
"inputs[0].host_value = " [Host:
[0.92156863 0.9411765 0.9647059 … 0.02745098 0.02745098 0.03137255]

For a single img2:

" img2 shape = " (1566720,)
"inputs[0].host_value = " [Host:
[0.9607843 0.9607843 0.9607843 … 0.5921569 0.5803922 0.5647059]

For the concatenation [img1, img2]:

"concate img shape = " (3133440,)
"inputs[0].host_value = " [Host:
[0.92156863 0.9411765 0.9647059 … 0.02745098 0.02745098 0.03137255]

In the concatenation case the result is the same as when we load only img1. Why is that?

When you call allocate_buffers, make sure the input buffers have enough size.
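A minimal sketch of the sizing logic behind this advice. The trt/pycuda calls are shown as comments since they need a GPU, and they assume a TensorRT 7 implicit-batch engine; the 3 × 544 × 960 shape is an assumption about PeopleNet's input:

```python
import numpy as np

# Inside allocate_buffers, each binding's host buffer is typically sized as
# (commented out because it needs an engine and a GPU):
#
#   size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
#   host_mem = cuda.pagelocked_empty(size, trt.nptype(engine.get_binding_dtype(binding)))
#   device_mem = cuda.mem_alloc(host_mem.nbytes)

# The same arithmetic in plain Python:
def binding_size(shape, max_batch_size):
    """Number of elements the host buffer must hold for the whole batch."""
    return int(np.prod(shape)) * max_batch_size

single = binding_size((3, 544, 960), 1)   # one image
double = binding_size((3, 544, 960), 2)   # two images
```

If the engine was built with max_batch_size = 1, the buffer only holds one image, and copying a 2-image batch into it silently keeps just the first image's worth of data, which would explain the behavior above.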


I checked the size with this:

batch_size = 2
size = trt.volume(trt_engine.get_binding_shape(binding)) * batch_size
print('size = ', size) → 3133440 (which fits 2 images)

Not sure what happened. I suggest you debug more.
Also, you can consider another way: in main, glob the images folder and then run inference.


Thanks a lot @Morganh.
OK, I will debug more.
Can I have your email so I can send you my code?
I would be grateful.

I still recommend you debug by yourself. Thanks a lot.


Yes, sure. I just wanted to share more details of my code that might help you.
Thanks for all your help; if I face more problems I will share them here.


When using the above-mentioned code, I get the following error:

Exception has occurred: ValueError
could not broadcast input array from shape (921600) into shape (1566720)
File “/peoplenet_trt/inference.py”, line 131, in
np.copyto(inputs[0].host, image.ravel())
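The two element counts in that error factor into plausible shapes: 1566720 = 3 × 544 × 960 matches the batch-1 input buffer seen earlier in this thread, while 921600 could be, for example, a 640 × 480 × 3 frame that was never resized to the network resolution. A numpy-only sketch reproducing the mismatch (the frame shape is an assumption; verify against your own preprocessing):

```python
import numpy as np

# Shape arithmetic from the error message:
assert 3 * 544 * 960 == 1566720   # assumed PeopleNet input buffer (C x H x W)
assert 3 * 480 * 640 == 921600    # e.g. an un-resized 640x480 RGB frame

# A frame at the wrong resolution cannot be copied into the buffer:
buffer = np.empty(1566720, dtype=np.float32)
frame = np.zeros((480, 640, 3), dtype=np.float32)
try:
    np.copyto(buffer, frame.ravel())
except ValueError:
    pass  # same "could not broadcast" error as in the traceback

# Resizing to 960x544 first (e.g. cv2.resize(frame, (960, 544)) -- note
# cv2 takes (width, height)) makes the element counts match:
resized = np.zeros((544, 960, 3), dtype=np.float32)  # stand-in for the resized frame
np.copyto(buffer, resized.transpose(2, 0, 1).ravel())  # HWC -> CHW, then flatten
```

So the likely fix is to resize and preprocess each frame to the engine's input resolution before the copyto, exactly as in the working single-image path.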