I am using the TLT container version 2, which supports TensorRT and PyCUDA:
TensorRT version: 7.0.0-1+cuda10.0
GPU type: 1080 Ti
I use resnet34_peoplenet_int8.trt, which was generated with tlt-converter from the PeopleNet model.
When I run inference on a single image, this part of my Python code works perfectly:
np.copyto(inputs[0].host, img.ravel())
But when I run it on multiple images in a list (batch_size > 1):
np.copyto(inputs[0].host, img_list.ravel())
it gives me an error.
How should I feed multiple images into inputs[0].host for inference? Which steps should be taken?
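For context, this is roughly what I imagine the batched input should look like (only a sketch: inputs, outputs, bindings and context follow the standard allocate_buffers/do_inference pattern from the TensorRT Python samples, img_list is my list of images, and I assume the engine was built with max_batch_size >= len(img_list) and that every image is already preprocessed to the network's CHW float32 input shape):

import numpy as np
import pycuda.driver as cuda

# stack the preprocessed images into one contiguous array of shape (N, C, H, W)
batch = np.stack(img_list, axis=0).astype(np.float32)

# copy the flattened batch into the page-locked input buffer
# (the buffer is sized for max_batch_size, so fill only the first batch.size elements)
np.copyto(inputs[0].host[:batch.size], batch.ravel())

# transfer host -> device and run the implicit-batch engine with the actual batch size
cuda.memcpy_htod(inputs[0].device, inputs[0].host)
context.execute(batch_size=len(img_list), bindings=bindings)

# copy the results back to the host
for out in outputs:
    cuda.memcpy_dtoh(out.host, out.device)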
Thanks @Morganh, I misunderstood how to merge images into a batch as input to the model.
Is it correct to do it like this?
Concatenate, for example, 2 images (batch_size=2), like this:
When using the above-mentioned code I get the following error:
Exception has occurred: ValueError
could not broadcast input array from shape (921600) into shape (1566720)
File “/peoplenet_trt/inference.py”, line 131, in
np.copyto(inputs[0].host, image.ravel())
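For reference, this is the sanity check I would run to see where the size mismatch comes from (a sketch: engine, inputs and image are the objects from my script, and I assume inputs[0].host was allocated as trt.volume(binding_shape) * engine.max_batch_size, as in the TensorRT Python samples; 3x544x960, PeopleNet's input size, gives exactly 1566720 elements):

import tensorrt as trt

# how many elements the engine expects per image vs. what I am copying in
binding_shape = engine.get_binding_shape(0)              # e.g. (3, 544, 960) for PeopleNet
print("elements per image:", trt.volume(binding_shape))  # 1566720 for 3x544x960
print("host buffer size  :", inputs[0].host.size)        # trt.volume(...) * max_batch_size
print("my image size     :", image.ravel().size)         # 921600 in the error above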