I use the TLT container version 2, which supports TensorRT and PyCUDA:
TensorRT version: 7.0.0-1+cuda10.0; GPU type: 1080 Ti
I use resnet34_peoplenet_int8.trt, which was obtained from the PeopleNet model in TLT.
When I run inference on a single image, this part of the code works perfectly:
np.copyto(inputs[0].host, img.ravel())
but when I run it on multiple images in a list (batch_size > 1):
np.copyto(inputs[0].host, img_list.ravel())
it gives me an error.
I want to know how to feed multiple images into inputs[0].host in the TensorRT inference step. What steps should be taken?
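For reference, this is the buffer-allocation and inference pattern I am working from, plus the batched copy I am attempting. It is a minimal sketch based on the common.py helpers from the TensorRT Python samples, assuming an implicit-batch engine (as produced for TLT models in TensorRT 7) and buffers sized for engine.max_batch_size; the helper names (HostDeviceMem, allocate_buffers, do_inference) come from those samples, not from PeopleNet-specific code:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class HostDeviceMem:
    """Pairs a pagelocked host buffer with its device buffer."""
    def __init__(self, host_mem, device_mem):
        self.host = host_mem
        self.device = device_mem


def allocate_buffers(engine):
    """Size every binding for engine.max_batch_size, not for a single image."""
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream


def do_inference(context, bindings, inputs, outputs, stream, batch_size):
    """Copy host -> device, run the engine on batch_size images, copy device -> host."""
    for inp in inputs:
        cuda.memcpy_htod_async(inp.device, inp.host, stream)
    context.execute_async(batch_size=batch_size,
                          bindings=bindings,
                          stream_handle=stream.handle)
    for out in outputs:
        cuda.memcpy_dtoh_async(out.host, out.device, stream)
    stream.synchronize()
    return [out.host for out in outputs]


# usage sketch -- img_list is a list of preprocessed float32 CHW arrays:
# batch = np.ascontiguousarray(np.stack(img_list, axis=0))   # (N, 3, H, W)
# inputs[0].host[:batch.size] = batch.ravel()
# outs = do_inference(context, bindings, inputs, outputs, stream,
#                     batch_size=len(img_list))
```

My understanding is that the output host buffers are also sized for max_batch_size, so only the first batch_size slices of each output would be valid, but I am not sure this is the intended way to do it.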
Thanks a lot @NVES, I have seen that document before.
My question is about how to feed multiple input images to the TensorRT engine and run inference on a batch of images (>1) in Python.
I don't understand how to merge the images into a batch as the input to the model.
Is it correct to concatenate, for example, 2 images (batch_size=2) like the sketch below?
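Something along these lines (a rough sketch; the 544x960 input resolution is an assumption for PeopleNet, and inputs[0].host is assumed to be a pagelocked buffer allocated for max_batch_size images):

```python
import numpy as np

# dummy preprocessed images; in practice these come from the preprocessing step
H, W = 544, 960
img1 = np.random.rand(3, H, W).astype(np.float32)
img2 = np.random.rand(3, H, W).astype(np.float32)

# stack the two images into one contiguous batch of shape (2, 3, H, W)
batch = np.ascontiguousarray(np.stack([img1, img2], axis=0))

# fill only the first batch.size elements of the host buffer,
# since the buffer is sized for max_batch_size images
inputs[0].host[:batch.size] = batch.ravel()
```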