I use the TLT container version 2, which supports TensorRT and PyCUDA:
TensorRT version: 7.0.0-1+cuda10.0
GPU type: 1080 Ti
I use resnet34_peoplenet_int8.trt, which I obtained by running tlt-converter on the PeopleNet model.
When I run inference on a single image, this part of my Python code works perfectly:
However, when I run it on a list of multiple images (batch_size > 1), it gives me an error.
I want to know how to feed multiple images into inputs.host during inference. Which steps should be taken?
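For reference, this is roughly the host-side batching I think is needed, sketched with NumPy only (names like `inputs[0].host` follow the common TensorRT Python sample helpers; the input shape and max_batch_size below are assumptions, not values from my engine):

```python
import numpy as np

# Assumed PeopleNet input shape (CHW) and the max batch size the
# engine was built with -- both are placeholders for illustration.
C, H, W = 3, 544, 960
max_batch_size = 8

# Stand-ins for my preprocessed images, each already CHW float32:
images = [np.random.rand(C, H, W).astype(np.float32) for _ in range(4)]

# Stack into (N, C, H, W) and flatten into a single contiguous 1-D
# array, since the host buffer is a flat float32 array sized for
# max_batch_size:
batch = np.stack(images, axis=0)
flat = np.ascontiguousarray(batch).ravel()

# This stands in for inputs[0].host (normally a pagelocked buffer);
# only the first len(images) * C * H * W entries are used:
host_buf = np.empty(max_batch_size * C * H * W, dtype=np.float32)
host_buf[: flat.size] = flat

# Inference would then be launched with the actual batch size, e.g.:
# context.execute_async(batch_size=len(images), bindings=bindings,
#                       stream_handle=stream.handle)
```

Is copying the stacked images into the flat host buffer like this the right approach, or does the single-image inference code need other changes for batch_size > 1?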
Any help will be appreciated.