Tensorrt_dynamicshape.py (18.1 KB)
Description
I converted our model with trtexec using a dynamic shape profile, and then tried to run inference (please see the attachment for my code).
The problem occurs when batch_size > 1. I get the error below:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "infer_insight_face.py", line 395, in run
batch_image_raw, use_time = self.trt_wrapper.infer(self.trt_wrapper.get_raw_image(self.image_path_batch))
File "infer_insight_face.py", line 158, in infer
np.copyto(host_inputs[0], batch_input_image.ravel())
File "<array_function internals>", line 5, in copyto
ValueError: could not broadcast input array from shape (75264,) into shape (37632,)
I understand this happens because the input data has shape batch_size * input_shape, while the host buffer was allocated for a single input shape.
How can I fix this?
Could you please help?
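One common fix with dynamic shapes is to allocate the host (and device) buffers once for the profile's maximum batch size, set the actual input shape on the execution context before each run, and copy the batch into only the leading portion of the buffer. Below is a minimal NumPy-only sketch of the sizing logic; the TensorRT/pycuda calls are indicated in comments. The per-image shape (3, 112, 112) is an assumption inferred from the traceback numbers (3*112*112 = 37632, and 75264 = 2 * 37632), and `MAX_BATCH` is a hypothetical value that should match your optimization profile's max shape:

```python
import numpy as np

# Assumed from the traceback: one image is 3*112*112 = 37632 floats,
# and the failing call used batch_size = 2 (75264 = 2 * 37632).
INPUT_SHAPE = (3, 112, 112)   # per-image shape (assumption)
MAX_BATCH = 8                 # max batch in the optimization profile (assumption)

vol = int(np.prod(INPUT_SHAPE))

# Allocate the host buffer once, for the *maximum* profile shape,
# not for a single image. With pycuda this would be page-locked memory:
#   host_input = cuda.pagelocked_empty(MAX_BATCH * vol, np.float32)
host_input = np.empty(MAX_BATCH * vol, dtype=np.float32)

def stage_batch(batch_input_image: np.ndarray) -> int:
    """Copy an (N, 3, 112, 112) batch into the front of the buffer."""
    n = batch_input_image.shape[0]
    assert n <= MAX_BATCH, "batch exceeds the profile's max shape"
    # Before enqueueing, tell TensorRT the actual input shape, e.g.:
    #   context.set_binding_shape(0, (n, *INPUT_SHAPE))
    np.copyto(host_input[: n * vol], batch_input_image.ravel())
    return n

batch = np.ones((2, *INPUT_SHAPE), dtype=np.float32)
n = stage_batch(batch)  # no broadcast error: buffer holds MAX_BATCH images
```

At transfer time you would also copy only `n * vol` elements to the device and read back only the corresponding slice of the output buffer, since the rest of the allocation is unused padding.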
Environment
TensorRT Version: 8.2.0.6
GPU Type: TU102 [GeForce RTX 2080 Ti]
Nvidia Driver Version:
CUDA Version: 11.4.2
CUDNN Version:
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.1+cu102
Baremetal or Container (if container which image + tag):
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered