TAO Deploy YOLOv4 Tiny model giving error

Please provide the following information when requesting support.

• Hardware (RTX2070)
• Network Type (YOLOv4 Tiny)
• TLT Version (5.0)

I am trying to run inference on the LPDNet yolov4_tiny_ccpd_deployable model using the TAO Deploy container.
When the https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/yolo_v4/scripts/inference.py script is invoked, it gives the following error:

[11/11/2023-20:27:40] [TRT] [E] 3: [executionContext.cpp::validateInputBindings::1838] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::validateInputBindings::1838, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [2,3,480,640] for bindings[0] exceed min ~ max range at index 0, maximum dimension in profile is 1, minimum dimension in profile is 1, but supplied dimension is 2.
Producing predictions:   0%|                              | 0/3 [00:00<?, ?it/s]
2023-11-11 20:27:40,638 [INFO] root: could not broadcast input array from shape (1843200,) into shape (921600,)
Traceback (most recent call last):
  File "inference.py", line 214, in <module>
  File "/workspace/tao-deploy/nvidia_tao_deploy/cv/common/decorators.py", line 63, in _func
    raise e
  File "/workspace/tao-deploy/nvidia_tao_deploy/cv/common/decorators.py", line 47, in _func
    runner(cfg, **kwargs)
  File "inference.py", line 102, in main
    y_pred = trt_infer.infer(imgs)
  File "/workspace/tao-deploy/nvidia_tao_deploy/cv/yolo_v3/inferencer.py", line 110, in infer
    np.copyto(self.inputs[0].host, self.numpy_array.ravel())
  File "<__array_function__ internals>", line 180, in copyto
ValueError: could not broadcast input array from shape (1843200,) into shape (921600,)
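The numbers in the traceback line up with a batch-size mismatch: the host buffer was allocated for the engine's profile of batch size 1 (1 × 3 × 480 × 640 = 921600 values), while a batch of 2 images (2 × 3 × 480 × 640 = 1843200 values) was supplied. A minimal NumPy sketch (not the TAO code itself) reproducing the failing copy:

```python
import numpy as np

# Host buffer sized for the engine profile: batch 1, 3 x 480 x 640 input.
host_buffer = np.zeros(1 * 3 * 480 * 640, dtype=np.float32)  # 921600 values

# A batch of 2 preprocessed images produces twice that many values.
batch = np.zeros((2, 3, 480, 640), dtype=np.float32)  # 1843200 values

try:
    # Same call as in the traceback: np.copyto cannot broadcast a larger
    # flattened array into the smaller preallocated buffer.
    np.copyto(host_buffer, batch.ravel())
except ValueError as e:
    print(e)  # could not broadcast input array from shape (1843200,) into shape (921600,)
```

So the ValueError from inferencer.py is a direct consequence of the TRT profile error above it: the engine only accepts batch dimension 1, but two images were fed in.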

I suppose the inferencer is for the YOLOv4 model, not for the tiny model, which is why it's giving the error. Is there a sample inference.py for the tiny model too? I see the TAO Deploy container also has the task

    docker_registry: nvcr.io
    25. yolo_v4_tiny

which can be called directly from the TAO Deploy CLI. But I need to customize the inferencer in Python. How can I do it? Should I customize the yolov4.config.proto? Please help.

Can you share the spec file?

BTW, for YOLOv4 Tiny, you can use inference.py from the YOLOv4 network.

The error was due to the batch size; when I set it to 1, it runs fine. The YOLOv4 Tiny model also runs with the YOLOv4 inference.py script.