TAO yolov4 batch inference

Hi, I’m using TAO’s YOLOv4 to train on my own dataset.
I got good results following the YOLOv4 example from tao/cv_samples.

I can run batch inference (batch size = 3) with the TAO CLI:

!tao yolo_v4 inference -m $USER_EXPERIMENT_DIR/export/trt3.engine \
                       -e $SPECS_DIR/yolo_v4_retrain_resnet18_kitti_seq.txt \
                       -i $DATA_DOWNLOAD_DIR/test_samples \
                       -o $USER_EXPERIMENT_DIR/yolo_infer_images \
                       -t 0.6

After that, I want to write a Python script to run inference, following
Doing inference in python with YOLO V4 in TensorRT - postprocessing
I can run inference with batch size = 1. However, when batch size > 1 the outputs from the model are all zeros, and it shows this error:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueue::445, condition: batchSize > 0 && batchSize <= mEngine.getMaxBatchSize(). Note: Batch size was: 3, but engine max batch size was: 1
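
For reference, the call that fails follows the implicit-batch pattern from that topic, roughly like this (a simplified sketch; buffer allocation and host/device copies are omitted, and bindings/stream are set up as in the linked post):

# Simplified sketch of my batched call (not the full script).
# `engine` is the deserialized trt3.engine, `bindings` holds the
# device buffer pointers, and `stream` is a pycuda CUDA stream.
context = engine.create_execution_context()
context.execute_async(batch_size=3,  # batch_size=1 works, 3 fails
                      bindings=bindings,
                      stream_handle=stream.handle)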

How can I fix this?

Please modify the “-p” option when you generate the TensorRT engine via tao-converter. “-p” sets the optimization profile (the min/opt/max input shapes, including the batch dimension); your current engine only supports a max batch size of 1, so it rejects a batch of 3. Rebuild the engine with a profile whose max batch dimension is at least the batch size you want to run.
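
For example, the KITTI sample engine can be regenerated with a profile that covers batch size 3. This is a sketch: the input name Input and the 3x384x1248 shape come from the cv_samples notebook, and the .etlt filename is a placeholder, so substitute your own key, paths, and resolution:

tao-converter -k $KEY \
              -p Input,1x3x384x1248,3x3x384x1248,3x3x384x1248 \
              -t fp16 \
              -e $USER_EXPERIMENT_DIR/export/trt3.engine \
              $USER_EXPERIMENT_DIR/export/yolov4_resnet18.etlt

The three shapes after the input name are the min, opt, and max shapes of the optimization profile; the first dimension is the batch size, so the max shape must cover the largest batch you plan to run. An engine built this way uses explicit batch, so the Python script should set the actual input shape and call execute_async_v2 (which takes no batch_size argument) instead of the implicit-batch execute_async:

context.set_binding_shape(0, (3, 3, 384, 1248))  # shape of this run's batch
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)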
