Description
Hi all,
I have been working in Python with YoloV3 and YoloV4 on the Jetson Nano and Jetson Xavier NX for a while, always with a batch size of 1, and I never had an issue there.
Now, for a project, I am trying to run YoloV4 (yolov4-288) on multiple inputs at once, i.e. with a batch size of 2.
I was able to convert YoloV4 to an ONNX file with a static input dimension of [64, 3, 288, 288], and then to a TensorRT engine with an input size of [2, 3, 288, 288].
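For context, the conversion is along the lines of the sketch below (a simplified sketch, not my exact script; the real conversion scripts are on the Drive, and details like the workspace size are just illustrative values):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
# TensorRT 7 requires an explicit-batch network to parse ONNX models
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path, batch_size=2):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(EXPLICIT_BATCH) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 28  # 256 MiB, illustrative
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        # Override the static batch dimension of the ONNX input (64 -> 2)
        network.get_input(0).shape = (batch_size, 3, 288, 288)
        return builder.build_engine(network, config)

engine = build_engine('yolov4.onnx')
with open('yolov4.trt', 'wb') as f:
    f.write(engine.serialize())
```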
When I run inference with the frames of two videos as inputs, the engine output is correct for the first frame only, while it is all zeros for the second frame.
I would like to know what I can do to make inference work for the second frame as well.
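For reference, the inference side follows the standard pycuda pattern, roughly as sketched below. The buffers are sized for the full batch of 2, and both frames are copied into the input buffer before a single execute_async_v2 call (a simplified sketch; the names are placeholders and not necessarily those used in test_batches.py):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

with open('yolov4.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# For an explicit-batch engine, get_binding_shape already includes the
# batch dimension, so each buffer covers the whole batch of 2.
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    h = cuda.pagelocked_empty(size, dtype)
    d = cuda.mem_alloc(h.nbytes)
    host_bufs.append(h)
    dev_bufs.append(d)
    bindings.append(int(d))

def infer(batch):
    """batch: float32 array of shape (2, 3, 288, 288), both frames."""
    np.copyto(host_bufs[0], batch.ravel())  # binding 0 is the input here
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for h, d in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(h, d, stream)
    stream.synchronize()
    return [h.copy() for h in host_bufs[1:]]
```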
Environment
TensorRT Version: 7.1.3
GPU Type: Jetson Xavier NX
CUDA Version: 10.2
JetPack Version: 4.4.1
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15
Baremetal or Container (if container which image + tag): Baremetal
Relevant Files
All files, including yolov4.onnx, yolov4.trt, the conversion scripts, and a script test_batches.py with the video files to test on, are on my Google Drive: YoloV4_XavierNX - Google Drive
Steps To Reproduce
The script test_batches.py shows the number and indexes of the detections from the outputs of the TRT engine; here the engine is given the frames of two videos. It will raise an error after the first batch is processed (if you comment out line 285, it will go on to process the following frames).
The detection outputs only concern the first frame: all the detection arrays for the second frame are equal to zero. And if you replace the second frame with nothing or with a copy of the first frame, the results are the same.
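To make the zeros concrete, this is essentially how the per-frame outputs can be checked (a sketch building on the infer() function above; `batch` is one 2-frame input batch):

```python
import numpy as np

outputs = infer(batch)  # one flat array per output binding
for i, out in enumerate(outputs):
    per_frame = out.reshape(2, -1)  # row 0: first frame, row 1: second frame
    for b in range(2):
        print('output %d, frame %d: %d non-zero values'
              % (i, b, np.count_nonzero(per_frame[b])))
# The second row (frame 1) comes back with 0 non-zero values every time.
```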