PGIE component latency

That is not true. The nvinfer batch size only impacts the TensorRT inferencing time.
The nvinfer component is a GStreamer plugin; the pre-processing, inferencing, and postprocessing are all done inside nvinfer, asynchronously in different threads (see the DeepStream SDK FAQ topic in the Intelligent Video Analytics / DeepStream SDK category on the NVIDIA Developer Forums).
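Because the stages run in separate threads, the most reliable way to see where the time goes is DeepStream's built-in latency measurement rather than the TensorRT engine time alone. A minimal sketch, assuming a deepstream-app-style application and the latency-measurement environment variables described in the FAQ (verify them against your SDK version):

```shell
# Enable buffer-level latency logging.
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
# Additionally break the latency down per component (nvstreammux, nvinfer, ...).
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

# Run the reference app with your own pipeline config; per-component
# latency is then printed to the console for each batch.
deepstream-app -c <your_pipeline_config>.txt
```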
For YOLOv7 and YOLOv8, the postprocessing is relatively complicated, and if it is done on the CPU it can take much longer than the TensorRT inferencing itself. So the nvinfer latency is mainly determined by the postprocessing, not by the TensorRT inferencing, and again, the nvinfer batch size only impacts the TensorRT inferencing.
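For YOLO-style models, that CPU postprocessing is typically hooked into nvinfer as a custom bounding-box parser in the nvinfer config file. A minimal sketch of the relevant keys, where the parser function and library names are hypothetical placeholders, not something shipped with DeepStream:

```ini
[property]
# TensorRT engine batch size: the only stage this value affects.
batch-size=1
# Custom CPU postprocessing hook; for YOLOv7/YOLOv8 this parser, not the
# engine, is often what dominates the nvinfer latency.
# (Hypothetical names, for illustration only.)
parse-bbox-func-name=NvDsInferParseCustomYolo
custom-lib-path=/opt/custom/libnvdsinfer_custom_impl_yolo.so
```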

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_FAQ.html#what-is-the-difference-between-batch-size-of-nvstreammux-and-nvinfer-what-are-the-recommended-values-for-nvstreammux-batch-size

It depends on both the nvinfer batch size and the nvstreammux batch size. E.g., if you batch the frames with an nvstreammux batch size of 4 while inferencing with an nvinfer batch size of 1, the frames in the batch will be inferenced one by one, as in the pipeline sketch below.
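A minimal gst-launch sketch of that mismatch, assuming four local video files and a hypothetical pgie_config.txt (all paths and file names are placeholders):

```shell
# nvstreammux assembles batches of 4, but nvinfer runs the engine with
# batch-size=1 (the element property overrides the config-file value),
# so each batched buffer is inferenced frame by frame.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=4 width=1920 height=1080 batched-push-timeout=40000 ! \
  nvinfer config-file-path=pgie_config.txt batch-size=1 ! \
  fakesink \
  uridecodebin uri=file:///path/to/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/to/video1.mp4 ! mux.sink_1 \
  uridecodebin uri=file:///path/to/video2.mp4 ! mux.sink_2 \
  uridecodebin uri=file:///path/to/video3.mp4 ! mux.sink_3
```

Matching the nvinfer batch-size to the nvstreammux batch size (4 here) lets the whole batch go through the engine in a single TensorRT execution instead.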