Why does deepstream-app convert images from I420 to NV12 before feeding them to NvInfer?

This is a pipeline graph that I exported while running the deepstream-app sample (sorry, I can't upload the full pipeline because it is too large). The model used in nvinfer is YOLOv7 or YOLOv4, trained on sRGB images. I see that the NV12 image format is fed to the model. Does this affect accuracy (mAP)?

Thank you very much.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.6
• Issue Type (questions, new requirements, bugs) questions

Please refer to the doc. nvinfer can accept batched NV12/RGBA buffers from upstream. Note that the I420-to-NV12 conversion is lossless: both are 8-bit 4:2:0 YUV formats, and only the chroma plane layout differs (planar vs. semi-planar). Before inference, nvinfer converts the batched surfaces to the color format declared by model-color-format in its config file, so the NV12 intermediate format by itself should not degrade mAP.
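
For reference, here is a minimal sketch of the relevant gst-nvinfer config keys for an RGB-trained YOLO model. The file names are placeholders, and the scale factor is the typical value from the DeepStream sample configs, not something taken from your setup:

```ini
[property]
# nvinfer converts the incoming NV12/RGBA surfaces to this color
# format on the GPU before running the network.
# 0 = RGB, 1 = BGR, 2 = GRAY
model-color-format=0
# Typical YOLO normalization: scale pixels from [0,255] to [0,1] (1/255).
net-scale-factor=0.0039215697906911373
# Placeholder paths; substitute your actual engine and label files.
model-engine-file=yolov7.engine
labelfile-path=labels.txt
```

With model-color-format=0, the YUV-to-RGB conversion happens inside nvinfer's preprocessing, so NV12 is only the transport format between upstream elements and the plugin.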

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.