I am currently deciding whether to build my low-latency CV pipeline directly in VPI or via self-made DeepStream plugins. Reading the VPI documentation, it is not entirely clear to me what the optimal frame acquisition and handling from two MIPI-CSI V4L2 cameras should be. Opening them via OpenCV incurs a CPU transfer and a colour-space conversion, but as far as I can see I could pipe the NVMM images from the cameras directly into the VPI pipeline. Are there best practices regarding this?
You can use DeepStream to read the camera frames and feed them into VPI.
Please find the topic below for the sample code and discussion:
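For reference, a minimal GStreamer pipeline sketch (DeepStream is built on GStreamer) that keeps frames in NVMM memory end to end, avoiding the CPU copy the OpenCV path incurs. The sensor-id, resolution, framerate, and formats here are assumptions for illustration; adjust them for your cameras:

```
# CSI capture via Argus, staying in NVMM (device) memory throughout;
# fakesink stands in for your downstream DeepStream/VPI element.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
  fakesink
```

From a custom element or probe you can then wrap the NVMM surface as a VPI image (VPI provides wrapper creation for existing device buffers) instead of copying frame data to the CPU.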