How VIC performance is affected by resolution

Hi Nvidia community,

We have noticed a big difference in VIC performance depending on the input resolution (4 MP seems worse than 5 MP) and on the downscale factor we apply (for example, 2x is much better than 1.4x).

We use the VIC heavily with GStreamer through nvvidconv, and we need to understand its performance better so we don't go out of spec, because once that happens the video becomes choppy.
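For context, the properties we refer to below (including interpolation-method) are the ones exposed by the plugin itself; they can be listed with the standard GStreamer tool:

```
# List nvvidconv's pad templates, supported caps, and properties
# (including interpolation-method and the accepted width/height ranges).
gst-inspect-1.0 nvvidconv
```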

Can we get some documentation on the VIC and how it works, so that we can understand how best to use it for downscaling?

For example, assuming we have a 4 MP camera at 2688 x 1520, which output values give the best downscale performance, and how can we calculate them? Also, what impact does the interpolation-method property have on performance?
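For reference, a minimal version of the kind of pipeline we are asking about, a 2x downscale of the 2688 x 1520 stream, looks roughly like this. The camera source and sink here are just stand-ins for our real pipeline, and the interpolation-method value is only an example (gst-inspect-1.0 nvvidconv lists the valid values on a given release):

```
# Example: 2x downscale of a 2688x1520 stream through nvvidconv (VIC).
# nvarguscamerasrc is a stand-in for our actual camera source;
# fpsdisplaysink/fakesink just report the delivered framerate so we can
# see when the conversion stops keeping up and the video turns choppy.
gst-launch-1.0 -v nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=2688, height=1520, framerate=30/1' ! \
  nvvidconv interpolation-method=1 ! \
  'video/x-raw(memory:NVMM), width=1344, height=760' ! \
  fpsdisplaysink text-overlay=false video-sink=fakesink sync=false
```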

Best Regards.

Hi,
Please run the script to set the VIC to its maximum clock and check again:

VPI - Vision Programming Interface: Performance Benchmark

Some of the performance drop may be due to dynamic frequency scaling not pulling the clock up to a higher frequency. Please run at the maximum clock at all times. If the issue persists, please share the steps to reproduce it and we will set up a developer kit to check.
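If the linked page is not handy, the general idea is to pin the VIC devfreq node to its highest available frequency. Below is only a minimal sketch: the device path is taken from one particular Jetson module and differs across modules and L4T releases, so check the actual path on your board first.

```
#!/bin/bash
# Minimal sketch: pin the VIC to its highest devfreq frequency (run as root).
# NOTE: the device path below is an example from one Jetson module/release;
# locate yours with:  ls -d /sys/devices/platform/*host1x*/*vic*
VIC=/sys/devices/platform/13e10000.host1x/15340000.vic
DEVFREQ=${VIC}/devfreq/15340000.vic

echo on > ${VIC}/power/control              # keep the engine powered

# available_frequencies is listed in ascending order; take the last entry.
MAX=$(awk '{print $NF}' ${DEVFREQ}/available_frequencies)

echo userspace > ${DEVFREQ}/governor        # allow manual frequency control
echo ${MAX}    > ${DEVFREQ}/max_freq
echo ${MAX}    > ${DEVFREQ}/min_freq
echo ${MAX}    > ${DEVFREQ}/userspace/set_freq
```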
