Hi, I am interested in the ability to customize the motion estimation engine.
From the docs: "The hardware also provides capability to use external motion estimation engine and custom quantization parameter maps (for ROI “region of interest” encoding). These features, however, are currently not exposed in the software APIs and will be available in future releases of the SDK."
This intrigues me, because it suggests we could integrate our rendering engine with the encoder's motion estimation.
The following paper describes how motion vectors can be generated directly from data in the rendering pipeline.
"In experiments we demonstrated an overall acceleration of approximately 25%."
Most high-end rendering engines already build a complete motion vector buffer during rendering in order to generate motion blur. That buffer could be fed directly into motion estimation, potentially yielding a performance benefit even greater than the one described in the paper.
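To make the idea concrete, here is a minimal sketch of how a renderer's per-pixel motion vector buffer might be collapsed into per-macroblock motion hints for an external motion estimation interface. The function name, the 16x16 block size, and the quarter-pel integer hint format are assumptions for illustration, not the SDK's actual API:

```python
MB_SIZE = 16  # assumed macroblock size in pixels

def mv_buffer_to_hints(mv_buffer, width, height):
    """Average per-pixel motion vectors over each 16x16 macroblock.

    mv_buffer: row-major list of (dx, dy) tuples in pixels, length width*height.
    Returns one (mvx, mvy) integer hint per macroblock, in raster order,
    expressed in assumed quarter-pel units.
    """
    hints = []
    for mb_y in range(0, height, MB_SIZE):
        for mb_x in range(0, width, MB_SIZE):
            sum_dx = sum_dy = 0.0
            count = 0
            for y in range(mb_y, min(mb_y + MB_SIZE, height)):
                for x in range(mb_x, min(mb_x + MB_SIZE, width)):
                    dx, dy = mv_buffer[y * width + x]
                    sum_dx += dx
                    sum_dy += dy
                    count += 1
            # Convert the block average to quarter-pel integer units.
            hints.append((round(4 * sum_dx / count), round(4 * sum_dy / count)))
    return hints

# Example: a 32x16 region moving uniformly 2.5 pixels to the right
# produces two macroblocks, each hinting (10, 0) in quarter-pel units.
frame = [(2.5, 0.0)] * (32 * 16)
print(mv_buffer_to_hints(frame, 32, 16))  # → [(10, 0), (10, 0)]
```

A real integration would of course use whatever hint structure the SDK exposes, and might prefer a median or dominant vector over a plain average to avoid smearing at object boundaries.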
Was this one of the use cases you had in mind, and is this feasible in a future release of the SDK?