Is there an easy way to get deterministic results when inferring with different batch sizes?


Other configuration options that can result in a different kernel selection are different input sizes (for example, batch size)

Running inference with a dynamic batch size may produce different results for the same inputs, which makes it difficult to debug our deployment system. Is there an easy way to achieve this determinism for any model?


As mentioned in the documentation, please try the IAlgorithmSelector interface to build a deterministic engine.
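For reference, a minimal sketch of how such a selector could look with the TensorRT Python bindings. The selection rule here (always pick the candidate with the smallest implementation/tactic id) is just one illustrative way to make the choice reproducible; any stable, timing-independent key would do. The commented builder wiring at the bottom is an assumed setup, not taken from this thread:

```python
import tensorrt as trt

class DeterministicSelector(trt.IAlgorithmSelector):
    """Picks tactics by a stable key instead of measured timing, so
    repeated engine builds select identical kernels for each layer."""

    def __init__(self):
        trt.IAlgorithmSelector.__init__(self)

    def select_algorithms(self, context, choices):
        # Choose the candidate with the smallest (implementation, tactic)
        # pair -- a deterministic key, unlike timing measurements.
        best = min(
            range(len(choices)),
            key=lambda i: (choices[i].algorithm_variant.implementation,
                           choices[i].algorithm_variant.tactic),
        )
        return [best]

    def report_algorithms(self, contexts, choices):
        # Optionally log the chosen tactics here for later comparison.
        pass

# Assumed builder setup -- attach the selector to the builder config:
# builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
# config = builder.create_builder_config()
# config.algorithm_selector = DeterministicSelector()
```

Note that pinning tactics this way trades some performance for reproducibility, since the fastest kernel is no longer guaranteed to be chosen.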

Thank you.