Two machines with very similar software stacks but different GPUs produce different folded models when running the Polygraphy tool on the same ONNX model input.

It looks like this is the issue:

> This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
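For reference, a minimal sketch of what the warning is asking for: pass the providers list explicitly when constructing the session. The provider names come straight from the warning text; the helper function and the "model.onnx" path are placeholders I've added for illustration, not part of the original report.

```python
def pick_providers(available):
    """Return the preferred execution providers, in priority order,
    that are actually present in this ORT build (placeholder helper)."""
    preferred = [
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ]
    return [p for p in preferred if p in available]

# Usage (requires onnxruntime installed and a real model file):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",  # placeholder path
#     providers=pick_providers(ort.get_available_providers()),
# )
```

Filtering against `get_available_providers()` keeps the same code working on builds where TensorRT or CUDA support is absent, falling back to the CPU provider.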

Can you try either upgrading ONNX-GraphSurgeon to the latest version or downgrading ONNX-Runtime to a version earlier than 1.9.0 on Machine #2?