Why can't I use BFloat16 types with my ONNX model?

Hi,

I am trying to create a new ONNX model directly in the Deep Learning Designer (2025.4) and it complains with cryptic messages when I change the input type to BFloat16.

When using Float everything seems to work. The same happens if I open an existing ONNX model with BFloat16 types inside. The documentation does not prohibit the use of these types, and the tool itself offers BFloat16 in the type selection list.

If I profile the model ignoring the errors, the tool stops with another unclear message:

Could not find an implementation for Pow(15) node with name ‘Pow_0’

The issue is due to the actual kernel support in ONNXRuntime. The definition of the ONNX Pow operator (Pow - ONNX 1.21.0 documentation) does allow BFloat16. However, ONNXRuntime does not implement every operator for every data type the specification permits.

DL Designer 2025.4 is based on ONNXRuntime 1.21.0. You can find the supported data types for each ONNXRuntime kernel here: onnxruntime/docs/OperatorKernels.md at rel-1.21.0 · microsoft/onnxruntime · GitHub. The Pow operator, for example, does not support BFloat16, hence the “Could not find an implementation for Pow(15) node with name ‘Pow_0’” error when profiling.
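
You can reproduce this outside of DL Designer with a minimal sketch along the following lines (assuming the onnx and onnxruntime Python packages are installed). It builds a one-node BFloat16 Pow model that passes the ONNX spec checker, yet fails session creation because no kernel is registered:

```python
import onnx
from onnx import TensorProto, helper
import onnxruntime as ort

# One-node graph: Z = Pow(X, Y), all tensors declared as BFloat16.
node = helper.make_node("Pow", ["X", "Y"], ["Z"], name="Pow_0")
graph = helper.make_graph(
    [node],
    "bf16_pow",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.BFLOAT16, [1]),
        helper.make_tensor_value_info("Y", TensorProto.BFLOAT16, [1]),
    ],
    outputs=[helper.make_tensor_value_info("Z", TensorProto.BFLOAT16, [1])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 15)])
model.ir_version = 10  # stay within the IR range ONNXRuntime 1.21 accepts

# The spec-level check passes: ONNX itself allows BFloat16 for Pow.
onnx.checker.check_model(model)

# Session creation fails because ONNXRuntime has no BFloat16 Pow kernel:
# "NOT_IMPLEMENTED : Could not find an implementation for Pow(15) node
#  with name 'Pow_0'"
try:
    ort.InferenceSession(model.SerializeToString(),
                         providers=["CPUExecutionProvider"])
except Exception as e:
    print(e)
```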
You can try the TensorRT profiler instead, which should support BFloat16 for the Pow operator.
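
For reference, the equivalent outside of DL Designer would be to put the TensorRT execution provider first in the provider list. This is only a sketch: it assumes a TensorRT-enabled ONNXRuntime build (e.g. onnxruntime-gpu compiled with TensorRT support), and "model.onnx" is a placeholder path:

```python
import onnxruntime as ort

# Providers are tried in order; TensorRT gets first chance at each node,
# with CUDA and CPU as fallbacks for anything it cannot take.
sess = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(sess.get_providers())  # shows which providers were actually enabled
```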

As for the type-checking errors: DL Designer uses ONNXRuntime to perform model validation, and because ONNXRuntime has no BFloat16 implementation of Pow, validation reports errors for that node.
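
If you want to find every affected node before profiling, you can scan the model for BFloat16 tensors and compare the operators that consume them against OperatorKernels.md. A rough sketch, assuming the onnx Python package and a placeholder model path "model.onnx":

```python
import onnx
from onnx import TensorProto
from onnx.shape_inference import infer_shapes

# Infer shapes so intermediate tensors carry type information too.
model = infer_shapes(onnx.load("model.onnx"))

# Map tensor name -> element type for graph inputs, outputs and intermediates.
dtypes = {
    vi.name: vi.type.tensor_type.elem_type
    for vi in list(model.graph.input)
             + list(model.graph.output)
             + list(model.graph.value_info)
}

# Report every node that consumes a BFloat16 tensor; check these op types
# against OperatorKernels.md for the ONNXRuntime release in use.
for node in model.graph.node:
    if any(dtypes.get(name) == TensorProto.BFLOAT16 for name in node.input):
        print(f"{node.op_type} (name: {node.name}) takes a BFloat16 input")
```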
