UFF generation - different checksum every time

Hi,

I have a script that generates a .uff file from a TensorFlow .pb file, using the “uff.from_tensorflow_frozen_model” function.

I have noticed that every time I generate the .uff file, it has a different checksum. Why is that? I would expect the conversion to be deterministic and reproducible.
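For reference, my script does roughly the following (file names and the output node name here are just placeholders), and the two resulting files hash differently on every run:

import hashlib
import uff

# Convert the same frozen graph twice ("output_node" and the
# file names are placeholders for my actual model).
uff.from_tensorflow_frozen_model("model.pb",
                                 output_nodes=["output_node"],
                                 output_filename="model_a.uff")
uff.from_tensorflow_frozen_model("model.pb",
                                 output_nodes=["output_node"],
                                 output_filename="model_b.uff")

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# These two digests differ, even though the input .pb is identical.
print(sha256("model_a.uff"))
print(sha256("model_b.uff"))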

Thanks!

Hi,

That’s an interesting find. Upon closer inspection, it seems that the binary .uff stores the parameters of some layers in an unsorted order, e.g. conv2d(strides=x, dtype=y, …) vs. conv2d(dtype=y, strides=x, …), so the checksums differ even though the files are logically equivalent.

If you output the human-readable text (.pbtxt) version of the UFF file, however, the parameters appear to be sorted, so the output is always the same. As a workaround, if your workflow depends on verifying the model's checksum, you can validate the checksum of the human-readable version instead:

# This will output both model.uff (binary) and model.pbtxt (human-readable)
convert-to-uff model.pb --text -o model
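
Then hash the text file instead of the binary, e.g. (a minimal sketch, the file name is just an example):

import hashlib

# Hash the human-readable .pbtxt instead of the binary .uff;
# its contents are sorted, so the digest is stable across conversions.
with open("model.pbtxt", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())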

That works, thanks for the quick answer!

I assume TensorRT’s C++ UFF parser will still want the binary version, not the human-readable one, right?

I believe the binary .uff is expected; the .pbtxt is just for the user's convenience and readability. You should be able to quickly verify that in your sample code :)
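
For example, with the TensorRT Python API (which mirrors the C++ nvuffparser interface), the parser is given the binary .uff, not the .pbtxt. A minimal sketch, where the input/output names and the input shape are placeholders for your model:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# The parser consumes the binary model.uff; the .pbtxt is not used here.
# "input"/"output" and the (3, 224, 224) shape are placeholders.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))
    parser.register_output("output")
    parser.parse("model.uff", network)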