ONNX conversion slow because of protobuf


When converting a TF model to ONNX I got a warning:
WARNING - IMPORTANT Installed protobuf is not cpp accelerated. Conversion will be extremely slow. See tf2onnx is super slow using Python 3.8 and 3.9 on Windows · Issue #1557 · onnx/tensorflow-onnx · GitHub

I use this command:
python3 -m tf2onnx.convert --saved-model "TF_model_saved" --output "dest_model.onnx"

My Python 3 version is 3.6.9.

I tried to uninstall / reinstall protobuf, and even with the latest version I still get this warning.

And indeed, the conversion is very slow…

Is there a way to get a cpp-accelerated protobuf on the Jetson Nano?
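For anyone hitting the same warning: you can check which protobuf backend Python is actually using before starting a long conversion. The tf2onnx warning fires when the implementation reports "python" instead of "cpp". A minimal check (sketch only; it also handles the case where protobuf isn't installed at all):

```python
def protobuf_implementation():
    """Return the active protobuf backend: "cpp", "upb", "python",
    or None if the protobuf package is not installed."""
    try:
        from google.protobuf.internal import api_implementation
        return api_implementation.Type()
    except ImportError:
        return None

# If this prints "python", conversions that parse large protobufs
# (like SavedModel graphs) will be slow.
print(protobuf_implementation())
```

If it reports "python", rebuilding protobuf with the C++ extension (as in the dockerfile linked below) is what switches it over.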

Thanks !

Hi @rdpdo2002, you can refer to TensorFlow/TensorRT (TF-TRT) Revisited or follow along with the steps from this dockerfile here: https://github.com/dusty-nv/jetson-containers/blob/da4f32521aea12f6e37e5b51b437b0ccb81fd8fb/Dockerfile.tensorflow#L66

OK thanks, I already tried TensorFlow/TensorRT (TF-TRT) Revisited, but after installation I got a "google.protobuf not found" error when importing TensorFlow…
I will try again with the other link you gave me…

It works now and I no longer get the warning message. However, the first link did not work (error importing the module), and the second link needs to be updated (the test compilation failed), so I removed the test compilation from the script.

OK gotcha - glad that you were able to get it working!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.