Thank you for your reply.
I am trying to convert a model that was developed in TensorFlow 2.0 with the Keras backend (tf.keras). Our model is in the SavedModel format and I am using tf2onnx to convert it to ONNX. When I try to do that I get the following segmentation fault:
`python3 -m tf2onnx.convert --saved-model . --opset 12 --output model.onnx --fold_const`
```
2020-06-30 14:00:40.443942: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444009: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-06-30 14:00:40.444227: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.2/lib64
2020-06-30 14:00:40.444265: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Segmentation fault (core dumped)
```
When I tried using TensorFlow 1.15 to do the conversion (which I probably shouldn't), I get the following error:
```
2020-06-29 16:09:21.083854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 570 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
Traceback (most recent call last):
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 169, in <module>
    main()
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tf2onnx/convert.py", line 142, in main
    tf.import_graph_def(graph_def, name='')
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/srv/demo/Demos/scripts/tensorrt_uff/tensorflow1.15env/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 505, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 1 of node StatefulPartitionedCall was passed float from fe_0_conv0/kernel:0 incompatible with expected resource.
```
So I am running out of options for converting the model to ONNX…
Thanks again
Svetlana