MobileNet ONNX conversion problem

I have a “mobilenet” model which I froze (pb). I tried to convert it to UFF, but the converter complains about “Unsupported operation _FusedBatchNormV3”, which sounds like it is not supported in UFF yet. So I am now trying pb → onnx. Unfortunately, I am not having any better luck with that:

~/convertToONNX$ python3 -m tf2onnx.convert  --input /home/xavier/convertToONNX/frozen_inference_graph.pb --inputs X:0 --outputs output:0 --output /home/xavier/convertToONNX/ssd-raccoonnet.onnx
2020-01-17 15:08:24.026031: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/verbose_logging.py:72: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-01-17 15:08:27,830 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/verbose_logging.py:72: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-01-17 15:08:27.835344: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-17 15:08:27.839512: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.839677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2020-01-17 15:08:27.839736: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-01-17 15:08:27.841502: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-01-17 15:08:27.843120: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-01-17 15:08:27.843710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-01-17 15:08:27.846533: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-01-17 15:08:27.848348: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-01-17 15:08:27.854380: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-17 15:08:27.854585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.854780: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.854956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-01-17 15:08:27.872648: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-01-17 15:08:27.873562: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2f66c8f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-17 15:08:27.873644: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-01-17 15:08:27.934881: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.935306: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2f7a07b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-01-17 15:08:27.935435: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-01-17 15:08:27.935901: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.936054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2020-01-17 15:08:27.936110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-01-17 15:08:27.936152: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-01-17 15:08:27.936190: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-01-17 15:08:27.936220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-01-17 15:08:27.936250: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-01-17 15:08:27.936279: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-01-17 15:08:27.936311: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-17 15:08:27.936427: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.936568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:27.936639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-01-17 15:08:27.936699: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-01-17 15:08:28.648073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-17 15:08:28.648189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2020-01-17 15:08:28.648230: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2020-01-17 15:08:28.648539: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:28.648741: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-01-17 15:08:28.648915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8514 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/convert.py", line 161, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/convert.py", line 116, in main
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/loader.py", line 64, in from_graphdef
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/loader.py", line 37, in freeze_session
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/graph_util_impl.py", line 277, in convert_variables_to_constants
    inference_graph = extract_sub_graph(input_graph_def, output_node_names)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/graph_util_impl.py", line 197, in extract_sub_graph
    _assert_nodes_are_present(name_to_node, dest_nodes)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/graph_util_impl.py", line 152, in _assert_nodes_are_present
    assert d in name_to_node, "%s is not in graph" % d
AssertionError: output is not in graph

Any ideas?

Using bazel to compile summarize_graph and then running summarize_graph on my frozen pb, I was able to find the correct names for the inputs and outputs: image_tensor and detection_boxes, respectively (a bazel-free way to list node names from Python is sketched after the traceback below). Running the converter with those names, I got past that part, but now it is saying:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/tfonnx.py", line 354, in tensorflow_onnx_mapping
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/onnx_opset/tensor.py", line 321, in version_1
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/graph_builder.py", line 40, in make_slice
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/graph_builder.py", line 106, in convert_to_attribute
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/graph.py", line 260, in get_tensor_value
ValueError: get tensor value: MultipleGridAnchorGenerator/Meshgrid_9/ExpandedShape_1/ExpandDims must be Const

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/convert.py", line 161, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/convert.py", line 145, in main
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/tfonnx.py", line 573, in process_tf_graph
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/tfonnx.py", line 357, in tensorflow_onnx_mapping
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx-1.6.0-py3.6.egg/tf2onnx/graph.py", line 175, in summary
AttributeError: 'NoneType' object has no attribute 'get_node_by_output'
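
For reference, here is the bazel-free sketch mentioned above. It is a minimal sketch assuming a TF 1.x frozen GraphDef; the filename is just an example:

from tensorflow.core.framework import graph_pb2

graph_def = graph_pb2.GraphDef()
with open('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholder nodes are the graph inputs.
print('inputs:', [n.name for n in graph_def.node if n.op == 'Placeholder'])

# Nodes that no other node consumes are candidate outputs.
consumed = {inp.lstrip('^').split(':')[0]
            for n in graph_def.node for inp in n.input}
print('candidate outputs:',
      [n.name for n in graph_def.node if n.name not in consumed])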

By adding --fold_const --opset 11 to the end of the convert command, I was able to get it to create an .onnx file, but now I am stuck. Running detectNet with it seems to have a problem with uint8 data types:

----------------------------------------------------------------
Input filename:   /usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    tf2onnx
Producer version: 1.6.0
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
[TRT]   failed to parse ONNX model '/usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.onnx'
[TRT]   device GPU, failed to load networks/SSD-Raccoonnet/ssd_raccoon.onnx
detectNet -- failed to initialize.
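
For what it’s worth, the UINT8 in that error matches what the model itself declares on its input. A minimal sketch for confirming this, assuming the onnx Python package is installed (the path is just an example):

import onnx
from onnx import TensorProto

model = onnx.load('ssd_raccoon.onnx')
for inp in model.graph.input:
    # elem_type is an integer enum; a uint8 input prints as 'UINT8' (value 2)
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, TensorProto.DataType.Name(elem_type))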

Hi skywolf,

TensorRT only supports FP32, FP16, INT32, and INT8: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/FoundationalTypes/DataType.html#tensorrt.DataType

UINT8 is not supported, as mentioned there. Can you use int8 instead of uint8 in your model before converting it to ONNX?

ref: https://devtalk.nvidia.com/default/topic/1067555/tensorrt/tensorrt-inference-error-while-load-onnx-model

Is there a script that I can run on my pb file which will “parse” it and fix the uint8’s?

I was thinking the same thing at first, but unfortunately I don’t think it’s that simple. UINT8 covers 0 to 255, while INT8 only covers -128 to 127, so if you have UINT8 weights with values in the 128-255 range, reinterpreting them as INT8 will likely alter the results of your network (a minimal numeric example is below). So I believe the network has to be trained with the types set to INT8 in order to learn the correct weights and be compatible with TensorRT at the moment.
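
A minimal sketch of the wrap-around, assuming numpy:

import numpy as np

# uint8 values above 127 do not fit in int8 and wrap to negative numbers,
# so a blind reinterpretation would corrupt the network's weights.
print(np.uint8(200).astype(np.int8))   # prints -56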

@NVES_R In other words, TensorRT can’t run any of the TF Object Detection models from the Model Zoo?


int32 would work. uint8 isn’t supported in TensorRT, and int8 can’t represent the range of values that uint8 can, but int32 or float values would represent all the pixel values correctly and let you keep the same weights.

Hello. I used tf.NodeDef to solve the same uint8 issue in the ‘mobilenet_v2_coco (pb) → onnx → trt’ process. The input node issue (image_tensor’s dtype is uint8) seems to be solved, but there is another problem. The network model (pb) has two nodes whose dtype is int64; one of them is tf.where (Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/ClipToWindow/Where:0). Does anyone have a solution?

  • I changed the uint8 node to a float32 node via tf.NodeDef, i.e., uint8 (0-255) became float32 (0.0-255.0). The original next node is a tf.cast from uint8 to float32; after the change it casts float32 to float32, which is effectively a pass-through. Is this right? (A rough sketch of this GraphDef surgery is shown below.)
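
Here is a rough sketch of the GraphDef surgery described above (TF 1.x; the node names match the stock TF Object Detection export, so adjust them for your own graph):

from tensorflow.core.framework import graph_pb2
from tensorflow.core.framework import types_pb2

graph_def = graph_pb2.GraphDef()
with open('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Placeholder' and node.name == 'image_tensor':
        # uint8 input placeholder -> float32 placeholder
        node.attr['dtype'].type = types_pb2.DT_FLOAT
    elif node.op == 'Cast' and node.attr['SrcT'].type == types_pb2.DT_UINT8:
        # the graph originally casts uint8 -> float32 right after the input;
        # after the change this becomes a float32 -> float32 pass-through
        node.attr['SrcT'].type = types_pb2.DT_FLOAT

with open('frozen_inference_graph_float32.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())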

This should help: Unsupported ONNX data type: UINT8 (2) · Issue #400 · onnx/onnx-tensorrt · GitHub