ONNX Graph Error

The developer is seeing an error when creating the ONNX graph: the conversion script fails with AttributeError: 'Variable' object has no attribute 'values'.

Output report:

python create_onnx.py --pipeline_config C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/pipeline.config --saved_model C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/saved_model --onnx C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/model.onnx
C:\Tensorflow\venv\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\Tensorflow\venv\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\Tensorflow\venv\lib\site-packages\numpy\.libs\libopenblas.WCDJNK7YVMPZQ2ME2ZZHJJRJ3JIKNDB7.gfortran-win_amd64.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
INFO:tf2onnx.tf_loader:Signatures found in model: [serving_default].
INFO:tf2onnx.tf_loader:Output names: ['detection_anchor_indices', 'detection_boxes', 'detection_classes', 'detection_multiclass_scores', 'detection_scores', 'num_detections', 'raw_detection_boxes', 'raw_detection_scores']
WARNING:tensorflow:From C:\Tensorflow\venv\lib\site-packages\tf2onnx\tf_loader.py:711: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
WARNING:tensorflow:From C:\Tensorflow\venv\lib\site-packages\tf2onnx\tf_loader.py:711: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:ModelHelper:Loaded saved model from C:\Tensorflow\data\models\newModelSSDMobilenetv2_300\saved_model
INFO:tf2onnx.tfonnx:Using tensorflow=2.8.0, onnx=1.11.0, tf2onnx=1.10.0/07e9e0
INFO:tf2onnx.tfonnx:Using opset <onnx, 11>
INFO:tf2onnx.tf_utils:Computed 4 values for constant folding
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_4
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_5
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_8
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_1
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.optimizer:Optimizing ONNX model
INFO:tf2onnx.optimizer:After optimization: BatchNormalization -53 (60->7), Cast -481 (2037->1556), Const -451 (3381->2930), Gather +7 (488->495), Identity -199 (199->0), Less -2 (99->97), Mul -2 (504->502), Placeholder -9 (18->9), Reshape -17 (405->388), Shape -8 (216->208), Slice -7 (427->420), Squeeze -22 (342->320), Transpose -272 (293->21), Unsqueeze -166 (478->312)
INFO:ModelHelper:TF2ONNX graph created successfully
INFO:ModelHelper:Model is ssd_mobilenet_v2_keras
INFO:ModelHelper:Height is 300
INFO:ModelHelper:Width is 300
INFO:ModelHelper:First NMS score threshold is 9.99999993922529e-09
INFO:ModelHelper:First NMS iou threshold is 0.6000000238418579
INFO:ModelHelper:First NMS max proposals is 100
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
INFO:ModelHelper:ONNX graph input shape: [1, 300, 300, 3] [NCHW format set]
INFO:ModelHelper:Found Conv node 'StatefulPartitionedCall/ssd_mobile_net_v2_keras_feature_extractor/model/Conv1/Conv2D' as stem entry
[the same "[W] Inference failed" / ORT providers warning pair repeats several more times here]
INFO:ModelHelper:Found Concat node 'StatefulPartitionedCall/concat_1' as the tip of BoxPredictor/ConvolutionalClassHead_
INFO:ModelHelper:Found Squeeze node 'StatefulPartitionedCall/Squeeze' as the tip of BoxPredictor/ConvolutionalBoxHead_
Traceback (most recent call last):
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 673, in <module>
    main(args)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 649, in main
    effdet_gs.process_graph(args.first_nms_threshold, args.second_nms_threshold)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 622, in process_graph
    self.graph.outputs = first_nms(-1, True, first_nms_threshold)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 486, in first_nms
    anchors_tensor = self.extract_anchors_tensor(box_net_split)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 312, in extract_anchors_tensor
    anchors_y = get_anchor(0, "Add")
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 301, in get_anchor
    if (node.inputs[1].values).size == 1:
AttributeError: 'Variable' object has no attribute 'values'
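Reading the log, a plausible chain of cause and effect (an interpretation, not confirmed by the script's authors): the repeated "[W] Inference failed" messages suggest the constant-folding pass never actually ran, because this onnxruntime build requires an explicit providers list since ORT 1.9. With folding skipped, the tensor feeding the anchor Add node is still a gs.Variable rather than a gs.Constant, and in onnx-graphsurgeon only gs.Constant exposes .values, hence the AttributeError. A minimal defensive sketch of the check get_anchor could make (a hypothetical helper, not part of create_onnx.py):

import onnx_graphsurgeon as gs

def constant_values(tensor):
    # Only gs.Constant carries folded numpy data in .values;
    # gs.Variable is a runtime tensor with just a shape and dtype.
    if isinstance(tensor, gs.Constant):
        return tensor.values
    raise TypeError(
        "Expected a folded constant for '{}', got {}; constant folding may "
        "have failed (see the ORT provider warnings above).".format(
            tensor.name, type(tensor).__name__))

Such a guard only turns the crash into a clearer message; the underlying fix is getting the constant-folding step to run.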

Environment

numpy 1.22.3
Pillow 9.0.1
TensorRT 8.4.0.6
TensorFlow 2.8.0
object detection 0.1
pycuda 2021.1
onnx 1.11.0
onnxruntime 1.11.0
tf2onnx 1.10.0
onnx-graphsurgeon 0.3.10
Windows 10

Steps To Reproduce

  1. Download SSD MobileNet v2 320x320 from the TensorFlow 2 model zoo.
  2. Export the saved model with float_image_tensor as the input type:
cd /path/to/models/research/object_detection
python exporter_main_v2.py \
    --input_type float_image_tensor \
    --trained_checkpoint_dir /path/to/ssd_mobilenet_v2_320x320_coco17_tpu-8/checkpoint \
    --pipeline_config_path /path/to/ssd_mobilenet_v2_320x320_coco17_tpu-8/pipeline.config \
    --output_directory /path/to/export
  3. Create the ONNX graph:
python create_onnx.py \
    --pipeline_config /path/to/exported/pipeline.config \
    --saved_model /path/to/exported/saved_model \
    --onnx /path/to/save/model.onnx
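
If you hit the repeated "[W] Inference failed" warnings shown in the output report, they match the ORT 1.9 behavior change the message itself describes: an InferenceSession must be given an explicit providers list when the build ships more than one execution provider. A minimal sketch of the call the warning asks for (the model path here is illustrative):

import onnxruntime as ort

# Since onnxruntime 1.9, providers must be passed explicitly when several
# execution providers are compiled into the build.
session = ort.InferenceSession(
    "model.onnx",  # illustrative path
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider",
               "CPUExecutionProvider"])

In this workflow the session is created inside the conversion tooling rather than in user code, so the practical fix is more likely updating whatever component makes this call (or pinning onnxruntime to a pre-1.9 release) than editing your own script.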

@KamalLAGH - I moved your topic "Create onnx graph throws AttributeError: 'Variable' object has no attribute 'values'" from the Computer Vision & Image Processing forum to this thread.

Hi,

If you are facing an issue generating the ONNX model using tf2onnx, we recommend posting your concern here to get better help.

Thank you.