Exporting to ONNX

I use the following command to export a model to ONNX:

tao model faster_rcnn export --gpu_index $GPU_INDEX -m /workspace/tao-experiments/model.hdf5  \
                        -o $USER_EXPERIMENT_DIR/model.onnx \
                        -e $SPECS_DIR/specs.txt

At the end of the run, I get the following warnings:

2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_16/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_16/Switch_2 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_16/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_12/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_12/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_13/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,984 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_8/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_13/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_8/Switch_2 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_8/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_11/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_11/Switch_2 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_11/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_2/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_2/Switch_2 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_2/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_5/Merge of type Merge
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_5/Switch_2 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/cond_5/Switch_1 of type Switch
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/BroadcastTo_2 of type BroadcastTo
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/BroadcastTo_1 of type BroadcastTo
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:57,985 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node proposal_1/BroadcastTo of type BroadcastTo
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:24:58,013 [TAO Toolkit] [WARNING] keras2onnx 301: WARN: No corresponding ONNX op matches the tf.op node crop_and_resize_1/CropAndResize of type CropAndResize
      The generated ONNX model needs run with the custom op supports.
2024-03-01 13:25:15,709 [TAO Toolkit] [INFO] keras2onnx 347: The ONNX operator number change on the optimization: 808 -> 489
Execution status: PASS
2024-03-01 13:25:29,115 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.

When I load the model using onnxruntime, I get an error:

import onnxruntime as ort
sess = ort.InferenceSession('model.onnx')

Error:

InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model.onnx failed:This is an invalid model. In Node, ("proposal", ProposalDynamic, "", -1) : ("sigmoid_output": tensor(float),"convolution_output1": tensor(float),) -> ("proposal_out": tensor(float),) , Error No Op registered for ProposalDynamic with domain_version of 12

PS1: I also tried the export with additional arguments, and I still get the same error:

                        --target_opset 12 \
                        --gen_ds_config

PS2: I also tried creating the session with CUDA as the execution provider, still the same error:

sess = ort.InferenceSession('model.onnx', providers=["CUDAExecutionProvider"])
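
For reference, one way to confirm which non-standard ops remain in the exported graph is to check each node's op type against onnx's standard op schema registry — a quick sketch, assuming the file from the export above:

import onnx
from collections import Counter

model = onnx.load('model.onnx')
# Op types absent from the standard schema registry must come from a
# custom domain (e.g. the ProposalDynamic op named in the error).
standard = {s.name for s in onnx.defs.get_all_schemas()}
custom = Counter(n.op_type for n in model.graph.node
                 if n.op_type not in standard)
print(custom)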

It seems the onnx file was generated successfully. Can the other cells in “10. Deploy!” run successfully? Please refer to tao_tutorials/notebooks/tao_launcher_starter_kit/faster_rcnn/faster_rcnn.ipynb at main · NVIDIA/tao_tutorials · GitHub.
If possible, please share the onnx file as well.

Also, for the ProposalDynamic op in the error above: please refer to the implementation in TensorRT/plugin/proposalPlugin at release/8.6 · NVIDIA/TensorRT · GitHub.

BTW, why are you going to use onnxruntime instead?

Hello @Morganh,

My goal is to run inference on a Raspberry Pi, so I need an ONNX model rather than a TensorRT engine, because TensorRT works only on NVIDIA GPUs as per the documentation. My Pi will have a TPU hardware accelerator.

All the subsequent cells in the notebook are dedicated to TensorRT conversion; they run fine, but they do not help my case.

What is strange to me is that I am able to read and display the model information using onnx:

import onnx

onnx_model = onnx.load(model_path)
print('Model Format Version {}'.format(onnx_model.ir_version))
print('Model Opset Version {}'.format(onnx_model.opset_import[0].version))

Gives:

Model Format Version 8
Model Opset Version 12

And:

print(f"Number of inputs: {len(model_graph.input)}")
print(f"Number of outputs: {len(model_graph.output)}")

for input_tensor in model_graph.input:
    print(f"Input name: {input_tensor.name}, data type: {input_tensor.type.tensor_type.elem_type}")

for output_tensor in model_graph.output:
    print(f"Output name: {output_tensor.name}, data type: {output_tensor.type.tensor_type.elem_type}")

Gives:

Number of inputs: 1
Number of outputs: 2
Input name: input_image, data type: 1
Output name: nms_out, data type: 1
Output name: nms_out_1, data type: 1

Which is correct.

However:

sess = ort.InferenceSession(model_path)

always results in an INVALID_GRAPH error.
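
The discrepancy is expected: onnx.load() only deserializes the protobuf, so inspecting metadata succeeds, while onnxruntime resolves every node's op at session creation. The onnx checker reproduces the same failure without onnxruntime — a minimal sketch:

import onnx

onnx_model = onnx.load(model_path)
# load() only parses the protobuf; no op resolution happens here.
# check_model() resolves each node against registered op schemas,
# so it should fail on ProposalDynamic just as InferenceSession does.
onnx.checker.check_model(onnx_model)  # expected to raise ValidationError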

You can open the onnx file via the netron app. There are some specific plugins in the faster_rcnn onnx file. For example, the “Proposal” plugin; it is implemented in the above-mentioned link.

@Morganh the onnx file is larger than 100 MB. The interface does not allow me to share it.

Yes, I can open it via the netron app. Attached is an export as PNG.
Regarding the proposal plugins: if I understood well, NMSDynamicTRT is the layer causing the error, and I should somehow point ONNX Runtime to the plugin so that it can work with it. If so, how can I do that? Could you please point me to a starting point?

The proposal plugin can be found below.

As mentioned above, the implementation is in TensorRT/plugin/proposalPlugin at release/8.6 · NVIDIA/TensorRT · GitHub.

For the CropAndResize plugin, its implementation is in
TensorRT/plugin/cropAndResizePlugin at release/8.6 · NVIDIA/TensorRT · GitHub.

You can refer to them.

Thanks for the direction Morgan. So now I know that these are specific implementations. I imagine I should not reimplement them, as that really goes against the spirit of the TAO and ONNX ecosystems. How can I get these plugins plugged into ONNX?

If there are no similar ops in ONNX, they need to be implemented by the user.

Thanks for your answer Morgan. In practice, how do I do that?

For the faster_rcnn onnx file, you can trim it into several parts.
You can use the polygraphy surgeon extract command. Refer to TensorRT/tools/Polygraphy/examples/cli/surgeon/01_isolating_subgraphs at main · NVIDIA/TensorRT · GitHub.
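
For example — a sketch only; the exact tensor names to cut at should be read off the graph in netron (input_image, sigmoid_output, and convolution_output1 below are taken from the model output and error message earlier in this thread, and the output file name is arbitrary):

polygraphy surgeon extract model.onnx \
    --inputs input_image:auto:auto \
    --outputs sigmoid_output:auto convolution_output1:auto \
    -o backbone_rpn.onnx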

Then implement a similar function/code based on TensorRT/plugin/proposalPlugin at release/8.6 · NVIDIA/TensorRT · GitHub, and get its output.

Similarly, implement a similar function/code based on TensorRT/plugin/cropAndResizePlugin at release/8.6 · NVIDIA/TensorRT · GitHub.

Similar topic: Errors while reading ONNX file produced by TAO 5 - #10 by Morganh.

Thanks Morgan. I will keep this option as a last resort, as it needs time to implement. Are there other alternatives to get this ONNX running on a Raspberry Pi? Can I run a TensorRT engine on a Raspberry Pi equipped with a TPU dongle?

I am afraid not; a similar topic is Jetson Inference on Raspberry Pi.

You can refer to Add a mode to ROIAlign for implementing tf.crop_and_resize? · Issue #2100 · onnx/onnx · GitHub.

I see. Thanks for your continuous help.

What do you think about running the onnxruntime inference session in C++ instead of Python? Is that easily doable, or even more complicated than reimplementing a Python script?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Officially, the TAO faster_rcnn user guide expects inference to be run with a TensorRT engine or the Keras model. There is no guide for onnxruntime.
For the plugins inside the faster_rcnn onnx, the implementation is in C++, like the others under TensorRT/plugin at release/8.6 · NVIDIA/TensorRT · GitHub.
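
For completeness: onnxruntime can load custom ops from a shared library at session creation. If the Proposal/CropAndResize ops were reimplemented against ONNX Runtime's custom-op API and compiled into a .so, the library could be registered from Python like this — a sketch, with a hypothetical library name:

import onnxruntime as ort

so = ort.SessionOptions()
# Hypothetical library implementing ProposalDynamic / CropAndResizeDynamic
# against ONNX Runtime's custom-op C API.
so.register_custom_ops_library('./libtao_frcnn_ops.so')
sess = ort.InferenceSession('model.onnx', so,
                            providers=['CPUExecutionProvider'])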
