Failed to convert ONNX to TensorRT with build_engine.py

Description

Failed to run build_engine.py on the Jetson Nano with JetPack 4.6.
I built the ONNX model on my laptop using the command python -m tf2onnx.convert --saved-model H:\Jupyter_Notebooks\ObjectDetection\workspace\training_demo\exported-models\my_model\saved_model --output H:\Jupyter_Notebooks\ObjectDetection\workspace\training_demo\onnx\model.onnx --opset 11
When trying to build the engine on the Nano, the error below occurs. I can't use opset 9 on my desktop and I can't convert the model to ONNX on the Nano …

python build_engine.py --onnx onnx/model.onnx --engine engine/engine.trt --precision fp16
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +198, GPU +0, now: CPU 228, GPU 3457 (MiB)
[TensorRT] WARNING: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] INFO: No importer registered for op: TensorListStack. Attempting to import as plugin.
[TensorRT] INFO: Searching for plugin: TensorListStack, plugin_version: 1, plugin_namespace:
[TensorRT] ERROR: 3: getPluginCreator could not find plugin: TensorListStack version: 1
ERROR:EngineBuilder:Failed to load ONNX file: /home/rabah/TensorRT/samples/python/tensorflow_object_detection_api/onnx/model.onnx
ERROR:EngineBuilder:In node 9 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace c

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Replace with the path to your .onnx file
filename = "yourONNXmodel"
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
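For reference, a typical trtexec invocation for this case might look like the following sketch; the paths mirror the build_engine.py command above and are placeholders, not something confirmed in this thread:

```shell
# Attempt to parse the ONNX model and build an FP16 engine,
# printing a verbose log for debugging (paths are placeholders).
trtexec --onnx=onnx/model.onnx \
        --saveEngine=engine/engine.trt \
        --fp16 \
        --verbose
```

If the parser hits the same TensorListStack node, the verbose log will show exactly which node import fails.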
Thanks!

model.onnx (10.4 MB)

When I used create_onnx.py from the TensorRT Object Detection sample, it always gave me AttributeError: 'Graph' object has no attribute 'op_with_const'.
When I used tf2onnx.convert with the default opset 9, it raised ValueError: StridedSlice: attribute new_axis_mask not supported.
But with opset 11 it succeeded.
However, validating the model with check_model.py returned this error: `ValidationError: No Op registered for TensorListStack with domain_version of 11

==> Context: Bad node spec for node. Name: StatefulPartitionedCall/map/TensorArrayV2Stack_1/TensorListStack OpType: TensorListStack`

And as far as I know, Opset 9 is needed to work on the jetson …

Hi,

Sorry, could you please give more details?
Were you able to successfully convert TensorFlow → ONNX → TensorRT with opset 11?
Are you facing the above issue with opset 9?
Which version of TensorRT are you using?

Sorry for the late reply, but you took a while to answer so I forgot about this post. Yes, I did successfully convert TensorFlow to ONNX, but only with opset 11. With opset 9 it raised ValueError: StridedSlice: attribute new_axis_mask not supported.
On my PC it's TensorRT 8.2.4.2.
My Nano is on JP43.