Description
Failed to build a TensorRT engine with build_engine.py on a Jetson Nano (JetPack 4.6).
I built the ONNX model on my laptop with:
python -m tf2onnx.convert --saved-model H:\Jupyter_Notebooks\ObjectDetection\workspace\training_demo\exported-models\my_model\saved_model --output H:\Jupyter_Notebooks\ObjectDetection\workspace\training_demo\onnx\model.onnx --opset 11
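For completeness, here is how the presence of the unsupported op can be confirmed on either machine; this is a minimal sketch using the onnx Python package (the onnx/model.onnx path is the Nano-side one from the build command below):

```python
import onnx
from collections import Counter

# Count every op type in the exported graph to see which TensorFlow
# ops (e.g. TensorListStack) survived the tf2onnx conversion.
model = onnx.load("onnx/model.onnx")
ops = Counter(node.op_type for node in model.graph.node)
for op, count in sorted(ops.items()):
    print(f"{op}: {count}")
```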
When I try to build the engine on the Nano, I get the error below. I can't use opset 9 on my desktop, and I can't convert the model to ONNX on the Nano …
python build_engine.py --onnx onnx/model.onnx --engine engine/engine.trt --precision fp16
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +198, GPU +0, now: CPU 228, GPU 3457 (MiB)
[TensorRT] WARNING: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] INFO: No importer registered for op: TensorListStack. Attempting to import as plugin.
[TensorRT] INFO: Searching for plugin: TensorListStack, plugin_version: 1, plugin_namespace:
[TensorRT] ERROR: 3: getPluginCreator could not find plugin: TensorListStack version: 1
ERROR:EngineBuilder:Failed to load ONNX file: /home/rabah/TensorRT/samples/python/tensorflow_object_detection_api/onnx/model.onnx
ERROR:EngineBuilder:In node 9 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace c
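To reproduce just the parse failure without the full build script, here is a minimal sketch against the TensorRT 8.x Python API that ships with JetPack 4.6 (same onnx/model.onnx path as above); it surfaces the same per-node parser errors:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
# Register TensorRT's built-in plugins before parsing, so any op that
# does have a plugin creator can be resolved (TensorListStack has none).
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("onnx/model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # e.g. UNSUPPORTED_NODE for node 9
```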
Environment
TensorRT Version: 8.0.1 (as shipped with JetPack 4.6)
GPU Type: Jetson Nano (128-core Maxwell)
Nvidia Driver Version: bundled with JetPack 4.6 (L4T 32.6.1)
CUDA Version: 10.2 (JetPack 4.6)
CUDNN Version: 8.2.1 (JetPack 4.6)
Operating System + Version: Ubuntu 18.04 / JetPack 4.6 on the Nano; the tf2onnx conversion ran on a Windows laptop
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Script: build_engine.py from the TensorRT sample samples/python/tensorflow_object_detection_api (run from /home/rabah/TensorRT on the Nano).
Model: a TensorFlow Object Detection API exported saved_model (exported-models/my_model/saved_model), converted to onnx/model.onnx with the tf2onnx command above.
Steps To Reproduce
1. Export a trained model with the TensorFlow Object Detection API exporter (exported-models/my_model/saved_model).
2. On the laptop, convert it to ONNX with the tf2onnx.convert command shown in the Description (opset 11).
3. Copy model.onnx to the Nano and, from TensorRT/samples/python/tensorflow_object_detection_api, run:
   python build_engine.py --onnx onnx/model.onnx --engine engine/engine.trt --precision fp16
4. The build fails with the TensorListStack error; the full output is in the Description above. A sketch for inspecting the failing node follows below.
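To help pinpoint the rejected node, here is a small sketch that dumps node 9 and its neighbours from the ONNX graph; it assumes the parser's node index matches the order of model.graph.node, which may not hold exactly:

```python
import onnx

model = onnx.load("onnx/model.onnx")
# The parser error points at node 9; print it along with its neighbours
# to show what feeds into and out of the unsupported op.
for idx in range(7, 12):
    node = model.graph.node[idx]
    print(idx, node.op_type, node.name,
          "inputs:", list(node.input),
          "outputs:", list(node.output))
```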