The ONNX file was generated from a PyTorch RetinaNet model and then constant-folded with Polygraphy. When running /usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=model.engine, trtexec fails with this error:
...
[07/11/2023-17:14:31] [I] [TRT] [GpuLayer] MYELIN: {ForeignNode[onnx::Equal_3245.../model/Concat_92]}
[07/11/2023-17:14:31] [I] [TRT] [GpuLayer] TRAIN_STATION: [trainStation3]
[07/11/2023-17:14:31] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 913, GPU 5828 (MiB)
[07/11/2023-17:14:31] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 914, GPU 5828 (MiB)
[07/11/2023-17:14:31] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[07/11/2023-17:14:31] [E] Error[2]: [injectImplicitPadding.cpp::grabShapeHostToDeviceNodes::419] Error Code 2: Internal Error (Assertion !holder failed. each train expected to have at most one ShapeHostToDeviceNode)
[07/11/2023-17:14:31] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[07/11/2023-17:14:31] [E] Engine could not be created from network
[07/11/2023-17:14:31] [E] Building engine failed
[07/11/2023-17:14:31] [E] Failed to create engine from model or file.
[07/11/2023-17:14:31] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=model.engine
Any ideas on what is causing this?
Environment
TensorRT Version: 8.5
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Ubuntu 20
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.13
Baremetal or Container (if container which image + tag):
Relevant Files
Steps To Reproduce
Using that ONNX file, run /usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=model.engine
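For context, the preprocessing pipeline described above can be sketched as the following two commands. The input filename retinanet.onnx is a hypothetical placeholder (the post does not give the original export name), and the Polygraphy step is an assumption about how the folding was done, based on the standard constant-folding workflow:

# Step 1 (assumed): fold constants in the exported ONNX with Polygraphy
polygraphy surgeon sanitize retinanet.onnx --fold-constants -o folded.onnx

# Step 2 (from the post): build a TensorRT engine from the folded model
/usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=model.engine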
Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import sys
import onnx
# Usage: python check_model.py <model.onnx>
model = onnx.load(sys.argv[1])
onnx.checker.check_model(model)  # raises onnx.checker.ValidationError if the model is invalid
2) Try running your model with the trtexec command.
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!