I’m trying to deploy an ONNX model using TensorRT (TensorRT 8.5.2-1+cuda11.4) on nvidia-jetpack 5.1.1-b56,
but I get an error and the deployment cannot complete. I don’t understand why this is happening.
Here is the output of trtexec:
[11/18/2024-09:31:35] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +675, now: CPU 915, GPU 6645 (MiB)
[11/18/2024-09:36:49] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +159, GPU +832, now: CPU 1074, GPU 7477 (MiB)
[11/18/2024-09:36:49] [TRT] [W] TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0
[11/18/2024-09:36:49] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/18/2024-09:36:49] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{2055, 14, 32}, 24470},)
[11/18/2024-09:36:49] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
Traceback (most recent call last):
  File "onnx2tensorrt.py", line 74, in <module>
    main()
  File "onnx2tensorrt.py", line 59, in main
    from_onnx(
  File "/home/ubuntu/yj/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 248, in from_onnx
    assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine