Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{2055, 14, 32}, 24470})

I'm trying to deploy an ONNX model using TensorRT (TensorRT 8.5.2-1+cuda11.4, nvidia-jetpack 5.1.1-b56),
but I get an error and the deployment cannot be completed. I don't understand why this is happening.

Here is the output of trtexec:

[11/18/2024-09:31:35] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +675, now: CPU 915, GPU 6645 (MiB)
[11/18/2024-09:36:49] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +159, GPU +832, now: CPU 1074, GPU 7477 (MiB)
[11/18/2024-09:36:49] [TRT] [W] TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0
[11/18/2024-09:36:49] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/18/2024-09:36:49] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{2055, 14, 32}, 24470},)
[11/18/2024-09:36:49] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
Traceback (most recent call last):
  File "onnx2tensorrt.py", line 74, in <module>
    main()
  File "onnx2tensorrt.py", line 59, in main
    from_onnx(
  File "/home/ubuntu/yj/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 248, in from_onnx
    assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine

Uploading: end2end.zip…
This is the model I want to convert, and it was generated from the SegFormer model.
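
For reference, the failing call is the engine build inside mmdeploy's from_onnx. A minimal sketch of the same ONNX-to-engine flow with the plain TensorRT Python API (the file names and workspace size below are placeholders, not the actual mmdeploy code) is:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the exported model (placeholder file name).
with open("end2end.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
# 2 GiB workspace; the value here is arbitrary for this sketch.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)

# This is the step that fails with Error Code 2 in the log above.
serialized = builder.build_serialized_network(network, config)
assert serialized is not None, "Failed to create TensorRT engine"

with open("end2end.engine", "wb") as f:
    f.write(serialized)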

Hi,

As the error reports "Unknown embedded device detected", could you share which device you are using?
Is it a Xavier 64GB?

Could you also try to convert a simple model to see if this issue is model-related or platform-related?
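
For example, a quick check along these lines (a minimal sketch; the tiny network and input shape are arbitrary) can separate the two cases:

import torch

# A deliberately tiny model: if this converts cleanly, the TensorRT/Jetson
# setup is fine and the problem is specific to the SegFormer export.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "tiny.onnx", opset_version=13,
                  input_names=["input"], output_names=["output"])

# Then convert it with the same tooling as the real model, e.g.:
#   trtexec --onnx=tiny.onnx --saveEngine=tiny.engine --verbose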

Thanks.

Thank you for your response. The device is the EA-B600, which is equipped with the NVIDIA Jetson AGX Orin. I have tried converting simple models, and that works, so this issue should be model-related.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you share the verbose output log for the working and non-working models?
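
For example (a minimal sketch, assuming the conversion goes through the TensorRT Python API as in the mmdeploy script), the verbose log can be produced by building with a VERBOSE logger; with trtexec the equivalent is the --verbose flag:

import tensorrt as trt

# VERBOSE makes the builder print layer, tactic, and memory details,
# which is what we need to compare the working and non-working builds.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
# Build the network and config as usual; redirect stdout/stderr to a file
# to capture the full log.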
Thanks.