I am attempting to convert an ONNX model to TensorRT with the Python package on a Jetson Orin Development Kit. This is a continuation of this thread https://forums.developer.nvidia.com/t/trtexec-internal-error-symbolic-relation-a-z-0-is-always-false/237317 but I am creating a new topic since the previous one has been closed.

I recently reflashed and upgraded the Jetson to JetPack 5.1, and found a similar issue here https://forums.developer.nvidia.com/t/error-code-2-internal-error-assertion-engineptr-nullptr-failed/205371/3 but their solution was simply to upgrade JetPack, which I have already done. Running apt show, my nvidia-cuda version is 5.1-b147 and my nvidia-tensorrt version is 5.1-b147.

The only modification I have made to the code was to add a memory pool limit, since the script threw a not-enough-memory error the first time I ran it. The complete script is here:
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
success = parser.parse_from_file("sidewalk3.onnx")
config = builder.create_builder_config()
config.set_memory_pool_limit(pool=trt.MemoryPoolType.WORKSPACE, pool_size=16777216)  # 16 MiB
serialized_engine = builder.build_serialized_network(network, config)
The build then fails with the following errors:

[02/06/2023-09:52:58] [TRT] [E] 4: [optimizer.cpp::computeCosts::3725] Error Code 4: Internal Error (Could not find any implementation for node {ForeignNode[onnx::MatMul_9671 + (Unnamed Layer* 5373) [Shuffle]...Transpose_3662 + Reshape_3663]} due to insufficient workspace. See verbose log for requested sizes.)
[02/06/2023-09:52:58] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
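For reference, since the first error mentions insufficient workspace, I suspect the limit I set may simply be too small. Assuming set_memory_pool_limit takes a byte count, my value works out to only 16 MiB (plain arithmetic below, no TensorRT needed); a limit in the GiB range is what I would try next:

```python
# Workspace pool sizes in bytes; set_memory_pool_limit expects a byte count.
MiB = 1 << 20
GiB = 1 << 30

current_limit = 16777216             # the value from my script
print(current_limit // MiB)          # 16 -> only 16 MiB of workspace

# A much larger limit (e.g. 2 GiB) that I could pass instead:
print(2 * GiB)                       # 2147483648
```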
Any help would be very much appreciated, thank you!