Assertion engine != nullptr failed

I am attempting to convert an ONNX model to TensorRT with the Python package on a Jetson Orin Development Kit. This is a continuation of this thread https://forums.developer.nvidia.com/t/trtexec-internal-error-symbolic-relation-a-z-0-is-always-false/237317, but I am creating a new topic since the previous thread has been closed. I recently reflashed and upgraded the Jetson device to JetPack 5.1 and found a similar issue here https://forums.developer.nvidia.com/t/error-code-2-internal-error-assertion-engineptr-nullptr-failed/205371/3, but their solution was simply to upgrade JetPack, which I have already done. Running apt show, I see that my nvidia-cuda version is 5.1-b147 and my nvidia-tensorrt version is 5.1-b147. The only modification I made to the code was to add a memory pool limit, since the script threw a not-enough-memory error the first time I ran it. The complete script is here:

import tensorrt as trt

# Build a TensorRT engine from the ONNX model
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Parse the ONNX file into the network definition
parser = trt.OnnxParser(network, logger)
success = parser.parse_from_file("sidewalk3.onnx")

# Cap the workspace memory pool at 16 MiB (added after the out-of-memory error)
config = builder.create_builder_config()
config.set_memory_pool_limit(pool=trt.MemoryPoolType.WORKSPACE, pool_size=16777216)

serialized_engine = builder.build_serialized_network(network, config)

[02/06/2023-09:52:58] [TRT] [E] 4: [optimizer.cpp::computeCosts::3725] Error Code 4: Internal Error (Could not find any implementation for node {ForeignNode[onnx::MatMul_9671 + (Unnamed Layer* 5373) [Shuffle]...Transpose_3662 + Reshape_3663]} due to insufficient workspace. See verbose log for requested sizes.)
[02/06/2023-09:52:58] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

Any help would be very much appreciated, thank you!

Hi,

Could you try to convert the model with trtexec to see if it works?

$ /usr/src/tensorrt/bin/trtexec --onnx=
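For example, with the model from the script above, the full command would look something like this (the --saveEngine path is only illustrative, not from the thread):

$ /usr/src/tensorrt/bin/trtexec --onnx=sidewalk3.onnx --saveEngine=sidewalk3.engine --verbose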

Also, could you confirm which platform you are using?
Did you switch from the XavierNX to the Orin device?

Thanks.

trtexec_output.txt (260.6 KB)

I am now using the Orin Development Kit

Hi,

Thanks, I’m moving your topic to the XavierNX board.
I will update you with more information later.

Hello, I am not using a XavierNX board. I am using an Orin Development Kit.

Hi, just in case you didn’t see my last post since the thread was moved, I want to clarify again that I am not using the XavierNX; I am using the Orin Development Kit. Should my thread be moved back to the Orin forum? If so, is that something I could do, or is it something for a moderator?

Hi,

Sorry for the mistake. I have moved your topic back.

Based on the log, the model converts successfully with trtexec, so the error likely comes from your script rather than the model itself.

Could you remove the line below to see if it helps?

config.set_memory_pool_limit(pool=trt.MemoryPoolType.WORKSPACE, pool_size=16777216)
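For context, 16777216 bytes is only 16 MiB, and the computeCosts error above reports insufficient workspace for at least one node. So an alternative sketch, assuming the device has memory to spare, would be to raise the limit instead of removing it (the 1 GiB figure below is an assumption, not from the thread):

config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB; illustrative size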

Thanks.

Sorry for the delay. This worked! Thank you for your help. The only thing that wasn’t completely clear in the developer guide is how to run inference with the execution context generated from the deserialized engine. Is there a more recent or deeper explanation somewhere other than here: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
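For anyone landing here with the same question, below is a minimal sketch of that inference flow using the TensorRT 8.x binding API with PyCUDA. It assumes static input shapes and an engine serialized to "sidewalk3.engine" (a filename not used in the thread); the random input data is a placeholder, so adapt the buffer handling to your model's actual inputs and outputs.

import numpy as np
import pycuda.autoinit  # creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built earlier (path is an assumption)
runtime = trt.Runtime(logger)
with open("sidewalk3.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host and device buffers for every binding
stream = cuda.Stream()
bindings, buffers = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host_mem = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(dev_mem))
    buffers.append((host_mem, dev_mem, engine.binding_is_input(i)))

# Fill the input buffer(s), copy to device, execute, copy results back
for host_mem, dev_mem, is_input in buffers:
    if is_input:
        host_mem[:] = np.random.rand(host_mem.size)  # placeholder input data
        cuda.memcpy_htod_async(dev_mem, host_mem, stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host_mem, dev_mem, is_input in buffers:
    if not is_input:
        cuda.memcpy_dtoh_async(host_mem, dev_mem, stream)
stream.synchronize()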
