Problems with building cudla models using EngineCapability::kDLA_STANDALONE

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
1.9.1.10844
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Problems with building cuDLA models using EngineCapability::kDLA_STANDALONE.
We want to use the kDLA_STANDALONE engine capability to run the model, but we encounter the following error when compiling the model.

[01/02/1970-22:13:57] [V] [TRT] Original: 97 layers
[01/02/1970-22:13:57] [V] [TRT] After dead-layer removal: 97 layers
[01/02/1970-22:13:58] [E] Error[2]: [foreignNode.cpp::determineCandidateForeignNodes::910] Error Code 2: Internal Error (Safety certified DLA should only have one graph for the whole network.)
[01/02/1970-22:13:58] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[01/02/1970-22:13:58] [E] Engine could not be created from network
[01/02/1970-22:13:58] [E] Building engine failed
[01/02/1970-22:13:58] [E] Failed to create engine from model or file.
[01/02/1970-22:13:58] [E] Engine set up failed

The build works fine as long as we do not use kDLA_STANDALONE mode.
I know that normal models typically contain many foreignNodes, so how do we use kDLA_STANDALONE to build models with multiple foreignNodes?
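For reference, below is a minimal sketch of how such a build is usually set up with the TensorRT 8.x C++ API. This is an assumed reconstruction, not the actual code from this topic: the ONNX file name, the FP16 precision choice, and the DLA core index are placeholders, and the exact flags required for a standalone loadable (for example BuilderFlag::kDIRECT_IO and explicit DLA I/O formats) can vary between TensorRT releases.

// Minimal sketch (assumptions noted below) of building a DLA standalone loadable
// with the TensorRT 8.x C++ API. "model.onnx", FP16, and DLA core 0 are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto explicitBatch =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    // Standalone loadable: the whole network must run on a single DLA core,
    // so GPU fallback is not enabled here.
    config->setEngineCapability(nvinfer1::EngineCapability::kDLA_STANDALONE);
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);
    config->setFlag(nvinfer1::BuilderFlag::kFP16); // DLA requires FP16 or INT8 precision
    // Depending on the TensorRT release, BuilderFlag::kDIRECT_IO and explicit
    // DLA-supported I/O formats may also be needed for a standalone build.

    // This is the call that fails with "Safety certified DLA should only have
    // one graph for the whole network" when TensorRT partitions the model into
    // more than one DLA subgraph.
    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!serialized)
    {
        std::cerr << "Building the DLA loadable failed" << std::endl;
        return 1;
    }

    std::ofstream out("model.dla", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}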

Dear @haihua.wei,
May I know which platform you are using? Is it the DRIVE AGX Orin Devkit or another platform?
I see you selected "other" for the hardware platform in the checkboxes.

It is DRIVE AGX Orin; I'll update the selection.
@SivaRamaKrishnaNV

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Dear @haihua.wei,
Did you use trtexec to build the model for DLA? Could you share the model so we can reproduce the issue?

@haihua.wei
Also check out the DLA GitHub page for samples and resources: Recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications.

We have a FAQ page that addresses some common questions that we see developers run into: Deep-Learning-Accelerator-SW/FAQ