Error while using DLA

I get this error whenever I try to convert my model to DLA using the TRT C++ API.

2: [nvmRegion.cpp::NvmRegion::38] Error Code 2: Internal Error (Assertion (isUnpackedNvmCHW() || isNvmC32HW() || isNvmC16HW() || isNvmHWC4() || isNvm8C32() || isNvm8() || isUnpackedNvmHWC()) && "format not supported by NvMedia" failed.)

I used these lines to configure the DLA
("config" is an IBuilderConfig pointer; DLACore is an integer equal to 0):
config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);   // run layers on the DLA by default
config->setEngineCapability(nvinfer1::EngineCapability::kDLA_STANDALONE);
config->setFlag(nvinfer1::BuilderFlag::kFP16);              // DLA requires FP16 or INT8
config->setFlag(nvinfer1::BuilderFlag::kDIRECT_IO);         // no reformatting layers at I/O
config->setDLACore(DLACore);

Hi,

Could you try to convert the model with trtexec to see if it works?

Thanks.

It does work with trtexec.

Any updates? Have you looked into this?

Hi,

Do you mean the model can be converted to a TensorRT engine when using trtexec?
Did you also add the --directIO flag?
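
For reference, a trtexec invocation that roughly mirrors your C++ builder settings might look like this (the model path is a placeholder, and depending on your model you may also need to pin DLA-supported I/O formats):

```shell
# Placeholder model path; mirrors the C++ config above
# (default device DLA core 0, FP16, direct I/O).
trtexec --onnx=model.onnx \
        --useDLACore=0 \
        --fp16 \
        --directIO
```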

Also, would you mind removing the following line to see if it works?

config->setEngineCapability(nvinfer1::EngineCapability::kDLA_STANDALONE);

Thanks.

Hey thanks, removing that line helped.
Could you tell me why exactly that line would cause it to break, and what that engine capability flag is used for, if I can use the DLA without it?

Hi,

The kDLA_STANDALONE flag enables DLA standalone mode.
In that mode the builder generates a DLA loadable instead of a TensorRT engine. A loadable is meant to be run outside of TensorRT (e.g. via cuDLA or NvMedia), which is why it imposes stricter constraints: all I/O tensors must use formats that NvMedia supports, hence the assertion you saw.
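
As a sketch of the difference (function and variable names here are placeholders, assuming you already have a parsed network; this is not a complete program):

```cpp
#include "NvInfer.h"

// Sketch: build for DLA, either as a normal TensorRT engine or as a
// standalone DLA loadable. Assumes builder/network/config were created
// via the usual TensorRT API calls.
nvinfer1::IHostMemory* buildForDLA(nvinfer1::IBuilder& builder,
                                   nvinfer1::INetworkDefinition& network,
                                   nvinfer1::IBuilderConfig& config,
                                   bool standalone)
{
    config.setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config.setDLACore(0);
    config.setFlag(nvinfer1::BuilderFlag::kFP16); // DLA needs FP16 or INT8

    if (standalone)
    {
        // Standalone mode: the serialized result is a DLA loadable for
        // cuDLA/NvMedia, not a TensorRT engine. It requires kDIRECT_IO,
        // and every I/O tensor must use a DLA-supported format, or the
        // "format not supported by NvMedia" assertion fires at build time.
        config.setEngineCapability(nvinfer1::EngineCapability::kDLA_STANDALONE);
        config.setFlag(nvinfer1::BuilderFlag::kDIRECT_IO);
    }
    // Without kDLA_STANDALONE, TensorRT builds a regular engine that
    // dispatches supported layers to the DLA and is deserialized and run
    // through the normal TensorRT runtime.
    return builder.buildSerializedNetwork(network, config);
}
```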

For more details, please check the document below:

Thanks.

Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.

Also check out the DLA GitHub page for samples and resources or to report issues: recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications.

We have a FAQ page that addresses some common questions that we see developers run into: Deep-Learning-Accelerator-SW/FAQ