Use DLA

Hi,

I want to execute my custom deep learning model with DLA and have some questions.

  1. There are two NVDLA engines in Xavier. Can I use multiple NVDLAs for one trained model, the way multiple GPUs can be used?
    If so, how?

  2. Can I use only the NVDLA for inferencing, without the GPU?
    If so, how?

Thank you

Hi jungminash, yes, you can run the same model in parallel on both DLA engines. It would run as two independent instances of the same model (i.e. the layers within the model would not be split between the DLAs; each DLA engine would run the model independently).

To set the DLA engine that the model runs on, call IRuntime::setDLACore() with your desired DLA engine before you deserialize your model. Note that the TensorRT engine should originally have been built with IBuilder::setDefaultDeviceType(nvinfer1::DeviceType::kDLA) set.
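As a minimal sketch of that flow (assuming the TensorRT 5/6-era C++ API available on Xavier at the time; `engineBlob`, `engineSize`, and the surrounding threading/stream handling are placeholders you would supply), the same serialized engine can be deserialized once per DLA core so each instance runs independently:

```cpp
#include "NvInfer.h"

// Sketch only: load the same serialized DLA engine on both DLA cores.
// engineBlob/engineSize are assumed to hold an engine that was built with
// kDLA as the default device type.
void loadOnBothDlaCores(nvinfer1::ILogger& logger, const void* engineBlob, size_t engineSize)
{
    for (int dlaCore = 0; dlaCore < 2; ++dlaCore)   // Xavier exposes DLA cores 0 and 1
    {
        nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
        runtime->setDLACore(dlaCore);               // must be set before deserialization
        nvinfer1::ICudaEngine* engine =
            runtime->deserializeCudaEngine(engineBlob, engineSize, nullptr);
        nvinfer1::IExecutionContext* context = engine->createExecutionContext();
        // ... enqueue inference on this context from its own thread/stream ...
    }
}
```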

If all the layers in your model are supported on DLA, you can disable GPU fallback when building your TensorRT engine by calling IBuilder::allowGPUFallback(false). Otherwise, if some layers in your model aren’t supported on DLA, the TensorRT engine will fail to build unless GPU fallback is enabled with IBuilder::allowGPUFallback(true).
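For reference, a build-time sketch using the same IBuilder calls mentioned above (again assuming the TensorRT 5/6-era API; network construction, batch size, and workspace size are illustrative assumptions, not required values):

```cpp
#include "NvInfer.h"

// Sketch only: build an engine targeting DLA, with GPU fallback as a parameter.
nvinfer1::ICudaEngine* buildDlaEngine(nvinfer1::IBuilder* builder,
                                      nvinfer1::INetworkDefinition* network,
                                      bool gpuFallback)
{
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);          // 256 MiB, example value only
    builder->setFp16Mode(true);                     // DLA runs in FP16 (or INT8)
    builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    builder->allowGPUFallback(gpuFallback);         // true if some layers must fall back to GPU
    return builder->buildCudaEngine(*network);
}
```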

Thank you,

The answer was absolutely helpful.