Empowering the DLA in Xavier with DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hi, I have come to know that the AGX Xavier has multiple DLA engines built into the hardware. I wonder if they are of any help when running the DeepStream SDK? Is there any documentation on such an integration, or any plans for a future release?


DeepStream does support DLA inference.
You can set the configuration file like this.

Ex. config_infer_primary.txt


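The relevant keys sit in the `[property]` group of the nvinfer config. A minimal sketch (only the DLA-related keys are shown; the rest of the file is omitted):

```ini
[property]
# Offload this inference engine to a DLA core instead of the GPU
enable-dla=1
# Xavier has two DLA cores; valid values are 0 and 1
use-dla-core=0
```

`enable-dla` and `use-dla-core` are the documented Gst-nvinfer configuration properties; layers that the DLA cannot run fall back to the GPU automatically when the engine is built.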


@AastaLLL can you provide us all some info on when it's good to set this parameter to use one of the DLA engines?

For example, if I have a pipeline with a PGIE and an SGIE, should we put the PGIE on the GPU and the SGIE on the DLA, or vice versa? Is running a model on the DLA faster?

Or if I am running multiple pipelines in parallel (i.e. the NVIDIA cloud-native demo), how do I choose which model runs on the GPU or the DLA?
From what I've read, the DLA is best suited to convolution-type operations, so it should suit any of the typical DeepStream models, I presume?

Thanks @AastaLLL, clearly understandable. However, if the DLA is purpose-designed for accelerating convolution layers (if I'm not wrong), what is the difference from, and the purpose of, the 7-Way VLIW Vision Processor?


The placement depends on the use case.
It's recommended to try it on your model directly.

The goal of the DLA is to offload the GPU, so DLA performance won't be faster than the GPU.
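One lightweight way to follow that advice is to script the A/B comparison: flip the DLA keys in each nvinfer config, rebuild the engines, and measure your pipeline both ways. The helper below is a hypothetical sketch using Python's standard-library `configparser`; `enable-dla` and `use-dla-core` are the real nvinfer keys, but the function name and workflow are illustrative.

```python
import configparser
import io

def set_dla(config_text, enable, core=0):
    """Toggle DLA offload in a DeepStream nvinfer config (INI format).

    config_text: contents of e.g. config_infer_primary.txt
    enable:      True to run this engine on a DLA core, False for GPU
    core:        which DLA core to use (Xavier has cores 0 and 1)
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    cfg["property"]["enable-dla"] = "1" if enable else "0"
    if enable:
        cfg["property"]["use-dla-core"] = str(core)
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

# Example: put the PGIE on DLA core 0 and leave the SGIE on the GPU,
# then time the pipeline; swap the assignment and time it again.
pgie_text = "[property]\nnetwork-mode=1\n"
print(set_dla(pgie_text, enable=True, core=0))
```

Running each placement (PGIE on DLA vs. SGIE on DLA) against your own models is the only reliable way to pick, since layer support and batch sizes differ per network.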


The '7-Way VLIW Vision Processor' refers to our PVA hardware, which is used for vision tasks.

To launch the PVA, you will need to use our VPI SDK.
Here is the VPI document for your reference: https://docs.nvidia.com/vpi/index.html


For those who don't know, VPI is already on your board with JetPack 4.4; it's under:


You can run the samples on all Jetson devices, not just Xavier.

Does DLA support exist in the Docker DeepStream implementation?

You can run inference on the DLA within Docker.
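For reference, a typical invocation on Jetson would look something like the following. The image tag is an assumption here; check NGC for the current `deepstream-l4t` tag that matches your JetPack release.

```shell
# Run the DeepStream L4T container; --runtime nvidia exposes the GPU and
# the DLA cores to the container, so enable-dla works as on the host.
# (image tag below is illustrative -- pick the current one from NGC)
docker run -it --rm --net=host --runtime nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples
```

Inside the container, the same `enable-dla=1` / `use-dla-core=0` config settings apply unchanged.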