Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
Hi, I have come to know that the AGX Xavier has multiple DLA engines built into the hardware. I wonder whether they are of any help when running the DeepStream SDK? Is there any documentation on such integration, or any plans for it in a future release?
@AastaLLL can you provide us all some info on when it's good to set this parameter to use one of the DLA engines?
For example, if I have a pipeline with a PGIE and an SGIE, should we put the PGIE on the GPU and the SGIE on the DLA, or vice versa? Is running a model on the DLA faster?
Or if I am running multiple pipelines in parallel (e.g. the NVIDIA cloud-native demo), how do I choose which model runs on the GPU and which on a DLA?
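For context, this is roughly how I understand the per-model placement to work: each nvinfer instance (PGIE or SGIE) has its own config file, and the DLA selection is made there rather than in the pipeline code. A minimal sketch of what I'm trying, assuming the `enable-dla` / `use-dla-core` keys are the right ones (please correct me if they aren't):

```
# PGIE config (config_infer_primary.txt in my setup) - offload this engine to DLA core 0
[property]
gpu-id=0
enable-dla=1
use-dla-core=0
# ... model-file / engine-file / batch-size etc. as usual

# SGIE config - leave this engine on the GPU by simply not enabling DLA
[property]
gpu-id=0
enable-dla=0
```

So to spread work across the two DLA cores and the GPU, I'd presumably point different nvinfer instances at use-dla-core=0, use-dla-core=1, or leave DLA disabled. Is that the intended approach?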
From what I’ve read, the DLA is best suited to convolution-type operations, so I presume it should suit any of the typical DeepStream models?
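To sanity-check whether a given model's layers are actually DLA-friendly, I've also been building engines with trtexec and watching which layers fall back to the GPU. Something along these lines (the model path is just a placeholder from my setup, and my understanding is that DLA needs FP16 or INT8 precision):

```
# Build/run the model on DLA core 0, letting unsupported layers fall back to the GPU
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx \
    --useDLACore=0 --allowGPUFallback --fp16
```

The build log warns about any layers that can't run on the DLA and fall back to the GPU, which I assume is a reasonable proxy for whether a typical detector will stay mostly on the DLA.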
Thanks @AastaLLL, that's clearly understandable. However, if the DLA is purpose-built for accelerating convolution layers (if I'm not wrong), what is the difference and purpose of the 7-Way VLIW Vision Processor?