Run one shared model on two DLAs at the same time

• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.x

I want to run PeopleNet on two DLAs at the same time, and I used DeepStream for this purpose. I converted the .etlt model to an engine file with tlt-converter, and the generated model can run on the two DLAs if I load it twice, once per DLA. What I would like to know is: is it possible to load the model only once and use it for both DLAs?

Hi,

You will need to define two configuration files, one for each DLA index.
But you can feed both of them into the same DeepStream pipeline to run them in the same process.
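
For example, the two nvinfer configuration files can be identical except for the DLA core index. A minimal sketch (file names and the engine path are just placeholders; keep the rest of your existing PeopleNet settings the same):

    # config_infer_dla0.txt - nvinfer instance pinned to DLA core 0
    [property]
    enable-dla=1
    use-dla-core=0
    model-engine-file=peoplenet.engine

    # config_infer_dla1.txt - nvinfer instance pinned to DLA core 1
    [property]
    enable-dla=1
    use-dla-core=1
    model-engine-file=peoplenet.engine

Each file is then referenced by its own nvinfer element (via its config-file-path property) inside the same pipeline, so both DLA cores run in a single process.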

Please find more details in the document below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Performance.html#running-applications-using-dla

Thanks.

@AastaLLL, thanks.
I ran the GPU and both DLAs at the same time, and the EMS value went up to 105%. Why 105% and not 100%? Is the extra 5% due to overclocking?

Hi,

Sorry for the late update.
Do you mean the EMC value in the tegrastats output?
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/AppendixTegraStats.html#wwpID0E0HB0HA

If yes, could you share the way to reproduce this?
We tried to duplicate this with the default primary model but only got at most 26% usage.

RAM 3731/7773MB (lfb 52x4MB) SWAP 981/3886MB (cached 46MB) CPU [25%@2035,25%@2035,22%@2035,24%@2035,36%@2035,26%@2035] EMC_FREQ 26%@1331 GR3D_FREQ 18%@905 NVDEC 550 NVDEC1 550 VIC_FREQ 99%@460 APE 75 MTS fg 0% bg 11% AO@36.5C GPU@36C Tdiode@39C PMIC@100C AUX@36C CPU@36.5C thermal@36.15C Tboard@34C GPU 1980/862 CPU 1827/611 SOC 1979/1271 CV 913/72 VDDRQ 1065/302 SYS5V 2560/2078
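
For reference, the log above is from tegrastats; a run like the following (interval flag as described in the tegrastats documentation linked above) should show the same counters on your side:

    sudo tegrastats --interval 1000

The EMC_FREQ entry in that line is the percentage of EMC memory bandwidth being used, relative to the frequency it is currently running at.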

Thanks.