• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
Hi,
Q1 - How can I use both DLAs in a DeepStream pipeline? I mean, I want to use the two DLAs in a GStreamer pipeline with nvinfer for multiple streams: DLA0 for the first 5 streams and DLA1 for the second 5 streams.
Q2 - Is it possible to show DLA activity in the terminal or anywhere else? jtop shows whether the HW engines are running; I want to see the same for the DLAs. jtop also shows a GPU utilization percentage: is that for the GPU only, or does it include the DLAs as well?
Q3 - How can I set up and convert the models to TensorRT engines for DLA? Is it possible to convert a model for DLA with tlt-converter, or with trtexec?
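For reference, trtexec can build a DLA-targeted engine directly. A minimal sketch, assuming an ONNX model (the file names here are placeholders, not from this thread):

```shell
# Build a TensorRT engine targeted at DLA core 0 (placeholder paths).
# Layers the DLA cannot run fall back to the GPU via --allowGPUFallback.
# DLA requires FP16 or INT8 precision, hence --fp16.
trtexec --onnx=model.onnx \
        --useDLACore=0 \
        --allowGPUFallback \
        --fp16 \
        --saveEngine=model_dla.engine
```

Since both DLA cores share the same architecture, the resulting engine file can be deserialized for either core.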
Both DLAs can share the same engine file, since they have the same hardware architecture.
But you will need to load it separately to create a runtime engine for each.
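For the 5+5 stream split in Q1, one approach (a sketch, not an official recipe) is two nvinfer instances, each pointing at its own config file, identical except for the DLA core. A hypothetical config fragment using the real Gst-nvinfer keys `enable-dla` and `use-dla-core`:

```
# config_infer_dla0.txt (hypothetical filename)
[property]
enable-dla=1
use-dla-core=0
model-engine-file=model_dla.engine

# config_infer_dla1.txt is identical except:
# use-dla-core=1
```

Each nvinfer element then deserializes the shared engine file onto its own DLA core.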
@AastaLLL, Hi,
Q1 - If we can use the engine generated for DLA0 on DLA1 as well, why do we set a specific DLA in the config file? We can simply set useDLACore=1, right?
Q2 - Is it possible to run one model on both DLAs at the same time? I mean, can DeepStream automatically place some layers of the model on DLA0, some on DLA1, and the rest on the GPU?
I want to know whether one model is only allowed to run on one DLA. If the model is complex and large, will DeepStream use both DLAs for it?
1. TensorRT uses the same DLA configuration to generate the engine and to create the runtime.
The engine file is the same, since DLA0 and DLA1 have the same hardware architecture.
But you will need this configuration to decide which DLA the runtime engine runs on.
2. Unfortunately, no.
A model cannot be split across the two DLAs. If the model is too complicated to fit on one DLA, the remaining operations will fall back to the GPU.
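Putting the two answers together, "load it separately" in a DeepStream pipeline just means two independent nvinfer branches, one per DLA core. A hedged gst-launch sketch with placeholder sources and hypothetical config file names (a real 5+5 split would attach five sources to each mux):

```shell
# Two independent inference branches, one per DLA core.
# stream_a.h264 / stream_b.h264 and the config filenames are placeholders.
gst-launch-1.0 \
  filesrc location=stream_a.h264 ! h264parse ! nvv4l2decoder ! mux0.sink_0 \
  nvstreammux name=mux0 batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_dla0.txt ! fakesink \
  filesrc location=stream_b.h264 ! h264parse ! nvv4l2decoder ! mux1.sink_0 \
  nvstreammux name=mux1 batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_dla1.txt ! fakesink
```

Both branches can reference the same engine file; only `use-dla-core` in the per-branch config decides which DLA executes it.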