• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
Q1- How can I use both DLAs in a DeepStream pipeline? I mean, I want to use the two DLAs in a GStreamer pipeline with nvinfer for multiple streams: DLA0 for the first 5 streams and DLA1 for the second 5 streams.
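For example, is something like this the right approach? Two nvinfer config files that differ only in the DLA core (a sketch based on the `enable-dla` / `use-dla-core` keys from the nvinfer documentation; the file names are mine):

```
# config_infer_dla0.txt - used by the nvinfer instance for streams 0-4
[property]
enable-dla=1
use-dla-core=0

# config_infer_dla1.txt - used by the nvinfer instance for streams 5-9
[property]
enable-dla=1
use-dla-core=1
```

Then I would split the sources into two branches, each with its own nvinfer element pointing at one of these config files.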
Q2- Is it possible to see whether the DLAs are running, in the terminal or anywhere else? jtop shows whether the HW encoder/decoder is running; I want to see the same for the DLAs. jtop also shows a GPU utilization percentage: does that include only the GPU, or both the GPU and the DLAs?
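For example, is there a command like the following that reports DLA activity? (I am not sure tegrastats shows DLA load on JetPack 4.4, and the sysfs path below is a guess that may differ per device):

```
sudo tegrastats    # does this report DLA utilization anywhere in its output?

# or maybe the DLA power state can be read directly from sysfs (path is a guess):
cat /sys/devices/platform/host1x/15880000.nvdla0/power/runtime_status
```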
Q3- How can I configure and enable the models to be converted to TensorRT engines for the DLA? Is it possible to convert a model for the DLA with tlt-converter, or with trtexec?
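For trtexec, is this the right kind of command? (a sketch using the DLA flags from the TensorRT documentation; the model and engine file names are hypothetical):

```
trtexec --onnx=model.onnx \
        --useDLACore=0 \
        --allowGPUFallback \
        --fp16 \
        --saveEngine=model_dla0.engine
```

I assume `--allowGPUFallback` is needed in case some layers are not supported on the DLA, but I am not sure whether tlt-converter has an equivalent option.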