Excuse me, in the new DeepStream, I added DLA support in the nvinfer configuration file, but it does not seem to take effect.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): 6.1
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1
• TensorRT Version: 10.3
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0
• Issue Type (questions, new requirements, bugs)
This machine is an Orin AGX 64G.
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Excuse me, in the new DeepStream, I added DLA support in the nvinfer configuration file, but it does not seem to take effect.

How can I run models on the DLA?
file.zip (852.8 KB)

I suspect the model is not compatible with the DLA.

This model is GPU-based.
I don't know how to convert models for the DLA within DeepStream.
Converting the model to a DLA engine with trtexec also failed.

It looks like there is nothing wrong with your configuration file.
If you modify dstest1_pgie_config.txt as follows, does it work properly?

./deepstream-test1-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 
diff --git a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
index 146c732..f7c9d7f 100644
--- a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
+++ b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
@@ -51,7 +51,7 @@
 gpu-id=0
 net-scale-factor=0.00392156862745098
 onnx-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx
-model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
+model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_dla0_int8.engine
 labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
 int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
 batch-size=1
@@ -62,6 +62,8 @@ gie-unique-id=1
 #scaling-filter=0
 #scaling-compute-hw=0
 cluster-mode=2
+enable-dla=1
+use-dla-core=0
 
 [class-attrs-all]
 pre-cluster-threshold=0.2
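For reference, the engine-file names in the diff above follow the pattern `<model-file>_b<batch-size>_<device><core-id>_<precision>.engine`, which is why the change swaps `gpu0` for `dla0`. A minimal sketch of that naming (the helper function is mine for illustration, not a DeepStream API):

```python
def engine_name(model: str, batch: int, device: str, core: int, precision: str) -> str:
    # Mirrors the pattern visible in the engine file names above:
    # <model-file>_b<batch-size>_<device><core-id>_<precision>.engine
    return f"{model}_b{batch}_{device}{core}_{precision}.engine"

print(engine_name("resnet18_trafficcamnet_pruned.onnx", 1, "dla", 0, "int8"))
# → resnet18_trafficcamnet_pruned.onnx_b1_dla0_int8.engine
```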

DLA core 0 is already working properly

The model was still converted for the GPU, not the DLA, thank you.
I had already added the settings from the configuration file above:
enable-dla=1
use-dla-core=0

First question: with the trtexec command, can we set the batch size?
Second question: can we build a DLA engine within DeepStream?

The first question has been solved; there is no need to look at it again.

Only the second question remains.

The following command line generates an INT8, batch-size=8 engine file for DLA0.

You need to specify the INT8 calibration file.

DLA requires all profiles to have the same min, max, and opt values. Otherwise, the generated engine file will only run on the GPU.

/usr/src/tensorrt/bin/trtexec \
  --minShapes="input_1:0":8x3x544x960 \
  --maxShapes="input_1:0":8x3x544x960 \
  --optShapes="input_1:0":8x3x544x960 \
  --onnx=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx \
  --useDLACore=0 --int8 \
  --calib=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin \
  --allowGPUFallback \
  --saveEngine=resnet18_trafficcamnet_pruned.onnx_b8_dla0_int8.engine
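The shape constraint can be sanity-checked before building. A small sketch (my own helper, not part of trtexec or DeepStream) that verifies a profile meets the DLA requirement of identical min, opt, and max shapes:

```python
def dla_profile_ok(min_shape, opt_shape, max_shape):
    # DLA requires min == opt == max for every optimization profile;
    # otherwise TensorRT falls back to running the engine on the GPU.
    return tuple(min_shape) == tuple(opt_shape) == tuple(max_shape)

# Shapes from the trtexec command above: all three are 8x3x544x960.
print(dla_profile_ok((8, 3, 544, 960), (8, 3, 544, 960), (8, 3, 544, 960)))  # True
# A dynamic-batch profile (min batch 1, max batch 8) would not qualify:
print(dla_profile_ok((1, 3, 544, 960), (8, 3, 544, 960), (8, 3, 544, 960)))  # False
```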

Please run test1 with the configuration file above; DLA0 can then be used normally.

OK, I'll try it.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.