Deepstream-test1 does not work with DLA

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): questions

I modified deepstream-test1 to use the DLA, but jtop shows that the DLA is not being used.

This is the pgie configuration:
property:
  gpu-id: 0
  net-scale-factor: 0.00392156862745098
  tlt-model-key: tlt_encode
  tlt-encoded-model: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
  model-engine-file: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
  labelfile-path: ../../../../samples/models/Primary_Detector/labels.txt
  int8-calib-file: ../../../../samples/models/Primary_Detector/cal_trt.bin
  force-implicit-batch-dim: 1
  batch-size: 1
  network-mode: 1
  num-detected-classes: 4
  interval: 0
  gie-unique-id: 1
  uff-input-order: 0
  uff-input-blob-name: input_1
  output-blob-names: output_cov/Sigmoid;output_bbox/BiasAdd
  #scaling-filter=0
  #scaling-compute-hw=0
  enable-dla: 1
  use-dla-core: 0
  cluster-mode: 2
  infer-dims: 3;544;960

class-attrs-all:
  pre-cluster-threshold: 0.2
  topk: 20
  nms-iou-threshold: 0.5

This is my running log:
Added elements to bin
Using file: dstest1_config.yml
Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:11.726063109 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:12.099193520 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:12.110004854 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-wpy/dstest1_pgie_config.yml sucessfully
Running…
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
nvstreammux: Successfully handled EOS for source_id=0
End of stream
Returned, stopping playback
Deleting pipeline
[1] + Done "/usr/bin/gdb" --interpreter=mi --tty=${DbgTerm} 0<"/tmp/Microsoft-MIEngine-In-kl4onbfa.ifi" 1>"/tmp/Microsoft-MIEngine-Out-bcnb1on5.31a"

/* we set the input filename to the source element */
g_object_set (G_OBJECT (source), "location", argv[1], NULL);

/* render the on-screen display in CPU mode (process-mode 0) */
g_object_set (G_OBJECT (nvosd), "process-mode", 0, NULL);

I tried this and it didn't work; could you see whether it works on DeepStream 6.4?

Sorry, I have deleted the wrong guide

Some tips:

1. This calibration file is only for the GPU, not the DLA; using it with the DLA will cause some unexpected problems.

Try changing the configuration file as follows:

diff --git a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
index cc04b7f..2daab98 100644
--- a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
+++ b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
@@ -52,12 +52,13 @@ gpu-id=0
 net-scale-factor=0.00392156862745098
 tlt-model-key=tlt_encode
 tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
-model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
+#model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
+model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_dla0_fp16.engine
 labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
-int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
+#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
 force-implicit-batch-dim=1
 batch-size=1
-network-mode=1
+network-mode=2
 num-detected-classes=4
 interval=0
 gie-unique-id=1
@@ -68,6 +69,8 @@ output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
 #scaling-compute-hw=0
 cluster-mode=2
 infer-dims=3;544;960
+enable-dla=1
+use-dla-core=0
 
 [class-attrs-all]
 pre-cluster-threshold=0.2

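Since the config posted at the top of this thread is the YAML variant (the log shows dstest1_pgie_config.yml being loaded), the equivalent changes there would look roughly like the sketch below. This assumes the YAML file takes the same keys as the .txt config; the DLA engine filename is just the name nvinfer typically generates, and you can also drop model-engine-file entirely and let nvinfer build the engine on the first run. With the prebuilt GPU INT8 engine still listed under model-engine-file, nvinfer deserializes and runs that GPU engine, which is likely why jtop shows no DLA activity.

property:
  # use (or let nvinfer generate) a DLA FP16 engine instead of the prebuilt GPU INT8 one
  #model-engine-file: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
  model-engine-file: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_dla0_fp16.engine
  # int8-calib-file removed: cal_trt.bin was calibrated for the GPU, not the DLA
  #int8-calib-file: ../../../../samples/models/Primary_Detector/cal_trt.bin
  network-mode: 2   # 0=FP32, 1=INT8, 2=FP16
  enable-dla: 1
  use-dla-core: 0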
If another sample wants to use the DLA normally, it should also be changed to FP16, with network-mode changed accordingly, right?

Yes, this is determined by the DLA hardware
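In other words, the DLA supports FP16 and INT8 but not FP32, and INT8 on the DLA needs a calibration cache generated for the DLA. As a rough sketch, the same keys from the diff above should apply to any of the other sample configs (.txt format shown):

[property]
# run this nvinfer instance on a DLA core
enable-dla=1
use-dla-core=0      # AGX Orin has two DLA cores: 0 or 1
network-mode=2      # FP16; use 1 (INT8) only with a DLA-generated calibration cache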

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.