Please provide complete information as applicable to your setup.
• **Hardware Platform (Jetson / GPU)**: Jetson AGX Orin
• **DeepStream Version**: 6.4
• **JetPack Version (valid for Jetson only)**: 6.0
• **TensorRT Version**: 8.6.2
• **NVIDIA GPU Driver Version (valid for GPU only)**:
• **Issue Type (questions, new requirements, bugs)**: questions
I modified deepstream-test1 to use the DLA, but jtop shows that the DLA is not being used.

This is my pgie configuration:
```yaml
property:
  gpu-id: 0
  net-scale-factor: 0.00392156862745098
  tlt-model-key: tlt_encode
  tlt-encoded-model: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
  model-engine-file: ../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
  labelfile-path: ../../../../samples/models/Primary_Detector/labels.txt
  int8-calib-file: ../../../../samples/models/Primary_Detector/cal_trt.bin
  force-implicit-batch-dim: 1
  batch-size: 1
  network-mode: 1
  num-detected-classes: 4
  interval: 0
  gie-unique-id: 1
  uff-input-order: 0
  uff-input-blob-name: input_1
  output-blob-names: output_cov/Sigmoid;output_bbox/BiasAdd
  #scaling-filter: 0
  #scaling-compute-hw: 0
  enable-dla: 1
  use-dla-core: 0
  cluster-mode: 2
  infer-dims: 3;544;960

class-attrs-all:
  pre-cluster-threshold: 0.2
  topk: 20
  nms-iou-threshold: 0.5
```
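For reference, this is a sketch of how I understand a DLA-specific engine could be built offline with trtexec, assuming the model were available as ONNX. The file name `resnet18_trafficcamnet.onnx` is hypothetical; the actual model here is an encrypted `.etlt`, which would normally be converted with tao-converter rather than trtexec:

```shell
# Sketch only: build an INT8 engine pinned to DLA core 0 with trtexec.
# resnet18_trafficcamnet.onnx is a hypothetical ONNX export of the model;
# the .etlt in the config above cannot be fed to trtexec directly.
/usr/src/tensorrt/bin/trtexec \
  --onnx=resnet18_trafficcamnet.onnx \
  --int8 \
  --calib=cal_trt.bin \
  --useDLACore=0 \
  --allowGPUFallback \
  --saveEngine=resnet18_trafficcamnet_b1_dla0_int8.engine
```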
This is the log from my run:
```
Added elements to bin
Using file: dstest1_config.yml
Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:11.726063109 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:12.099193520 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:12.110004854 231035 0xaaaaabe65670 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-wpy/dstest1_pgie_config.yml sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
nvstreammux: Successfully handled EOS for source_id=0
End of stream
Returned, stopping playback
Deleting pipeline
[1] + Done "/usr/bin/gdb" --interpreter=mi --tty=${DbgTerm} 0<"/tmp/Microsoft-MIEngine-In-kl4onbfa.ifi" 1>"/tmp/Microsoft-MIEngine-Out-bcnb1on5.31a"
```