Please provide complete information as applicable to your setup.
I am following the documentation page "Gst-nvtracker — DeepStream 6.2 Release documentation (nvidia.com)".
I run the app with the command: deepstream-app -c deepstream_peoplenet_test.txt
Here is deepstream_peoplenet_test.txt:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=3
#uri=file://../../streams/sample_1080p_h264.mp4
type=4
uri=rtsp://admin:hk123456@192.168.1.71/h264/ch1/sub/av_stream
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=1
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
process-mode=1
display-text=1
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=33000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
#config-file=config_infer_primary.txt
config-file=config_infer_primary_PeopleNet.txt
[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvSORT.yml
# ll-config-file=config_tracker_NvDCF_perf.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
ll-config-file=config_tracker_NvDeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
Here is config_infer_primary_PeopleNet.txt:
[property]
## Model-specific params. The paths will differ if the user sets up in a different directory.
int8-calib-file=../../models/peoplenet/resnet34_peoplenet_int8.txt
labelfile-path=../../models/peoplenet/labels.txt
#tlt-encoded-model=../../models/peoplenet/resnet34_peoplenet_int8.etlt
model-engine-file=../../models/peoplenet/resnet34_peoplenet_int8.engine
gpu-id=0
net-scale-factor=0.0039215697906911373
input-dims=3;544;960;0
uff-input-blob-name=input_1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=3
maintain-aspect-ratio=1
[class-attrs-all]
pre-cluster-threshold=0.1696
nms-iou-threshold=0.5196
minBoxes=2
dbscan-min-score=1.4226
eps=0.2280
detected-min-w=20
detected-min-h=20
The relevant lines in config_infer_primary_PeopleNet.txt are:
#tlt-encoded-model=../../models/peoplenet/resnet34_peoplenet_int8.etlt
model-engine-file=../../models/peoplenet/resnet34_peoplenet_int8.engine
The resnet34_peoplenet_int8.engine file was generated with the following tao-converter command:
tao-converter resnet34_peoplenet_int8.etlt -k tlt_encode -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -c resnet34_peoplenet_int8.txt -e resnet34_peoplenet_int8.engine -m 8 -t int8
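For context, this is how the paths referenced by the infer config can be checked from a terminal (a minimal sketch only; it assumes the relative paths in config_infer_primary_PeopleNet.txt are resolved from the directory containing that file, and /path/to/configs is a placeholder for that directory):
cd /path/to/configs   # placeholder: the directory that contains config_infer_primary_PeopleNet.txt
ls -l ../../models/peoplenet/resnet34_peoplenet_int8.engine   # path used by model-engine-file
ls -l ../../models/peoplenet/labels.txt                       # path used by labelfile-path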
When I run the app, I get the following errors:
nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:04.761926927 7672 0xaaab023c7d20 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:04.933544044 7672 0xaaab023c7d20 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:04.933599405 7672 0xaaab023c7d20 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
Why does this happen? Please help, thank you.
My setup:
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1