DeepStream error

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• TensorRT Version 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) 545.23.08

I have created my own config file. When I run deepstream-app -c test_by_varun2.txt, I get the following output:

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b4_gpu0_int8.engine open error
0:00:06.207236878 3987 0x5597f2af64c0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b4_gpu0_int8.engine failed
0:00:06.411229541 3987 0x5597f2af64c0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b4_gpu0_int8.engine failed, try rebuild
0:00:06.411261008 3987 0x5597f2af64c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.4/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:11.685071568 3987 0x5597f2af64c0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

This is my config file, test_by_varun2.txt:

#This code is made by Varun
#Get performance measurements and metadata outputs
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=3
gie-kitti-output-dir=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/meta/gie/
kitti-track-output-dir=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/meta/track/
reid-track-output-dir=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/meta/trackout/
terminated-track-output-dir=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/meta/termtrack/

#the output display type
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

#The path for the video file
[source0]
enable=1
type=3
uri=file://…/stream/face.mp4
gpu-id=0
num-sources=1
cudadec-memtype=0

#Basic sink
[sink0]
enable=1
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0
#Encode and file saver
[sink1]
enable=0
type=3
container=1
codec=1
enc-type=0
sync=0
bitrate=2000000
profile=0
output-file=out.mp4
source-id=0
#For RTSP streaming
[sink2]
enable=0
type=4
codec=1
enc-type=0
sync=0
bitrate=400000
profile=0
rtsp-port=8554
udp-port=5400

#For overlaying text and rectangles on the video frame
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
buffer-pool-size=1
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
attach-sys-ts-as-ntp=1

#The pre-processing part is being skipped

#Config for the primary GIE section: this is the main model that will be run
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b4_gpu0_int8.engine
batch-size=1
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
nvbuf-memory-type=0
config-file=config_infer_primary_gie2.txt

[tracker]
enable=1
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

This is my primary infer config file, config_infer_primary_gie2.txt:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b30_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.4/samples/configs/test_by_varun/model/Peoplenet/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.4/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.txt
batch-size=30
process-mode=1
model-color-format=0
cluster-mode=2
infer-dims=3;544;960
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so

cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0
infer-dims=3;544;960

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2

[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

The model I used is PeopleNet pruned_quantized_v2.3.2.

I am trying to build a people-tracking system that assigns each person a unique ID.

Could you check this property in your config file?
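
Presumably this refers to model-engine-file in [primary-gie]. For reference, a minimal sketch of what that entry normally looks like with the sample layout; the path below is illustrative, reusing the directory and the _b4_gpu0_int8 suffix from the log above. The value currently in the config looks like two paths pasted together.

#Sketch only: model-engine-file should be a single path to the engine DeepStream generates
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine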

Sorry, there was a mistake while pasting the config. This is the full config for the primary GIE:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b4_gpu0_int8.engine_b30_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.4/samples/configs/test_by_varun/model/Peoplenet/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.4/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.txt
batch-size=30
process-mode=1
model-color-format=0
cluster-mode=2
infer-dims=3;544;960

#0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so

#1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0
infer-dims=3;544;960

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2

#Per-class configurations

[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

Could you double-check the model file? It should end with .etlt.
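
For illustration, a minimal sketch of how those two [property] entries might look once the model file itself ends in .etlt. The filenames are assumptions based on the paths above; DeepStream derives the _bN_gpuX_precision suffix of the generated engine from your batch-size, GPU id and precision.

#Sketch only: tlt-encoded-model points at the .etlt itself; model-engine-file is the engine built from it
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/Peoplenet/resnet34_peoplenet_pruned_int8.etlt_b30_gpu0_int8.engine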

Thanks a lot, it was really a silly mistake.

Also, do ONNX model files need any special configuration? When I tried one I got this error:

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/peoplenet/resnet34_peoplenet_int8.onnx_b4_gpu0_int8.engine open error
0:00:05.965677886 411 0x55e99f9c8cc0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/peoplenet/resnet34_peoplenet_int8.onnx_b4_gpu0_int8.engine failed
0:00:06.166126183 411 0x55e99f9c8cc0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/peoplenet/resnet34_peoplenet_int8.onnx_b4_gpu0_int8.engine failed, try rebuild
0:00:06.166141813 411 0x55e99f9c8cc0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:359 Failed to build network, error in model parsing.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:11.506854354 411 0x55e99f9c8cc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

The model is resnet34_peoplenet_int8.onnx

You can refer to config_infer_primary_yoloV7.txt to learn how to set up the config file with an ONNX model.
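
The UffParser error in the log suggests the .onnx file was still being passed through the TLT keys (tlt-encoded-model), so nvinfer tried to decode it as an encoded TLT model. A minimal sketch of the ONNX-related lines, reusing the filenames from your log: load the model through onnx-file and drop the TLT/UFF-specific keys (tlt-encoded-model, tlt-model-key, uff-input-order, uff-input-blob-name, force-implicit-batch-dim).

#Sketch only: load the model through onnx-file instead of the TLT/UFF keys
onnx-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/peoplenet/resnet34_peoplenet_int8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/configs/test_by_varun/model/peoplenet/resnet34_peoplenet_int8.onnx_b4_gpu0_int8.engine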
