Error while running deepstream custom application!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) L4 24 GB
• DeepStream Version 6.4
• TensorRT Version 8.6
• NVIDIA GPU Driver Version (valid for GPU only) 535.104.05
• Issue: crash with the following backtrace:
0x00007fffc904cfef in nvdsinfer::DetectPostprocessor::clusterAndFillDetectionOutputNMS(NvDsInferDetectionOutput&) () from /opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
I am trying to build a custom DeepStream application for face recognition, but I have been facing this issue for a long time.

This is my DeepStream app config file:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:admin123@122.176.49.124:8112/cam/realmonitor?channel=1&subtype=0
#[source1]
#enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=4
#uri=rtsp://admin:admin123@103.117.15.75:8113/cam/realmonitor?channel=1&subtype=0

#[source2]
#enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=4
#uri=rtsp://admin:admin123@103.117.15.75:8114/cam/realmonitor?channel=1&subtype=0

#[source3]
#enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=4
#uri=rtsp://admin:admin123@103.117.15.75:8115/cam/realmonitor?channel=1&subtype=0

#[source4]
#enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=4
#uri=rtsp://admin:admin123@103.117.15.75:8116/cam/realmonitor?channel=1&subtype=0

[sink0]
enable=0

#[sink1]
#enable=0

#[sink2]
#enable=0

#[sink3]
#enable=0

#[sink4]
#enable=0

[osd]
enable=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
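One thing I am not sure about: my only enabled source is RTSP (type=4), but the muxer is configured as non-live. A sketch of the [streammux] group adjusted for a live RTSP feed (all other values unchanged; this is a guess, not a confirmed fix):

```ini
[streammux]
gpu-id=0
## RTSP cameras are live sources; tell the muxer so it timestamps
## buffers itself instead of expecting PTS from the stream
live-source=1
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
```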

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/root/DeepStream-App/libfrai/insightface_model/models/buffalo_sc/det_500m.trt
batch-size=1
#Required for the secondary detector
gie-unique-id=1
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
gpu-id=0
model-engine-file=/root/DeepStream-App/libfrai/insightface_model/models/buffalo_sc/w600k_mbf.trt
batch-size=1
#Required for the tracker
gie-unique-id=2
operate-on-gie-id=1
config-file=config_infer_secondary.txt

[tracker]
enable=0
#tracker-width=640
#tracker-height=368
#ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-config-file required for IOU only
#ll-config-file=tracker_config.yml
#gpu-id=0
#enable-batch-process=1

[tests]
file-loop=0

[custom-plugin]
enable=1
type=3
custom-lib-path=/root/DeepStream-App/libfrai/custom-plugins/face_matching/matching_algorithm.so

This is my primary GIE config:
[property]
gpu-id=0
#net-scale-factor=0.0039215697906911373
#model-color-format=0
#custom-network-config=yolov4.cfg
#model-file=/root/DeepStream-App/libfrai/insightface_model/models/buffalo_sc/det_500m.onnx
model-engine-file=/root/DeepStream-App/libfrai/insightface_model/models/buffalo_sc/det_500m.trt
parse-bbox-func-name=parse_bounding_boxes
custom-lib-path=/root/DeepStream-App/libfrai/custom-plugins/parsing-bbox/parse_bounding_boxes.so
gie-unique-id=1

And this is my secondary GIE config:
[property]
gpu-id=0
#net-scale-factor=1.0
#model-color-format=0
model-engine-file=/root/DeepStream-App/libfrai/insightface_model/models/buffalo_sc/w600k_mbf.trt
parse-bbox-func-name=process_trt_output
custom-lib-path=/root/DeepStream-App/libfrai/custom-plugins/face_matching/matching.so

gie-unique-id=2

Could you elaborate on your question? Thanks!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.