When running with my custom YOLOv5 model, I'm not getting an error, but DeepStream stops

**• Hardware Platform:** GPU
**• DeepStream Version:** 6.1.1
**• TensorRT Version:** 8.4.1.5
**• NVIDIA GPU Driver Version:** Quadro RTX 8000
**• Issue Type:** Error

YoloV5 infer config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=my_model.cfg
model-file=my_model.wts
model-engine-file=yolo.engine
#int8-calib-file=calib.table
#labelfile-path=labels.txt
batch-size=1
network-mode=0
uff-input-dims=3;640;640;0
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
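For context, a config like the one above is normally passed to the `nvinfer` element via `config-file-path`. A minimal pipeline sketch, assuming DeepStream 6.1.1 is installed, the config is saved as `config_infer_yolov5.txt`, and the RTSP URL is a placeholder (none of these names are from the thread):

```shell
# Minimal DeepStream pipeline sketch (requires a working DeepStream install).
# The config filename and RTSP URL are hypothetical placeholders.
gst-launch-1.0 \
  uridecodebin uri="rtsp://<camera-ip>:554/stream" ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 \
              live-source=1 batched-push-timeout=40000 ! \
  nvinfer config-file-path=config_infer_yolov5.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink sync=0
```

`live-source=1` and `batched-push-timeout` matter for live RTSP input so the muxer pushes batches even when frames arrive irregularly.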

I’m working with an RTSP stream, but DeepStream stops when running with my custom YOLOv5 model. It runs for about 10 minutes and then stops.

How to solve this problem?

Thanks.

Can you share the log from when you run this DeepStream program?

I am working on a different machine, so I can’t provide the full log. The log ends as shown in the picture above.

The log information is not enough. Is there any issue if you use a video file as input? Please refer to the DeepStream YOLOv5 sample: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
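One way to isolate the RTSP source, as suggested above, is to swap the RTSP URI for a local file and check whether the pipeline still stops after about 10 minutes. A sketch, reusing the same hypothetical config file name and a placeholder file path:

```shell
# Same pipeline shape, but with a local file instead of the RTSP stream.
# sample.mp4 and the config filename are hypothetical placeholders.
gst-launch-1.0 \
  uridecodebin uri="file:///path/to/sample.mp4" ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_yolov5.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink sync=0
```

If the file runs cleanly to completion, the stall is more likely on the RTSP side (network drops, camera reconnects) than in the model or the inference config.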

Thanks, the problem was not with the model or DeepStream. I had been restarting DeepStream while it was already running, and that is what caused the problem.

Glad to know you fixed it, thanks for the update! If you need further support, please open a new topic. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.