I failed to run the official YOLO sample with DeepStream 6.2

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU) Tesla T4
• deepstream-app version 6.2.0
• DeepStreamSDK 6.2.0
• CUDA Driver Version: 11.8
• CUDA Runtime Version: 11.8
• TensorRT Version: 8.5
• cuDNN Version: 8.7
• libNVWarp360 Version: 2.0.1d3

My execution steps
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg -q --show-progress
wget https://pjreddie.com/media/files/yolov3.weights -q --show-progress
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo
export CUDA_VER=11.8
make -C nvdsinfer_custom_impl_Yolo
deepstream-app -c deepstream_app_config_yoloV3.txt
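For reference, the same steps as one script (just a sketch of what I ran, plus a sanity check that the custom library was actually built; the .so name matches custom-lib-path in the config below):

#!/bin/bash
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo

# Fetch the YOLOv3 network definition and weights
wget -q --show-progress https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
wget -q --show-progress https://pjreddie.com/media/files/yolov3.weights

# CUDA_VER must match the CUDA version the DeepStream 6.2 container ships with (11.8 here)
export CUDA_VER=11.8
make -C nvdsinfer_custom_impl_Yolo

# Sanity check: the custom bbox parser / engine builder library should now exist
ls -l nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

deepstream-app -c deepstream_app_config_yoloV3.txt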

config_infer_primary_yoloV3.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3.weights
#model-engine-file=yolov3_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt7.0

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0

# 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
#parse-bbox-func-name=NvDsInferParseCustomYoloV3_cuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0
disable-output-host-copy=0

[class-attrs-all]
nms-iou-threshold=0.3
pre-cluster-threshold=0.7
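Side note: with network-mode=1 the engine is built in INT8 from the calibration table above. If that table were missing, my understanding is that the same config can fall back to FP16 by changing only network-mode (a sketch; the commented engine file name is just a hypothetical example of what nvinfer serializes on first run):

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
# hypothetical name; nvinfer builds and saves an engine on first run and prints its path
#model-engine-file=model_b1_gpu0_fp16.engine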

Terminal output
Unknown or legacy key specified 'is-classifier' for group [property]
Unknown or legacy key specified 'disable-output-host-copy' for group [property]
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
App run failed

Additional info: Docker container setup
docker pull nvcr.io/nvidia/deepstream:6.2-triton
docker run --gpus all -it -p 123:22 --name deepstream6.2 -v /root/workspace:/root/workspace -v /usr/src:/usr/src -v /lib/modules:/lib/modules -v /dev:/dev --privileged --cap-add=ALL --pid=host -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:6.2-triton
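Note: the container mounts the X11 socket and passes DISPLAY, but on a headless T4 server there is usually no X server behind it, so an on-screen sink cannot render. If a display does exist on the host, access typically has to be granted before starting the container (a sketch, assuming an X server is actually running on the host):

# On the host, before docker run: allow local clients (including the container) to use the X server
xhost +
# Then start the container as above, with -e DISPLAY=$DISPLAY and the /tmp/.X11-unix mount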

Could you run the CLI with GST_DEBUG=3, e.g. GST_DEBUG=3 deepstream-app -c deepstream_app_config_yoloV3.txt, and attach the log? Thanks
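If the output is long, you can capture it to a file with plain shell redirection, e.g.:

GST_DEBUG=3 deepstream-app -c deepstream_app_config_yoloV3.txt > gst_debug.log 2>&1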

Thanks, I solved it by modifying the [sink0] group in deepstream_app_config_yoloV3.txt:
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
#type=2
#sync=0
#source-id=0
#gpu-id=0
#nvbuf-memory-type=0
#1=mp4 2=mkv
#container=1
#1=h264 2=h265
#codec=1
#output-file=yolov4.mp4
type=3
container=1
codec=1
enc-type=0
sync=1
bitrate=2000000
profile=0
output-file=out_yolov3.mp4
source-id=0
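For context, my understanding of why this fixes it: the shipped config uses type=2 (EglSink) for [sink0], which needs a working display, and inside the headless container the pipeline can then never reach PAUSED; the two "Unknown or legacy key" lines are only warnings about deprecated nvinfer keys and are not the cause. With type=3 the output is encoded to out_yolov3.mp4 instead. If no output file is needed at all, a fakesink should also work, per the type values in the comments above (a sketch):

[sink0]
enable=1
# 1=FakeSink: discard the output, useful for a pure inference/perf run
type=1
sync=0
source-id=0
gpu-id=0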
