DeepStream Python with YOLOv4

• Hardware Platform (Jetson / GPU): Xavier AGX
• DeepStream Version: 5.0

Hi, we’ve already followed this guide to convert our YOLOv4 model from Darknet to TensorRT, and the model works fine with DeepStream in the C/C++ version.

But for our work we need to run DeepStream through the Python bindings, and we ran into a problem while applying the model to deepstream_test3.
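On the Python side we only changed how the pgie is configured; roughly like this (a minimal sketch following deepstream_test_3.py, with our own config file name):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Primary inference element, as in deepstream_test_3.py, but pointed at our
# YOLOv4 config instead of the stock dstest3_pgie_config.txt
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_yolov4_config.txt")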

Here’s the content of the config file (dstest3_pgie_yolov4_config.txt).

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
# proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../sources/deepstream_yolov4/yolov4_1_3_608_608_static.engine
labelfile-path=../../../../sources/deepstream_yolov4/labels.txt
# int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
#output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid


[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Here’s the output error message.

Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating transform 
 
Creating EGLSink 

Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  file:///home/user/Downloads/l6c051min.mp4
Starting pipeline 

0:00:03.522080443 27501     0x296124d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_yolov4/yolov4_1_3_608_608_static.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608       
1   OUTPUT kFLOAT boxes           22743x1x4       
2   OUTPUT kFLOAT confs           22743x80        

0:00:03.522278340 27501     0x296124d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_yolov4/yolov4_1_3_608_608_static.engine
0:00:03.534536733 27501     0x296124d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_yolov4_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: qtdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fae0457c8 (GstCapsFeatures at 0x7f2c00dc00)>
0:00:03.820124695 27501     0x296189e0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:03.820575852 27501     0x296189e0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:734> [UID = 1]: Failed to parse bboxes

Why is the content of your config different from config_infer_primary_yoloV4.txt at master in the NVIDIA-AI-IOT/yolov4_deepstream repository on GitHub?

We had tried to modify the config that ships with deepstream_test3.
In the end we found that using the same config file as the repository, i.e., yolov4_deepstream/config_infer_primary_yoloV4.txt, works. The config we posted above does not register a custom bounding-box parser, so nvinfer falls back to its default DetectNet-style parser, which looks for a coverage output layer; since the YOLOv4 engine only exposes the boxes and confs outputs, parsing fails with the error shown above.
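The entries that were missing from our config are the ones that register the repo's custom parser library, roughly like this (a sketch, not a verbatim copy: the library path depends on where libnvdsinfer_custom_impl_Yolo.so is built from the repo's nvdsinfer_custom_impl_Yolo sources, and the function name is the one the repo exports, if memory serves):

[property]
# ... same model-engine-file, labelfile-path, batch-size, etc. as above ...
# Custom YOLOv4 output parsing (names taken from the yolov4_deepstream repo)
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=<path-to-build>/libnvdsinfer_custom_impl_Yolo.so

With these entries present, nvinfer calls the custom function on the boxes/confs outputs instead of looking for a coverage layer.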
