Custom Vision model fails to work with DeepStream 6.0

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version: 4.6.1
• TensorRT Version:
• NVIDIA GPU Driver Version: 10.2

When I try to use a Custom Vision model (exported in ONNX format) with DeepStream SDK 6.0.1, I get the following error:
deepstream-test5-app: nvdsparsebbox_Yolo.cpp:329: bool NvDsInferParseYoloV2(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&): Assertion `layer.inferDims.numDims == 3' failed.

Any clue on how can I fix it please?

1. The config does not look right; it seems to include "parse-bbox-func-name=NvDsInferParseCustomYoloV3". Is your model YOLO?
2. What is your start command?
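For context, the parser is wired up in the [property] group of the nvinfer config. A sketch of the relevant keys, assuming the objectDetector_Yolo sample library (the function name and library path are illustrative and must match what your custom parser library actually exports):

```
[property]
# Must name a function exported by the custom parser library;
# a YOLOv3 parser will not match a YOLOv2-shaped output.
parse-bbox-func-name=NvDsInferParseCustomYoloV2
custom-lib-path=/path/to/libnvdsinfer_custom_impl_Yolo.so
```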

Yes, my model is based on YOLOv2, and I used deepstream-test5-app to start the app.
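For reference, deepstream-test5-app is normally launched with a config file via the -c flag; a typical invocation looks like this (the config path is illustrative):

```
deepstream-test5-app -c configs/test5_config_file_src_infer.txt
```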

Please refer to this YOLO demo: yolov4_deepstream/deepstream_yolov4 at master · NVIDIA-AI-IOT/yolov4_deepstream · GitHub

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.