DeepStream YoloV3 ONNX does not work

The YoloV3 model with cfg&weights works well in DeepStream, but after converting the model to ONNX, the detections I get in DeepStream with the ONNX model have smaller rects than with cfg&weights. Why?

• Hardware Platform (Jetson / GPU) NX
• DeepStream Version 5.0 preview
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hi,

Could you share which sample you use for the cfg&weights model and which for the ONNX model?
Then we can give you a more precise comment.

Thanks.

I use deepstream-test3. The cfg&weights come from https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg and https://pjreddie.com/media/files/yolov3.weights. The ONNX model was converted with /usr/src/tensorrt/samples/python/yolov3_onnx/yolov3_to_onnx.py under Python 3.
I also tried yolov3-10.onnx from https://github.com/onnx/models, and I got this:
Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1592> [UID = 1]: Trying to create engine from model files

Input filename: /media/nvidia/ZHINK/yolov3-10.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: keras2onnx
Producer version: 1.5.1
Domain: onnx
Model version: 0
Doc string:

WARNING: [TRT]: onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: (Unnamed Layer* 224) [Constant]: invalid weights type of Bool
ERROR: [TRT]: (Unnamed Layer* 224) [Constant]: invalid weights type of Bool
Segmentation fault (core dumped)
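For what it's worth, the INT64 warnings above are usually harmless; the crash comes from the Bool constants. Based only on the warning text, the parser's cast-down step behaves roughly like the sketch below (the function name is made up for illustration):

```python
# Rough sketch of what TensorRT's ONNX parser warning describes
# (assumption inferred from the log, not the actual parser code):
# INT64 weights are cast to INT32 and values outside the INT32
# range are clamped.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def cast_int64_to_int32(values):
    """Return the clamped INT32 weights and whether any value was clamped."""
    clamped = [min(max(v, INT32_MIN), INT32_MAX) for v in values]
    return clamped, clamped != values
```

A value like 2**40 would be clamped to 2**31 - 1, which triggers the "weights outside the range of INT32 was clamped" warning.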

Where can I find this YoloV3 ONNX model?

Hi,

Do you use deepstream-test3 for both cfg&weights and onnx model?
Thanks.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-engine-file=/home/nvidia/work/person/model_b6_gpu0_fp16.engine
onnx-file=/media/nvidia/ZHINK/yolov3-10.onnx
labelfile-path=/home/nvidia/work/person/labels.txt
batch-size=6
network-mode=2
num-detected-classes=80
interval=9
gie-unique-id=1
network-type=0
cluster-mode=2
model-color-format=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=/home/nvidia/work/libnvdsinfer_custom_impl_Yolo.so
#engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.3
pre-cluster-threshold=0.8

This is the nvinfer config file.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Just want to clarify again.

It looks like the sample for cfg&weights is not using DeepStream. Is that right? Maybe darknet?
So the problem would be that the bboxes differ after converting the model for DeepStream (TensorRT)?

Thanks.
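One hypothetical cause of smaller rects worth checking: with maintain-aspect-ratio=1 the frame is letterboxed into the network input using a single scale factor plus padding, and a bbox parser that instead assumes independent x/y stretching will undersize boxes along the padded axis. A sketch of the arithmetic (all numbers illustrative, not from the original posts):

```python
# With maintain-aspect-ratio=1, a frame is resized by ONE factor so the
# whole image fits inside the network input, and the remainder is padded.
def letterbox_scale(src_w, src_h, net_w, net_h):
    return min(net_w / src_w, net_h / src_h)

# Illustrative numbers: a 1920x1080 frame into a 608x608 YOLOv3 input.
scale = letterbox_scale(1920, 1080, 608, 608)  # single factor, ~0.317

# A naive parser that stretches each axis independently would use
# 608/1080 (~0.563) vertically. Mapping network-space box heights back
# to frame space means dividing by the factor, so dividing by the larger
# naive factor yields boxes that are too small vertically.
naive_stretch_y = 608 / 1080
```

If the ONNX pipeline's postprocessing uses the naive per-axis factors while the cfg&weights pipeline un-letterboxes correctly, the ONNX boxes will come out consistently smaller, which matches the symptom described at the top of the thread.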