Incorrect bounding box detection with a custom YOLOv7 model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: deepstream-6.4
• JetPack Version (valid for Jetson only): Version: 6.0-b52
• TensorRT Version: Version: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only): NA
• Issue Type (questions, new requirements, bugs): question

I trained a YOLOv7 model, starting from pretrained weights, on my own data for helmet detection and obtained best.pt. Using the following command, I exported it to ONNX as best.onnx:

python3 export.py --weights /home/anil/Downloads/Helmet/Data/yolov7/runs/train/yolov7-custom6/weights/best.pt --simplify --img-size 640 640 --batch-size 4

Then I built the best_yolo_model.engine file for deepstream-app with the following command:

/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best_yolo_model.engine

These are my attached config files:

deepstream_app_helmet_config.txt (867 Bytes)

helmet_config_infer_primary_yoloV7.txt (713 Bytes)

When I run deepstream-app with helmet_config_infer_primary_yoloV7.txt, I get the following result:

Can someone help me where I am going wrong?

You need to check whether the net-scale-factor value is right for your model; the preprocessing formula is y = net_scale_factor * (x - mean). You can also add some logging to the source of your libnvdsinfer_custom_impl_Yolo.so library to check whether there are any outputs.

@yuweiw Thanks for your response.

Do I need to calculate the net-scale-factor value myself? The link only gives a theoretical explanation. Also, the libnvdsinfer_custom_impl_Yolo.so file is encrypted, so I cannot add logs to it. Thanks.

@rajupadhyay59 Could you help me resolve this? Thanks.

Try this repo: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

@PhongNT I am working on the same repo except, I have my own customized model. Thanks

Yes. You need to set the value based on your training parameters. You can also refer to our demo yolo_deepstream. The postprocess is open source: nvdsinfer_custom_impl_Yolo.

I would suggest first making sure your custom post-process script works by printing out all of its outputs: put cout statements in your custom post-processor.
Then you can proceed to alter the parameters.
You'll have to debug parameters like net-scale-factor, as yuweiw mentioned.

@rajupadhyay59 Thanks for your attention.

I have recently started working with deepstream-app. What do you mean by a custom post-process script? Do you mean the configuration file? If not, where can I find the custom post-process script?

Like @PhongNT mentioned, please refer to the yolo_deepstream repo.

In particular, nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp is what I mean by the custom post-process script. It is needed when working with custom models. (Building this script gives you the mentioned .so file.)

Here is another link (my repo).
I myself took reference from the yolo_deepstream repo mentioned above and the samples provided by NVIDIA.

Refer to both repos and modify your code accordingly. Add cout statements, then build the library to get your .so file.

I would still recommend reading the DeepStream manuals before doing all this, to get a better understanding.

Can you share your command?

Important: please export the ONNX model with the new export file in DeepStream-Yolo/utils at master · marcoslucianops/DeepStream-Yolo · GitHub

@PhongNT Thanks for your attention. I have already shared the command.

python3 export.py --weights /home/anil/Downloads/Helmet/Data/yolov7/runs/train/yolov7-custom6/weights/best.pt --simplify --img-size 640 640 --batch-size 4

Try using the export file.

Thanks @PhongNT. I used the script you suggested to export the model and it works. But how is the export file you suggested different from the export script given in this repo: GitHub - WongKinYiu/yolov7: Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.