The bbox only covers part of a large object with DetectNet_v2

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) RTX4090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) Detectnet_v2
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5
• Training spec file(If have, please share here) Exactly the same as the detectnet_v2 notebook, using the BDD100K car dataset
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

The image below is the inference result of detectnet_v2. When a car is very close to the camera it appears large in the frame, and the bbox only covers part of the object. As a result, in the DeepStream app there may be 2 or 3 bboxes on one car in a single frame, and nv_tracker can't work.

This is another inference result from a PicoDet network trained with PaddlePaddle on the same dataset. It works well on all large objects.

Is there any parameter in the training config I can adjust for large objects?

You can refer to DetectNet_v2 - NVIDIA Docs

  • cov_radius_x (float): x-radius of the coverage ellipse
  • cov_radius_y (float): y-radius of the coverage ellipse
  • bbox_min_radius (float): The minimum radius of the coverage region to be drawn for boxes
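These parameters belong to the bbox_rasterizer_config section of the DetectNet_v2 training spec. As a sketch (the class name and the values other than the three radii are illustrative, taken from typical notebook defaults), raising them for the car class might look like:

```protobuf
bbox_rasterizer_config {
  target_class_config {
    key: "car"
    value {
      # Center of the coverage ellipse, as a fraction of the bbox
      cov_center_x: 0.5
      cov_center_y: 0.5
      # Radii of the coverage ellipse; larger values make the
      # coverage blob fill more of the ground-truth box
      cov_radius_x: 1.0
      cov_radius_y: 1.0
      # Minimum coverage radius drawn for small boxes
      bbox_min_radius: 1.0
    }
  }
  deadzone_radius: 0.4
}
```

A full retraining (not just a few epochs) is normally needed for a rasterizer change to show its effect.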

I changed cov_radius_x and cov_radius_y to 1.0, and set bbox_min_radius=1.0.
The result is the same.

Did you run training with the new setting?

Yes, just 10 epochs with the new config. I'm using a YOLO model now :(

OK, yolov4_tiny can be an option.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.