Custom PeopleNet, false positives

Hi,
I have re-trained PeopleNet with my custom dataset. However, when I run tlt-infer I get false positives, e.g., a chair detected as a person, in addition to persons being detected correctly.

How can I avoid false positives?

Is there a negative-images concept similar to YOLO's?
In YOLO, you provide a folder of images that the network should treat as negatives, which minimizes wrong detections.

Please note that by negative image I mean an image that does not contain any of the classes in the training set.
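For illustration, in Darknet-style YOLO training a negative image is added simply by listing it alongside the others with an empty label file (the file names below are hypothetical):

data/train/negative_0001.jpg   <- listed in train.txt like any other image
data/train/negative_0001.txt   <- empty label file, i.e. no objects present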

Can I use DontCare in the KITTI labels to mark the entire dimensions of a negative image and feed that to TLT as part of the training? Would that help?
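As a sketch of what that would look like, a KITTI label line marking an entire 608x608 frame as DontCare could be written as below (only the class name and the bounding-box fields matter to DetectNet_v2; the remaining KITTI fields are zero-filled here):

DontCare 0.00 0 0.00 0.00 0.00 608.00 608.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00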

Your guidance, please.
Thanks

Normally, it does not need negative images.

Several questions.
How many classes did you train?
How many images in your training dataset, and in each class?

If possible, could you please share the training spec and mAP result?

Hi Morganh,
How many classes? 2 classes.

man = 10160 instances (not images)
woman = 10649 instances (not images)

Resolution: 608x608
mAP is around 65% for both classes.
Should I freeze more layers, or is freezing block 0 enough? Or, since all my classes fall under PeopleNet, should I freeze more?

The specs file:

random_seed: 42
model_config {
  pretrained_model_file: "/workspace/pretrained_model/tlt_peoplenet_vunpruned_v2.0/resnet34_peoplenet.tlt"
  num_layers: 34
  freeze_blocks: 0
  freeze_blocks: 1
  freeze_blocks: 2
  freeze_blocks: 3
  arch: "resnet"
  use_batch_norm: true
  activation {
    activation_type: "relu"
  }
  dropout_rate: 0.1
  objective_set: {
    cov {}
    bbox {
      scale: 35.0
      offset: 0.5
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
}

augmentation_config {
  preprocessing {
    output_image_width: 608
    output_image_height: 608
    output_image_channel: 3
    min_bbox_width: 1.0
    min_bbox_height: 1.0
  }
  spatial_augmentation {
    hflip_probability: 0.5
    vflip_probability: 0.0
    zoom_min: 1.0
    zoom_max: 1.0
    translate_max_x: 8.0
    translate_max_y: 8.0
  }
  color_augmentation {
    color_shift_stddev: 0.0
    hue_rotation_max: 25.0
    saturation_shift_max: 0.2
    contrast_scale_max: 0.1
    contrast_center: 0.5
  }
}

bbox_rasterizer_config {
  target_class_config {
    key: "man"
    value: {
      cov_center_x: 0.5
      cov_center_y: 0.5
      cov_radius_x: 1.0
      cov_radius_y: 1.0
      bbox_min_radius: 1.0
    }
  }
  target_class_config {
    key: "woman"
    value: {
      cov_center_x: 0.5
      cov_center_y: 0.5
      cov_radius_x: 1.0
      cov_radius_y: 1.0
      bbox_min_radius: 1.0
    }
  }
  deadzone_radius: 0.67
}

cost_function_config {
  target_classes {
    name: "man"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  target_classes {
    name: "woman"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  enable_autoweighting: false
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
}

training_config {
  batch_size_per_gpu: 7
  num_epochs: 700
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-6
      max_learning_rate: 5e-4
      soft_start: 0.1
      annealing: 0.7
    }
  }
  regularizer {
    type: L1
    weight: 3e-9
  }
  optimizer {
    adam {
      epsilon: 1e-08
      beta1: 0.9
      beta2: 0.999
    }
  }
  cost_scaling {
    enabled: false
    initial_exponent: 20.0
    increment: 0.005
    decrement: 1.0
  }
  checkpoint_interval: 10
}

postprocessing_config {
  target_class_config {
    key: "man"
    value: {
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.15
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 10
      }
    }
  }
  target_class_config {
    key: "woman"
    value: {
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.15
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 10
      }
    }
  }
}

dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tfrecrods/train/*"
    image_directory_path: "/workspace/dataset/train"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "man"
    value: "man"
  }
  target_class_mapping {
    key: "woman"
    value: "woman"
  }
  #validation_fold: 0
  validation_data_source: {
    tfrecords_path: "/workspace/tfrecords_test/*"
    image_directory_path: "/workspace/dataset/test"
  }
}

evaluation_config {
  validation_period_during_training: 10
  first_validation_epoch: 1
  minimum_detection_ground_truth_overlap {
    key: "man"
    value: 0.5
  }
  minimum_detection_ground_truth_overlap {
    key: "woman"
    value: 0.5
  }
  evaluation_box_config {
    key: "man"
    value {
      minimum_height: 4
      maximum_height: 9999
      minimum_width: 4
      maximum_width: 9999
    }
  }
  evaluation_box_config {
    key: "woman"
    value {
      minimum_height: 4
      maximum_height: 9999
      minimum_width: 4
      maximum_width: 9999
    }
  }
}

Could you delete "freeze_blocks" and train again?
For your case, I suggest you run more experiments to get a higher mAP.
65% seems a little low. Are your images as difficult as COCO data? Besides man/woman, are there any other objects?
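For reference, removing them from the spec above would leave a model_config like this (everything else unchanged):

model_config {
  pretrained_model_file: "/workspace/pretrained_model/tlt_peoplenet_vunpruned_v2.0/resnet34_peoplenet.tlt"
  num_layers: 34
  arch: "resnet"
  use_batch_norm: true
  activation {
    activation_type: "relu"
  }
  dropout_rate: 0.1
  # objective_set and training_precision stay as in the original spec
}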

Also, you can try different batch sizes (e.g., bs4, bs16).
Possibly the max_learning_rate also needs finetuning. A sketch of these edits follows.
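These values are only starting points to experiment with, not validated settings:

training_config {
  batch_size_per_gpu: 16   # also compare with 4
  num_epochs: 700
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-6
      max_learning_rate: 1e-3   # finetune this value
      soft_start: 0.1
      annealing: 0.7
    }
  }
  # regularizer, optimizer, cost_scaling, checkpoint_interval as before
}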

Did it work? Did you try the "dontcare" class?

Deleting all freeze_blocks did not make much difference (compared to keeping freeze_blocks: 0 only).
Changing max_learning_rate to 0.001 gave a noticeably better mAP.
Also, switching to mixed precision allows me to use a batch size of 32.
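In case it helps others: if I recall the TLT 2.0 docs correctly, mixed precision is enabled by adding the --use_amp flag to the train command (the paths and key below are placeholders):

tlt-train detectnet_v2 -e /workspace/specs/train.txt \
                       -r /workspace/results -k $API_KEY --use_amp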

Thanks for your support

Thanks for the info, can we close this topic?
