TrafficCamNet car detection has very low accuracy

I am using TrafficCamNet to detect cars, but no matter how I adjust the parameters, the results are very poor.

Can someone tell me why?

Can you share the training spec file?
And please upload the full training log as well.


I was facing the same problem when training my first custom model with TLT.
In my case, the YOLO-to-KITTI dataset conversion wasn’t done properly. Just draw the bounding boxes and check them.

You can use my code to draw the boxes if you need it:
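The attached code isn’t shown in the thread, but a minimal sketch of the same idea, parsing KITTI labels and drawing the boxes with OpenCV, could look like this (the function names are my own, and `cv2` is assumed to be installed for the drawing step):

```python
def parse_kitti_line(line):
    """Return (class_name, (xmin, ymin, xmax, ymax)) from one KITTI label line.

    KITTI fields: type truncated occluded alpha xmin ymin xmax ymax ...
    so the box coordinates are fields 4-7.
    """
    fields = line.split()
    return fields[0], tuple(float(v) for v in fields[4:8])


def draw_boxes(image_path, label_path, out_path):
    """Draw every labeled box onto the image and save it for visual checking."""
    import cv2  # local import so the parser works without OpenCV installed

    img = cv2.imread(image_path)
    with open(label_path) as f:
        for line in f:
            cls, (x1, y1, x2, y2) = parse_kitti_line(line)
            cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
            cv2.putText(img, cls, (int(x1), int(y1) - 4),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(out_path, img)
```

If the drawn boxes don’t sit on the objects, the conversion (not the training) is the problem.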


Thanks, but I can probably only upload it tomorrow from the company computer.

Thanks!!! I suspect it is an error in the format of the dataset. I used an old script to convert YOLO to KITTI. It performed well for license plate recognition before, which made me think the conversion was correct, but maybe now I need to try your method! 😊
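For reference, the YOLO-to-KITTI conversion mentioned above can be sketched as follows. This is a minimal sketch, not the poster’s actual script: YOLO lines hold normalized center/size values, while KITTI wants absolute pixel corners, so the image size is needed to convert:

```python
def yolo_to_kitti(line, img_w, img_h, class_names):
    """Convert one YOLO label line ('cls xc yc w h', normalized) to a KITTI line.

    KITTI format: type truncated occluded alpha xmin ymin xmax ymax
                  height width length x y z rotation_y
    (the 3D fields are zeroed, as is common for 2D detection datasets).
    """
    cls_id, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    xmin = (xc - w / 2) * img_w
    ymin = (yc - h / 2) * img_h
    xmax = (xc + w / 2) * img_w
    ymax = (yc + h / 2) * img_h
    name = class_names[int(cls_id)]
    return (f"{name} 0.00 0 0.00 {xmin:.2f} {ymin:.2f} {xmax:.2f} {ymax:.2f} "
            f"0.00 0.00 0.00 0.00 0.00 0.00 0.00")
```

A quick sanity check: a box centered in a 1920x1080 frame covering half the image in each dimension should come out as corners (480, 270) to (1440, 810).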

detectnet_v2_train_resnet18_kitti_car.txt (3.0 KB)
status.json (31.7 KB)

Oh, I made a small change to your box-drawing code (the KITTI label xmin should be read as xmin=float(ls[4]) rather than xmin=float(ls[5])), and after that the result looked correct.

Thanks for the heads up.

Haha, the bad news is that there is nothing wrong with my KITTI dataset after all.

How many images are in your training dataset?
According to your status.json, you ran training for 120 epochs in 10 minutes.

Also, what is the average resolution of the car class? Are the cars small objects?

About 1000 images; this status.json only covers part of the run. In fact, the car resolution varies greatly, since the data comes from a 1080p camera. In the pictures, some of the cars are large and others are small…

For detectnet_v2, the train tool does not support training on images of multiple resolutions.
Do your training images have the same resolution?

Yes, I think they are all 1920×1080.

Since you set 960x544 in the spec, please resize the images and labels from 1920x1080 to 960x544.

Refer to below.

The train tool does not support training on images of multiple resolutions, or resizing images during training. All of the images must be resized offline to the final training size and the corresponding bounding boxes must be scaled accordingly.


Or use the approach below.

The dataloader does support resizing images to the input resolution defined in the specification file. This can be enabled by setting the enable_auto_resize parameter to true in the augmentation_config module of the spec file.
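For reference, in a detectnet_v2 spec this would look roughly like the fragment below. Exact field placement can vary between TLT/TAO versions, so check it against the documentation for your version:

```
augmentation_config {
  preprocessing {
    output_image_width: 960
    output_image_height: 544
    output_image_channel: 3
    enable_auto_resize: true
    min_bbox_width: 1.0
    min_bbox_height: 1.0
  }
}
```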


Oh!! I overlooked those settings, maybe because I got good results when I trained license plate recognition with LPDNet and didn’t need to adjust the labels then.

Oh god, thanks very much!!! This problem bothered me for several days. After setting the **enable_auto_resize** parameter, the accuracy improved a lot. I’d also like to know: what training accuracy should I expect?

Thanks for the info.

It is hard to draw a general conclusion. It depends on the input size, the backbone, the number of images, etc.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.