Transfer learning Custom Model has trouble detecting "OpenEyes" from more than a foot away from the camera

JetPack 4.6, TensorFlow 2.5.0
I used the camera-capture tool from jetson-utils.

Capture Code:
camera-capture --width=640 --height=480 --camera=/dev/video1

Converting to Onnx Code:
python3 onnx_export.py --model-dir=models/Eyes

Training Code:
python3 train_ssd.py --dataset-type=voc --data=data/Eyes --model-dir=models/Eyes --batch-size=2 --workers=1

Running Model Code:
detectnet --model=models/Eyes/ssd-mobilenet.onnx --labels=models/Eyes/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes /dev/video1

Label Method/Details:
I put a square box around each eye, labeled “Open”, while sitting about 2-3 feet from the webcam. I also made another label, “Closed”. The odd thing is that “Closed” detection works just fine from a distance, but the model only detects my open eyes if I move really close to the webcam.
I’ve tweaked the batch size and epoch count, both increasing and decreasing; that didn’t change the behavior. I also created a whole new model with a new dataset, adding a “head down” label and other classes alongside “Open eyes”. Same behavior.

Possible solutions: Maybe changing the capture resolution? Labeling my eyes in a different way?
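The distance-dependent failure is consistent with a small-object problem: SSD-MobileNet resizes each frame down to a 300x300 input, so an eye box that is already small in the 640x480 capture shrinks further before the network ever sees it. A rough sanity check of that shrinkage (the eye-box sizes below are hypothetical examples, not measurements from the actual dataset):

```python
# Rough sanity check: how big does an eye bounding box end up
# after SSD-MobileNet resizes the frame to its 300x300 input?
# The capture resolution matches the camera-capture command above;
# the example box sizes are hypothetical, not from the real dataset.

CAPTURE_W, CAPTURE_H = 640, 480   # camera-capture resolution
SSD_INPUT = 300                   # SSD-MobileNet input size

def scaled_box(w_px, h_px):
    """Return a labeled box's size (px) after resizing to the SSD input."""
    return (w_px * SSD_INPUT / CAPTURE_W,
            h_px * SSD_INPUT / CAPTURE_H)

# An open eye labeled up close vs. at 2-3 feet (hypothetical sizes):
print(scaled_box(120, 60))  # close to the camera -> (56.25, 37.5)
print(scaled_box(40, 20))   # 2-3 feet away      -> (18.75, 12.5)
```

If the far-away boxes come out much under roughly 20 px on a side, they fall below the scales the default SSD anchors handle well, which would support the capture-resolution idea above, or labeling a somewhat larger region around the eye.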

Hi,

Could you share some examples of your training data and testing data?

Usually, accuracy is strongly related to how the dataset is collected.
Do you test the “Open” class with an image inside a square box, but the “Closed” class with a real eye?

Thanks.

This is an example; it is not directly from the dataset, as I don’t have access to it right now, but it shows basically how I labeled it. I retrained the first model with pictures like example 1 and trained another model with similar pictures labeled like example 2. Both gave similar results and behavior.


[image: eyesopen2]

Hi,

Not sure if this accuracy issue is caused by the way the data was collected.

We have a tutorial that guides users through collecting a custom dataset.
Could you check whether it gives you some ideas first?

Thanks.