I’m trying to refine my online data augmentation parameters related to color_augmentation for DetectNet_v2. Reading the documentation, I don’t fully understand the effect of the five parameters (color_shift_stddev, hue_rotation_max, …).
It would be convenient to see the actual random color effects they create on one original image. Is there any way to do this? Alternatively, can you give more mathematical details about these parameters, or point to example images?
You can enable TensorBoard. Refer to the user guide.
color_shift_stddev (float): Standard deviation for color shift augmentation.
hue_rotation_max (float): Maximum hue rotation, in degrees.
saturation_shift_max (float): Maximum value for saturation shift.
contrast_scale_max (float): Maximum scale shift for contrast augmentation. Set to 0.0 to disable.
contrast_center (float): Center point for contrast augmentation. Set to 0.5 to …
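For what it’s worth, here is one hypothetical reading of three of the parameters above, sketched in Python: hue_rotation_max and saturation_shift_max as random perturbations in HSV space, and contrast_scale_max/contrast_center as a random channel scaling about the center. This is only a guess at the semantics, not the verified TAO implementation; the function name and default values are mine.

```python
import colorsys
import random

def augment_pixel(r, g, b,
                  hue_rotation_max=25.0,      # degrees (assumed)
                  saturation_shift_max=0.2,
                  contrast_scale_max=0.1,
                  contrast_center=0.5):
    """Apply one random draw of hue, saturation, and contrast
    augmentation to a single normalized (0..1) RGB pixel.
    Hypothetical interpretation of the documented parameters."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Hue: rotate by a random angle within +/- hue_rotation_max degrees.
    h = (h + random.uniform(-hue_rotation_max, hue_rotation_max) / 360.0) % 1.0
    # Saturation: shift by a random amount within +/- saturation_shift_max.
    s = min(max(s + random.uniform(-saturation_shift_max, saturation_shift_max), 0.0), 1.0)
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    # Contrast: scale each channel about contrast_center; scale_max = 0 disables.
    scale = 1.0 + random.uniform(-contrast_scale_max, contrast_scale_max)
    return tuple(min(max((c - contrast_center) * scale + contrast_center, 0.0), 1.0)
                 for c in (r, g, b))
```

With all three ranges set to 0.0 the pixel passes through unchanged, which matches the documented “Set to 0.0 to disable” behavior for contrast.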
As I explained, I read the docs.
Training a model is costly, and we don’t want to risk reducing a model’s performance by guessing one (bad) value for color augmentation. “Standard deviation for color shift augmentation” is not clear to me; I’m a typical “standard customer”. May I ask you to request that the NVIDIA engineers extend the docs with what it does, precisely?
color (R, G, B) → [color_shift_stddev] → color (R2, G2, B2) ??
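One plausible answer to that question (an assumption on my part, not the verified TAO behavior) is that each channel is offset by an independent sample from a zero-mean Gaussian whose standard deviation is color_shift_stddev, so (R2, G2, B2) = (R + dR, G + dG, B + dB). A minimal NumPy sketch:

```python
import numpy as np

def color_shift(image, color_shift_stddev, rng=None):
    """Hypothetical reading of color_shift_stddev: offset each channel
    of a normalized HxWx3 image by an independent sample from
    N(0, color_shift_stddev), then clip back to [0, 1]."""
    rng = np.random.default_rng(rng)
    shift = rng.normal(0.0, color_shift_stddev, size=3)  # (dR, dG, dB)
    return np.clip(image + shift, 0.0, 1.0)
```

Under this reading, a stddev of 0.0 leaves the image unchanged, and larger values shift the whole image’s color balance more aggressively.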
Alternatively, they could extend the docs to show what each of these parameters does on real example images.
Instead of training, there is a tool for visualization; see Offline Data Augmentation - NVIDIA Docs. The color_shift_stddev is similar to the blur setting.
We implemented software of our own for color augmentation.
You can close this topic.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.