Failure with Transfer Learning with UNet: Multiclass Semantic Segmentation on Color Images

I am using cv_samples_v1.3.0/unet as a guide to do transfer learning on a dataset composed of color images, and I need a multiclass result using semantic segmentation.

I fully ran the notebook unet_isbi to completion with the default values and test data.

I changed the training specification; it is attached as unet_train_resnet_unet_6S.txt (1.6 KB)

Some of the changes I made:

To account for input image size (multiples of 16, as per the documentation) …

  model_input_width: 592
  model_input_height: 800
  model_input_channels: 3

For multiclass support I changed …

loss: "cross_entropy"

And the classes:

data_class_config {
  target_classes {
    name: "E100"
    mapping_class: "E100"
    label_id: 100
  }
  target_classes {
    name: "S70"
    mapping_class: "S70"
    label_id: 70
  }
  target_classes {
    name: "N40"
    mapping_class: "N40"
    label_id: 40
  }
  target_classes {
    name: "L10"
    mapping_class: "L10"
    label_id: 10
  }
  target_classes {
    name: "Background"
    mapping_class: "Background"
    label_id: 0
  }
}

I have guaranteed that ALL pixels in the semantic mask files fall in the set {0, 10, 40, 70, 100}.
I'm using the unet_isbi notebook, and it runs without error, including section 5 (Evaluate trained models), which outputs the following JSON:

{
    'Background': {'precision': 0.97737277, 'Recall': 1.0, 'F1 Score': 0.9885569201451342, 'iou': 0.97737277},
    'L10': {'precision': nan, 'Recall': 0.0, 'F1 Score': nan, 'iou': 0.0},
    'N40': {'precision': nan, 'Recall': 0.0, 'F1 Score': nan, 'iou': 0.0},
    'S70': {'precision': nan, 'Recall': 0.0, 'F1 Score': nan, 'iou': 0.0},
    'E100': {'precision': nan, 'Recall': 0.0, 'F1 Score': nan, 'iou': 0.0}
}

The training failed! And I have no clue as to where to look next.

Many thanks!

Are your images .jpg or .png?
Refer to Problems encountered in training unet and inference unet - #26; you can try the following.
Change the training images from jpg to png:
$ for i in *.jpg ; do convert "$i" "${i%.*}.png" ; done

Thanks!

I changed the images to PNG and completely deleted the contents of the folder isbi_experiment_unpruned, but I get the same nan results on evaluate.

Refer to Problems encountered in training unet and inference unet - #26
and Multiple classes not detected? - #11 by Morganh
Please note that the pixel integer value should be equal to the value of the label_id provided in the spec.
UNet expects the images and corresponding masks encoded as images. Each mask image is a single-channel image, where every pixel is assigned an integer value that represents the segmentation class.

Please inspect each pixel value of the mask image.

Thanks! I’m doing that

I wrote a program to count the pixels per ID value, and they are all fine!
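The counting program itself isn't attached; a minimal Pillow/NumPy sketch producing the same kind of per-file report could look like this (the directory path, file pattern, and expected value set are assumptions, not the original code):

```python
# Count pixels per label value in every mask and flag anything outside
# the expected set of label_id values from the spec file.
import glob
import os

import numpy as np
from PIL import Image

EXPECTED = {0, 10, 40, 70, 100}  # label_id values from the spec

def count_labels(mask_dir):
    for path in sorted(glob.glob(os.path.join(mask_dir, "*.png"))):
        mask = np.array(Image.open(path))
        values, counts = np.unique(mask, return_counts=True)
        per_value = dict(zip(values.tolist(), counts.tolist()))
        # Any pixel whose value is not a configured label_id is "BAD".
        bad = sum(c for v, c in per_value.items() if v not in EXPECTED)
        summary = ", ".join(f"{v}: {per_value.get(v, 0)}" for v in sorted(EXPECTED))
        print(f"{os.path.basename(path)}, {summary}, BAD: {bad}")
```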


P1020007.png, 0: 448962, 10: 21025, 40: 496, 70: 3059, 100: 58, BAD: 0
P1020008.png, 0: 448504, 10: 21388, 40: 654, 70: 2952, 100: 102, BAD: 0
P1020009.png, 0: 439044, 10: 30354, 40: 845, 70: 3234, 100: 123, BAD: 0
P1020010.png, 0: 439767, 10: 28556, 40: 608, 70: 4576, 100: 93, BAD: 0
P1020011.png, 0: 452752, 10: 17341, 40: 447, 70: 2951, 100: 109, BAD: 0
P1020012.png, 0: 452612, 10: 17946, 40: 537, 70: 2417, 100: 88, BAD: 0
P1020013.png, 0: 457016, 10: 13876, 40: 329, 70: 2317, 100: 62, BAD: 0
P1020014.png, 0: 456702, 10: 13838, 40: 483, 70: 2539, 100: 38, BAD: 0
P1020015.png, 0: 461810, 10: 9648, 40: 297, 70: 1797, 100: 48, BAD: 0
P1020016.png, 0: 457818, 10: 13526, 40: 409, 70: 1790, 100: 57, BAD: 0
P1020017.png, 0: 443266, 10: 25413, 40: 687, 70: 4163, 100: 71, BAD: 0
P1020018.png, 0: 444648, 10: 24016, 40: 575, 70: 4292, 100: 69, BAD: 0
P1020019.png, 0: 453011, 10: 17776, 40: 0, 70: 2759, 100: 54, BAD: 0
P1020020.png, 0: 455757, 10: 15088, 40: 0, 70: 2681, 100: 74, BAD: 0
P1020021.png, 0: 463272, 10: 8716, 40: 0, 70: 1580, 100: 32, BAD: 0
P1020022.png, 0: 460001, 10: 11711, 40: 127, 70: 1740, 100: 21, BAD: 0
P1020023.png, 0: 458486, 10: 11810, 40: 515, 70: 2730, 100: 59, BAD: 0
P1020024.png, 0: 463211, 10: 7981, 40: 420, 70: 1949, 100: 39, BAD: 0
P1020025.png, 0: 450812, 10: 19266, 40: 494, 70: 2949, 100: 79, BAD: 0
P1020026.png, 0: 443934, 10: 26056, 40: 513, 70: 3055, 100: 42, BAD: 0
P1020027.png, 0: 447317, 10: 20554, 40: 246, 70: 5308, 100: 175, BAD: 0
P1020028.png, 0: 452602, 10: 15976, 40: 819, 70: 4047, 100: 156, BAD: 0
P1020029.png, 0: 458531, 10: 10047, 40: 531, 70: 4417, 100: 74, BAD: 0
P1020030.png, 0: 458589, 10: 10126, 40: 298, 70: 4478, 100: 109, BAD: 0
P1020031.png, 0: 462452, 10: 8832, 40: 289, 70: 1965, 100: 62, BAD: 0
P1020032.png, 0: 457226, 10: 13178, 40: 680, 70: 2456, 100: 60, BAD: 0
P1020033.png, 0: 459907, 10: 10693, 40: 357, 70: 2589, 100: 54, BAD: 0
P1020034.png, 0: 458065, 10: 12243, 40: 510, 70: 2727, 100: 55, BAD: 0
P1020035.png, 0: 457595, 10: 12297, 40: 270, 70: 3394, 100: 44, BAD: 0
P1020036.png, 0: 458862, 10: 10877, 40: 265, 70: 3533, 100: 63, BAD: 0
P1020037.png, 0: 465058, 10: 6407, 40: 242, 70: 1789, 100: 104, BAD: 0
P1020038.png, 0: 465046, 10: 6464, 40: 497, 70: 1521, 100: 72, BAD: 0
P1020039.png, 0: 456134, 10: 13809, 40: 218, 70: 3355, 100: 84, BAD: 0
P1020040.png, 0: 451252, 10: 18191, 40: 515, 70: 3540, 100: 102, BAD: 0
P1020041.png, 0: 460170, 10: 11775, 40: 505, 70: 1100, 100: 50, BAD: 0
P1020042.png, 0: 463283, 10: 9159, 40: 24, 70: 1094, 100: 40, BAD: 0
P1020043.png, 0: 449870, 10: 20105, 40: 530, 70: 3035, 100: 60, BAD: 0
P1020044.png, 0: 444420, 10: 25455, 40: 604, 70: 3019, 100: 102, BAD: 0
P1020045.png, 0: 422175, 10: 46492, 40: 1131, 70: 3556, 100: 246, BAD: 0
P1020046.png, 0: 413711, 10: 52179, 40: 2345, 70: 5136, 100: 229, BAD: 0
P1020047.png, 0: 462088, 10: 10144, 40: 469, 70: 853, 100: 46, BAD: 0
P1020048.png, 0: 462609, 10: 10017, 40: 222, 70: 694, 100: 58, BAD: 0
P1020049.png, 0: 464067, 10: 6465, 40: 161, 70: 2853, 100: 54, BAD: 0
P1020050.png, 0: 466113, 10: 4973, 40: 142, 70: 2320, 100: 52, BAD: 0
P1020051.png, 0: 467099, 10: 5262, 40: 79, 70: 1131, 100: 29, BAD: 0
P1020052.png, 0: 466447, 10: 5854, 40: 55, 70: 1220, 100: 24, BAD: 0
P1020053.png, 0: 458785, 10: 12741, 40: 490, 70: 1532, 100: 52, BAD: 0
P1020054.png, 0: 449266, 10: 21846, 40: 704, 70: 1692, 100: 92, BAD: 0
P1020055.png, 0: 461822, 10: 8958, 40: 457, 70: 2313, 100: 50, BAD: 0
P1020056.png, 0: 461882, 10: 9371, 40: 300, 70: 1992, 100: 55, BAD: 0
P1020057.png, 0: 456263, 10: 14286, 40: 619, 70: 2346, 100: 86, BAD: 0
P1020058.png, 0: 451747, 10: 18317, 40: 561, 70: 2868, 100: 107, BAD: 0
P1020059.png, 0: 464750, 10: 7897, 40: 255, 70: 660, 100: 38, BAD: 0
P1020060.png, 0: 460274, 10: 12100, 40: 330, 70: 860, 100: 36, BAD: 0
P1020061.png, 0: 446557, 10: 24050, 40: 504, 70: 2424, 100: 65, BAD: 0
P1020062.png, 0: 446850, 10: 23363, 40: 934, 70: 2360, 100: 93, BAD: 0
P1020063.png, 0: 451786, 10: 19323, 40: 0, 70: 2428, 100: 63, BAD: 0
P1020064.png, 0: 451899, 10: 19199, 40: 62, 70: 2371, 100: 69, BAD: 0
P1020065.png, 0: 446823, 10: 25111, 40: 0, 70: 1616, 100: 50, BAD: 0
P1020066.png, 0: 441813, 10: 29572, 40: 494, 70: 1689, 100: 32, BAD: 0
P1020067.png, 0: 461030, 10: 10061, 40: 169, 70: 2295, 100: 45, BAD: 0
P1020068.png, 0: 462967, 10: 7781, 40: 377, 70: 2417, 100: 58, BAD: 0
P1020069.png, 0: 462920, 10: 7764, 40: 419, 70: 2407, 100: 90, BAD: 0
P1020070.png, 0: 463592, 10: 6971, 40: 429, 70: 2496, 100: 112, BAD: 0
P1020071.png, 0: 468470, 10: 2740, 40: 255, 70: 2057, 100: 78, BAD: 0
P1020072.png, 0: 467231, 10: 3925, 40: 212, 70: 2144, 100: 88, BAD: 0
P1020073.png, 0: 463645, 10: 8006, 40: 112, 70: 1766, 100: 71, BAD: 0
P1020074.png, 0: 459467, 10: 11703, 40: 302, 70: 2042, 100: 86, BAD: 0
P1020075.png, 0: 460108, 10: 12381, 40: 97, 70: 973, 100: 41, BAD: 0
P1020076.png, 0: 458322, 10: 13586, 40: 348, 70: 1308, 100: 36, BAD: 0
P1020077.png, 0: 459340, 10: 11437, 40: 451, 70: 2268, 100: 104, BAD: 0
P1020078.png, 0: 459097, 10: 11760, 40: 380, 70: 2297, 100: 66, BAD: 0
P1020079.png, 0: 468254, 10: 3838, 40: 269, 70: 1186, 100: 53, BAD: 0
P1020080.png, 0: 469210, 10: 2701, 40: 296, 70: 1348, 100: 45, BAD: 0
P1020081.png, 0: 465073, 10: 6065, 40: 288, 70: 2118, 100: 56, BAD: 0
P1020082.png, 0: 464379, 10: 6376, 40: 434, 70: 2355, 100: 56, BAD: 0
P1020083.png, 0: 459362, 10: 10380, 40: 580, 70: 3206, 100: 72, BAD: 0
P1020084.png, 0: 455451, 10: 14747, 40: 558, 70: 2754, 100: 90, BAD: 0
P1020085.png, 0: 464822, 10: 6127, 40: 436, 70: 2126, 100: 89, BAD: 0
P1020086.png, 0: 465382, 10: 4615, 40: 588, 70: 2929, 100: 86, BAD: 0
P1020087.png, 0: 464710, 10: 6370, 40: 169, 70: 2258, 100: 93, BAD: 0
P1020088.png, 0: 466585, 10: 4449, 40: 0, 70: 2488, 100: 78, BAD: 0
P1020089.png, 0: 466246, 10: 4353, 40: 419, 70: 2502, 100: 80, BAD: 0
P1020090.png, 0: 465710, 10: 4931, 40: 383, 70: 2534, 100: 42, BAD: 0
P1020091.png, 0: 460893, 10: 10249, 40: 664, 70: 1722, 100: 72, BAD: 0
P1020092.png, 0: 459747, 10: 10925, 40: 552, 70: 2303, 100: 73, BAD: 0
P1020093.png, 0: 466737, 10: 4576, 40: 278, 70: 1958, 100: 51, BAD: 0
P1020095.png, 0: 465370, 10: 5725, 40: 337, 70: 2103, 100: 65, BAD: 0
P1020096.png, 0: 465577, 10: 5278, 40: 338, 70: 2353, 100: 54, BAD: 0
P1020097.png, 0: 460146, 10: 10853, 40: 778, 70: 1764, 100: 59, BAD: 0
P1020098.png, 0: 461327, 10: 9383, 40: 764, 70: 2053, 100: 73, BAD: 0
P1020099.png, 0: 455540, 10: 12938, 40: 791, 70: 4225, 100: 106, BAD: 0
P1020100.png, 0: 460150, 10: 9331, 40: 528, 70: 3530, 100: 61, BAD: 0
P1020101.png, 0: 463599, 10: 6152, 40: 758, 70: 3002, 100: 89, BAD: 0
P1020102.png, 0: 458772, 10: 10578, 40: 478, 70: 3654, 100: 118, BAD: 0

Still having the nan result problem.

More, as mentioned in Problem in training unet - #21 by Morganh:

  1. Please convert the mask images to grayscale images. After checking the public dataset you mentioned, the pixel value is either 0 or 128. Please map 128 to 1.
  2. For binary segmentation, the label_id should be 0 and 1. BTW, if there are 4 classes, the label_id values should be in 0-3.
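The remap from original pixel values to contiguous label_ids can be sketched as follows (a hedged Pillow/NumPy example, not TAO tooling; the directory arguments and the LUT contents are assumptions based on this thread's classes):

```python
# Remap mask pixel values {0, 10, 40, 70, 100} to contiguous
# label_ids {0, 1, 2, 3, 4} as required by the spec.
import glob
import os

import numpy as np
from PIL import Image

# Original pixel value -> contiguous label_id.
LUT = {0: 0, 10: 1, 40: 2, 70: 3, 100: 4}

def remap_masks(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    # A 256-entry lookup table makes the remap a single indexing operation.
    table = np.zeros(256, dtype=np.uint8)
    for old, new in LUT.items():
        table[old] = new
    for path in glob.glob(os.path.join(src_dir, "*.png")):
        mask = np.array(Image.open(path).convert("L"))  # force single channel
        Image.fromarray(table[mask]).save(os.path.join(dst_dir, os.path.basename(path)))
```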

Thanks!

The mask image is created as grayscale (single channel) as follows:

Mat red (resized.rows, resized.cols, CV_8UC1);  

Also converted all labels to be contiguous 0, 1, 2, 3, and 4:

P1020007.png, 0: 448962, 1: 21025, 2: 496, 3: 3059, 4: 58, BAD: 0
P1020008.png, 0: 448504, 1: 21388, 2: 654, 3: 2952, 4: 102, BAD: 0
P1020009.png, 0: 439044, 1: 30354, 2: 845, 3: 3234, 4: 123, BAD: 0
P1020010.png, 0: 439767, 1: 28556, 2: 608, 3: 4576, 4: 93, BAD: 0
P1020011.png, 0: 452752, 1: 17341, 2: 447, 3: 2951, 4: 109, BAD: 0
P1020012.png, 0: 452612, 1: 17946, 2: 537, 3: 2417, 4: 88, BAD: 0
P1020013.png, 0: 457016, 1: 13876, 2: 329, 3: 2317, 4: 62, BAD: 0
P1020014.png, 0: 456702, 1: 13838, 2: 483, 3: 2539, 4: 38, BAD: 0
P1020015.png, 0: 461810, 1: 9648, 2: 297, 3: 1797, 4: 48, BAD: 0
P1020016.png, 0: 457818, 1: 13526, 2: 409, 3: 1790, 4: 57, BAD: 0
P1020017.png, 0: 443266, 1: 25413, 2: 687, 3: 4163, 4: 71, BAD: 0
P1020018.png, 0: 444648, 1: 24016, 2: 575, 3: 4292, 4: 69, BAD: 0
P1020019.png, 0: 453011, 1: 17776, 2: 0, 3: 2759, 4: 54, BAD: 0
P1020020.png, 0: 455757, 1: 15088, 2: 0, 3: 2681, 4: 74, BAD: 0
P1020021.png, 0: 463272, 1: 8716, 2: 0, 3: 1580, 4: 32, BAD: 0
P1020022.png, 0: 460001, 1: 11711, 2: 127, 3: 1740, 4: 21, BAD: 0
P1020023.png, 0: 458486, 1: 11810, 2: 515, 3: 2730, 4: 59, BAD: 0
P1020024.png, 0: 463211, 1: 7981, 2: 420, 3: 1949, 4: 39, BAD: 0
P1020025.png, 0: 450812, 1: 19266, 2: 494, 3: 2949, 4: 79, BAD: 0
P1020026.png, 0: 443934, 1: 26056, 2: 513, 3: 3055, 4: 42, BAD: 0
P1020027.png, 0: 447317, 1: 20554, 2: 246, 3: 5308, 4: 175, BAD: 0
P1020028.png, 0: 452602, 1: 15976, 2: 819, 3: 4047, 4: 156, BAD: 0
P1020029.png, 0: 458531, 1: 10047, 2: 531, 3: 4417, 4: 74, BAD: 0
P1020030.png, 0: 458589, 1: 10126, 2: 298, 3: 4478, 4: 109, BAD: 0
P1020031.png, 0: 462452, 1: 8832, 2: 289, 3: 1965, 4: 62, BAD: 0
P1020032.png, 0: 457226, 1: 13178, 2: 680, 3: 2456, 4: 60, BAD: 0
P1020033.png, 0: 459907, 1: 10693, 2: 357, 3: 2589, 4: 54, BAD: 0
P1020034.png, 0: 458065, 1: 12243, 2: 510, 3: 2727, 4: 55, BAD: 0
P1020035.png, 0: 457595, 1: 12297, 2: 270, 3: 3394, 4: 44, BAD: 0
P1020036.png, 0: 458862, 1: 10877, 2: 265, 3: 3533, 4: 63, BAD: 0
P1020037.png, 0: 465058, 1: 6407, 2: 242, 3: 1789, 4: 104, BAD: 0
P1020038.png, 0: 465046, 1: 6464, 2: 497, 3: 1521, 4: 72, BAD: 0
P1020039.png, 0: 456134, 1: 13809, 2: 218, 3: 3355, 4: 84, BAD: 0
P1020040.png, 0: 451252, 1: 18191, 2: 515, 3: 3540, 4: 102, BAD: 0
P1020041.png, 0: 460170, 1: 11775, 2: 505, 3: 1100, 4: 50, BAD: 0
P1020042.png, 0: 463283, 1: 9159, 2: 24, 3: 1094, 4: 40, BAD: 0
P1020043.png, 0: 449870, 1: 20105, 2: 530, 3: 3035, 4: 60, BAD: 0
P1020044.png, 0: 444420, 1: 25455, 2: 604, 3: 3019, 4: 102, BAD: 0
P1020045.png, 0: 422175, 1: 46492, 2: 1131, 3: 3556, 4: 246, BAD: 0
P1020046.png, 0: 413711, 1: 52179, 2: 2345, 3: 5136, 4: 229, BAD: 0
P1020047.png, 0: 462088, 1: 10144, 2: 469, 3: 853, 4: 46, BAD: 0
P1020048.png, 0: 462609, 1: 10017, 2: 222, 3: 694, 4: 58, BAD: 0
P1020049.png, 0: 464067, 1: 6465, 2: 161, 3: 2853, 4: 54, BAD: 0
P1020050.png, 0: 466113, 1: 4973, 2: 142, 3: 2320, 4: 52, BAD: 0
P1020051.png, 0: 467099, 1: 5262, 2: 79, 3: 1131, 4: 29, BAD: 0
P1020052.png, 0: 466447, 1: 5854, 2: 55, 3: 1220, 4: 24, BAD: 0
P1020053.png, 0: 458785, 1: 12741, 2: 490, 3: 1532, 4: 52, BAD: 0
P1020054.png, 0: 449266, 1: 21846, 2: 704, 3: 1692, 4: 92, BAD: 0
P1020055.png, 0: 461822, 1: 8958, 2: 457, 3: 2313, 4: 50, BAD: 0
P1020056.png, 0: 461882, 1: 9371, 2: 300, 3: 1992, 4: 55, BAD: 0
P1020057.png, 0: 456263, 1: 14286, 2: 619, 3: 2346, 4: 86, BAD: 0
P1020058.png, 0: 451747, 1: 18317, 2: 561, 3: 2868, 4: 107, BAD: 0
P1020059.png, 0: 464750, 1: 7897, 2: 255, 3: 660, 4: 38, BAD: 0
P1020060.png, 0: 460274, 1: 12100, 2: 330, 3: 860, 4: 36, BAD: 0
P1020061.png, 0: 446557, 1: 24050, 2: 504, 3: 2424, 4: 65, BAD: 0
P1020062.png, 0: 446850, 1: 23363, 2: 934, 3: 2360, 4: 93, BAD: 0
P1020063.png, 0: 451786, 1: 19323, 2: 0, 3: 2428, 4: 63, BAD: 0
P1020064.png, 0: 451899, 1: 19199, 2: 62, 3: 2371, 4: 69, BAD: 0
P1020065.png, 0: 446823, 1: 25111, 2: 0, 3: 1616, 4: 50, BAD: 0
P1020066.png, 0: 441813, 1: 29572, 2: 494, 3: 1689, 4: 32, BAD: 0
P1020067.png, 0: 461030, 1: 10061, 2: 169, 3: 2295, 4: 45, BAD: 0
P1020068.png, 0: 462967, 1: 7781, 2: 377, 3: 2417, 4: 58, BAD: 0
P1020069.png, 0: 462920, 1: 7764, 2: 419, 3: 2407, 4: 90, BAD: 0
P1020070.png, 0: 463592, 1: 6971, 2: 429, 3: 2496, 4: 112, BAD: 0
P1020071.png, 0: 468470, 1: 2740, 2: 255, 3: 2057, 4: 78, BAD: 0
P1020072.png, 0: 467231, 1: 3925, 2: 212, 3: 2144, 4: 88, BAD: 0
P1020073.png, 0: 463645, 1: 8006, 2: 112, 3: 1766, 4: 71, BAD: 0
P1020074.png, 0: 459467, 1: 11703, 2: 302, 3: 2042, 4: 86, BAD: 0
P1020075.png, 0: 460108, 1: 12381, 2: 97, 3: 973, 4: 41, BAD: 0
P1020076.png, 0: 458322, 1: 13586, 2: 348, 3: 1308, 4: 36, BAD: 0
P1020077.png, 0: 459340, 1: 11437, 2: 451, 3: 2268, 4: 104, BAD: 0
P1020078.png, 0: 459097, 1: 11760, 2: 380, 3: 2297, 4: 66, BAD: 0
P1020079.png, 0: 468254, 1: 3838, 2: 269, 3: 1186, 4: 53, BAD: 0
P1020080.png, 0: 469210, 1: 2701, 2: 296, 3: 1348, 4: 45, BAD: 0
P1020081.png, 0: 465073, 1: 6065, 2: 288, 3: 2118, 4: 56, BAD: 0
P1020082.png, 0: 464379, 1: 6376, 2: 434, 3: 2355, 4: 56, BAD: 0
P1020083.png, 0: 459362, 1: 10380, 2: 580, 3: 3206, 4: 72, BAD: 0
P1020084.png, 0: 455451, 1: 14747, 2: 558, 3: 2754, 4: 90, BAD: 0
P1020085.png, 0: 464822, 1: 6127, 2: 436, 3: 2126, 4: 89, BAD: 0
P1020086.png, 0: 465382, 1: 4615, 2: 588, 3: 2929, 4: 86, BAD: 0
P1020087.png, 0: 464710, 1: 6370, 2: 169, 3: 2258, 4: 93, BAD: 0
P1020088.png, 0: 466585, 1: 4449, 2: 0, 3: 2488, 4: 78, BAD: 0
P1020089.png, 0: 466246, 1: 4353, 2: 419, 3: 2502, 4: 80, BAD: 0
P1020090.png, 0: 465710, 1: 4931, 2: 383, 3: 2534, 4: 42, BAD: 0
P1020091.png, 0: 460893, 1: 10249, 2: 664, 3: 1722, 4: 72, BAD: 0
P1020092.png, 0: 459747, 1: 10925, 2: 552, 3: 2303, 4: 73, BAD: 0
P1020093.png, 0: 466737, 1: 4576, 2: 278, 3: 1958, 4: 51, BAD: 0
P1020095.png, 0: 465370, 1: 5725, 2: 337, 3: 2103, 4: 65, BAD: 0
P1020096.png, 0: 465577, 1: 5278, 2: 338, 3: 2353, 4: 54, BAD: 0
P1020097.png, 0: 460146, 1: 10853, 2: 778, 3: 1764, 4: 59, BAD: 0
P1020098.png, 0: 461327, 1: 9383, 2: 764, 3: 2053, 4: 73, BAD: 0
P1020099.png, 0: 455540, 1: 12938, 2: 791, 3: 4225, 4: 106, BAD: 0
P1020100.png, 0: 460150, 1: 9331, 2: 528, 3: 3530, 4: 61, BAD: 0
P1020101.png, 0: 463599, 1: 6152, 2: 758, 3: 3002, 4: 89, BAD: 0
P1020102.png, 0: 458772, 1: 10578, 2: 478, 3: 3654, 4: 118, BAD: 0

visualize_images displays a proper and correct overlay.

And this is the new specs for the classes:

data_class_config {
  target_classes {
    name: "Background"
    mapping_class: "Background"
    label_id: 0
  }
  target_classes {
    name: "L10"
    mapping_class: "L10"
    label_id: 1
  }
  target_classes {
    name: "N40"
    mapping_class: "N40"
    label_id: 2
  }
  target_classes {
    name: "S70"
    mapping_class: "S70"
    label_id: 3
  }
  target_classes {
    name: "E100"
    mapping_class: "E100"
    label_id: 4
  }
}

Also converted all images and masks to 320 × 320.
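One caveat when resizing: masks should be resampled with nearest-neighbor interpolation, since bilinear or bicubic filters blend neighboring labels into new, invalid pixel values. A minimal Pillow sketch (the function name and paths are illustrative assumptions):

```python
# Resize an image/mask pair to 320x320. The image may use a smooth filter,
# but the mask must use NEAREST so no new label values are interpolated in.
from PIL import Image

def resize_pair(image_path, mask_path, size=(320, 320)):
    image = Image.open(image_path).resize(size, Image.BILINEAR)
    mask = Image.open(mask_path).resize(size, Image.NEAREST)
    return image, mask
```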

And still getting a nan result

UPDATE

I decided to go back to what works and take small steps from there. I went back to a working version of the original, unmodified unet_isbi notebook.

Then, I made my dataset match the specs of the ISBI dataset: all images are grayscale 512 × 512, and the masks have label value 0 for background and 255 for foreground!

This experiment gave me excellent results; everything worked! But why? As you explained, shouldn't the labels be 0 and 1, as described in the spec?

data_class_config {
  target_classes {
    name: "foreground"
    mapping_class: "foreground"
    label_id: 0
  }
  target_classes {
    name: "background"
    mapping_class: "background"
    label_id: 1    # <<<------- Actual value in the mask is 255!
  }
}

AND, when I change the masks to 0 for background and 1 for foreground, the training fails with nan evaluations.

The original masks for the ISBI dataset are 0 and 255, despite the labels being defined as 0 and 1 in the original example. If I modify those masks to 0 and 1, I get a bad nan evaluation.

I have no idea what to do to go from a 0, 255 binary segmentation to a 0, 1, 2, 3, 4 multiclass segmentation. Please help!

I attach one of the original ISBI masks. Here the values are 0 or 255!

Because it is a grayscale image in the notebook; its "input_image_type" is not color.
Only in the grayscale case may the mask image have 0 and 255; the TAO pipeline will take care of it accordingly, and mapping is not required.
For input_image_type color, if the mask has any value except 0 and 255, we need to modify it.
The label_id should always start from 0 and go up in increasing order.
And you should convert the mask image to a single-channel image where every pixel has the value of a label_id.
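Those two requirements (single-channel mask, pixel values drawn only from the configured label_ids) can be verified up front. A sketch assuming Pillow/NumPy, not part of the TAO tooling:

```python
# Verify a mask is single-channel and only contains configured label_ids.
import numpy as np
from PIL import Image

def check_mask(path, label_ids):
    mask = Image.open(path)
    # Pillow mode "L" is an 8-bit single-channel (grayscale) image.
    assert mask.mode == "L", f"{path}: expected single-channel, got mode {mask.mode}"
    values = set(np.unique(np.array(mask)).tolist())
    extra = values - set(label_ids)
    assert not extra, f"{path}: unexpected pixel values {sorted(extra)}"
```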

Right now I am trying to simplify by using the original unet_isbi notebook all in grayscale, but with my images converted to grayscale 512 × 512. It worked well with two classes (binary segmentation) with exactly the same original spec as for ISBI, which means the masks were marked 0 and 255. Running the same model with mask values 0 and 1 doesn't work!

You can try this yourself: replace the pixels valued 255 in the ISBI masks with 1, then train and evaluate, and you'll get nan.

So my first question is how to run the original ISBI UNet with values 0 and 1.

Next, how do I extend the model to three classes? For this, I have masks with all pixels in {0, 1, 2}, I changed the loss function to cross_entropy and added the extra class, and then the training fails…

Is there a unet multiclass segmentation example that works?

random_seed: 42
model_config {
  model_input_width: 320
  model_input_height: 320
  model_input_channels: 1
  num_layers: 18
  all_projections: true
  arch: "resnet"
  use_batch_norm: False
  training_precision {
    backend_floatx: FLOAT32
  }
}

training_config {
  batch_size: 3
  epochs: 50
  log_summary_steps: 10
  checkpoint_interval: 1
  loss: "cross_entropy"
  learning_rate:0.0001
  regularizer {
    type: L2
    weight: 2e-5
  }
  optimizer {
    adam {
      epsilon: 9.99999993923e-09
      beta1: 0.899999976158
      beta2: 0.999000012875
    }
  }
}

dataset_config {
  dataset: "custom"
  augment: False
  augmentation_config {
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.5
      crop_and_resize_prob: 0.5
    }
    brightness_augmentation {
      delta: 0.2
    }
  }
  input_image_type: "grayscale"
  train_images_path: "/workspace/tao-experiments/data/isbi/images/train"
  train_masks_path: "/workspace/tao-experiments/data/isbi/masks/train"

  val_images_path: "/workspace/tao-experiments/data/isbi/images/val"
  val_masks_path: "/workspace/tao-experiments/data/isbi/masks/val"

  test_images_path: "/workspace/tao-experiments/data/isbi/images/test"

  data_class_config {
    target_classes {
      name: "foreground"
      mapping_class: "foreground"
      label_id: 0
    }
    target_classes {
      name: "Leaf"
      mapping_class: "Leaf"
      label_id: 1
    }
    target_classes {
      name: "Stem"
      mapping_class: "Stem"
      label_id: 2
    }
  }
}

And here is an example of a mask…

As mentioned above, for the case of grayscale, mapping is not required.

Yes, please refer to another user’s topic Training multi-class UNet does not converge - #32 by laurim

That example didn’t work for me.

Here is the current situation:

a) Working well: UNet binary segmentation works with my dataset of grayscale images, with masks whose pixels are all in {0, 255}.

b) Next step (not working): binary segmentation with a color image dataset.

I've used new, single-channel masks where ALL pixels are in {0, 1} for the two classes (foreground and background), plus the following changes to the spec file:

model_input_channels: 3 

and

input_image_type: "color"

The complete specs file is: unet_train_resnet_unet_isbi.txt (1.4 KB)

With a nan result on evaluate:

{ 'foreground': {'precision': 1.0, 'Recall': 1.0, 'F1 Score': 1.0, 'iou': 1.0},
  'background': {'precision': nan, 'Recall': nan, 'F1 Score': nan, 'iou': nan} }

All files are PNG, 512 × 512.

Thanks!!

May I know if these are two completely different datasets? One is a grayscale dataset and the other is a color image dataset?

More, for your experiment 2 (training with the color image dataset), please try the parameters below.

  • loss: “cross_entropy”
  • weight: 2e-06
  • crop_and_resize_prob : 0.01

Refer to Problems encountered in training unet and inference unet - #27 by Morganh
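For clarity, those three suggestions would land in the spec file roughly as in the fragments below (illustrative only; all other fields stay as they are):

```
training_config {
  loss: "cross_entropy"
  regularizer {
    type: L2
    weight: 2e-06
  }
}

dataset_config {
  augmentation_config {
    spatial_augmentation {
      crop_and_resize_prob: 0.01
    }
  }
}
```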

May I know if these are two completely different datasets? One is a grayscale dataset and the other is a color image dataset?

Yes, it's two datasets. The grayscale one works well with masks whose pixels are all in {0, 255} (but not with masks whose pixels are all in {0, 1}). Now I'm trying to get to multiclass segmentation with color images, but I'm building up in baby steps, so before doing multiclass I'm trying a binary color segmentation.

Making that change results in a bad evaluation with:

/usr/local/lib/python3.6/dist-packages/iva/unet/scripts/evaluate.py:80: RuntimeWarning: invalid value encountered in true_divide
/usr/local/lib/python3.6/dist-packages/iva/unet/scripts/evaluate.py:81: RuntimeWarning: invalid value encountered in true_divide
/usr/local/lib/python3.6/dist-packages/iva/unet/scripts/evaluate.py:82: RuntimeWarning: invalid value encountered in true_divide

Here is the spec file:
unet_train_resnet_unet_isbi.txt (1.4 KB)

Is it possible to share some of your color images dataset?

Your last recommendation of hyperparameters didn't work for binary semantic segmentation, but it did work for multiclass semantic segmentation with my images, which is what I wanted.

Thank you very much for all your support.

By the way, I am curious to know, for the future, whether there is an option for faster support in solving these kinds of issues, even if it's paid. Does such a thing exist?

Thanks for your update and suggestion. I will sync with the internal team on improvements to make this easier for end users.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.