The featureMapShapes parameter is for an input size of 300x300, but my new model's input is 800x1000. How should I set the featureMapShapes param? Should I set it by height or by width? Thank you.
In the TensorFlow version you are converting, the network resizes the input to 300x300 itself and then works with the image at that size. In config.py we discard all of the preprocessing, so we have to do the resize ourselves. As of right now it isn't really possible to use an input size other than 300x300 without changing the whole architecture of the network.
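For completeness, here is a minimal sketch of what that manual resize could look like on the caller's side. The function name and the [-1, 1] scaling are just my assumptions (that scaling is the usual SSD MobileNet convention), not something defined in config.py:

import numpy as np
from PIL import Image

def preprocess(path, target_size=(300, 300)):
    # Resize to the fixed 300x300 input the converted network expects.
    img = Image.open(path).convert('RGB').resize(target_size, Image.BILINEAR)
    # Scale pixel values to [-1, 1]; adjust if your model was trained differently.
    arr = np.asarray(img, dtype=np.float32)
    return arr / 127.5 - 1.0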
I was struggling with the same issue after training a model with input size 100 x 100.
I found out how to set the featureMapShapes parameter for this size by simply having the Object Detection API print the feature map sizes when the model is constructed:
...
    # Inside the loop that builds the SSD feature maps
    # (the feature map construction code, e.g. in feature_map_generators.py):
    feature_maps.append(feature_map)
    print('FEATURE MAP SIZE: {}'.format(feature_map.get_shape()))
  # After the loop: abort so the printed sizes can be read off.
  import sys
  sys.exit(0)
  return collections.OrderedDict(
      [(x, y) for (x, y) in zip(feature_map_keys, feature_maps)])
...
Then start the training script. After a few seconds it should print the feature map size for each SSD layer and then exit. For example, a 100 x 100 input size corresponds to the following feature map sizes: [7, 4, 2, 1, 1, 1].
Note that this still only works for square input sizes.
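If you would rather not run the training script just to read the sizes off, the printed values follow a simple pattern for SSD MobileNet: the first feature map is roughly the input size divided by 16 (rounded up), and each subsequent map halves again, never going below 1. The helper below is only my own sketch of that rule of thumb, not part of the Object Detection API, but it reproduces the [7, 4, 2, 1, 1, 1] above for 100 x 100 and the usual [19, 10, 5, 3, 2, 1] for 300 x 300:

import math

def feature_map_shapes(input_size, num_layers=6, first_stride=16):
    # First SSD feature map: input size divided by the backbone stride, rounded up.
    shapes = [math.ceil(input_size / first_stride)]
    # Each later map halves the previous one (rounded up), bottoming out at 1.
    for _ in range(num_layers - 1):
        shapes.append(max(1, math.ceil(shapes[-1] / 2)))
    return shapes

print(feature_map_shapes(100))  # [7, 4, 2, 1, 1, 1]
print(feature_map_shapes(300))  # [19, 10, 5, 3, 2, 1]

Printing the real shapes as described above is still the more reliable check, since this shortcut assumes the standard SSD MobileNet layer layout.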