I was wondering whether the semantic segmentation network (segnet / FCN-ResNet18-SUN-RGBD) for SUN RGB-D was trained only on the RGB images or on the full RGB-D data. Several papers, such as the SegNet paper (https://arxiv.org/pdf/1511.00561.pdf, section 4.2), state that segnet was trained only on the RGB part of the dataset.
Can someone confirm this? I am trying to compare segnet from the GitHub Hello AI World example against another DNN specifically on RGB-D images, to add depth-perception capabilities to the network. We are comparing a CNN against a graph neural network for depth-aware images, so for a fair comparison the model should be trained on RGB-D.
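For context on the RGB vs. RGB-D distinction above, here is a minimal sketch (with placeholder random arrays, not real SUN RGB-D data) of the input-tensor difference: an RGB-only model like the SegNet described in section 4.2 of the paper consumes a 3-channel tensor, while an RGB-D model would take a 4th depth channel.

```python
import numpy as np

# Placeholder RGB image and depth map (channel-first layout, H=480, W=640).
# Real SUN RGB-D samples would be loaded from the dataset instead.
rgb = np.random.rand(3, 480, 640).astype(np.float32)
depth = np.random.rand(1, 480, 640).astype(np.float32)

# An RGB-D model stacks depth as a 4th input channel; an RGB-only model
# (such as SegNet trained only on the RGB part) never sees this channel.
rgbd = np.concatenate([rgb, depth], axis=0)
print(rgbd.shape)  # (4, 480, 640)
```

One quick way to tell which variant a pretrained model is: inspect the channel dimension its first layer expects (3 for RGB-only, 4 for stacked RGB-D).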
Also, does the “mono-depth” network perform semantic segmentation as well?