When applying powerful deep learning approaches to real-world tasks such as pixel-level annotation of urban scenes, it becomes clear that even these strong learners may fail dramatically and are still not ready for deployment in the wild. For semantic segmentation, one of the main practical challenges is finding large annotated collections to feed data-hungry networks. Synthetic images combined with adaptive learning models have been shown to help with this issue, but different synthetic sources are generally analyzed separately, without leveraging the potential growth in data amount and sample variability that could result from their combination. With our work we investigate for the first time the multi-source adaptive semantic segmentation setting, proposing best-practice rules for data and model integration. Moreover, we show how to extend an existing semantic segmentation approach to deal with multiple sources, obtaining promising results.
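The abstract mentions combining several synthetic sources to increase data amount and sample variability. The sketch below (not the authors' method) illustrates the simplest form of such data integration: pooling multiple synthetic segmentation datasets into one training set before any adaptive model is applied. GTA5Dataset and SynthiaDataset are hypothetical loader classes assumed to return (image, mask) pairs with identical preprocessing; only the PyTorch utilities are real.

```python
# Minimal multi-source pooling sketch, assuming hypothetical dataset
# classes that share the same label space and preprocessing.
from torch.utils.data import ConcatDataset, DataLoader

from my_datasets import GTA5Dataset, SynthiaDataset  # hypothetical loaders

sources = [
    GTA5Dataset(root="data/gta5"),
    SynthiaDataset(root="data/synthia"),
]

# ConcatDataset concatenates the sources; shuffling in the DataLoader
# then mixes samples from all sources within each mini-batch.
multi_source = ConcatDataset(sources)
loader = DataLoader(multi_source, batch_size=4, shuffle=True, num_workers=4)

for images, masks in loader:
    # Each batch mixes images from different synthetic sources and can be
    # fed to a segmentation network or an adaptive learning model.
    break
```

Naive pooling of this kind treats all sources as a single domain; the paper's contribution concerns how to go beyond it with adaptive models that account for the differences between sources.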
Publication details
2019, International Conference on Image Analysis and Processing, Pages 292-301 (volume: 11751)
Towards Multi-source Adaptive Semantic Segmentation (04b Conference paper in proceedings volume)
Russo Paolo, Tommasi Tatiana, Caputo Barbara