Source Data-free Unsupervised Domain Adaptation for Semantic Segmentation

Abstract

Deep learning-based semantic segmentation methods require a large number of training images with pixel-level annotations. Unsupervised domain adaptation (UDA) for semantic segmentation enables transferring knowledge learned from synthetic data (the source domain) with low-cost annotations to real images (the target domain). However, current UDA methods mostly require full access to the source domain data for feasible adaptation, which limits their application in real-world scenarios with privacy, storage, or transmission constraints. To this end, this paper identifies and addresses a more practical but challenging UDA problem for semantic segmentation, where access to the original source domain data is forbidden. In other words, only the pre-trained source model and unlabelled target domain data are available for adaptation. To tackle this problem, we propose to construct a set of virtual source domain data that mimics the source domain distribution by identifying high-confidence target domain samples, as predicted by the pre-trained source model. Then, by analyzing the data properties of cross-domain semantic segmentation tasks, we propose an uncertainty- and prior-distribution-aware domain adaptation method that aligns the virtual source domain and the target domain with both adversarial learning and self-training strategies. Extensive experiments on three cross-domain semantic segmentation datasets, together with in-depth analyses, verify the effectiveness of the proposed method.
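
The sketch below illustrates the general idea of building a virtual source set from high-confidence target predictions; it is not the paper's implementation. The function name, the per-pixel confidence threshold (0.9), and the per-image fraction threshold (0.5) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): select high-confidence target
# samples with a frozen pre-trained source model so they can stand in
# for the inaccessible source data. Thresholds are assumed, not from the paper.
import torch
import torch.nn.functional as F


@torch.no_grad()
def build_virtual_source_set(source_model, target_loader,
                             pixel_conf_thresh=0.9, image_frac_thresh=0.5,
                             device="cuda"):
    """Return (image, pseudo_label) pairs the source model is confident about."""
    source_model.eval().to(device)
    virtual_source = []
    for images, _ in target_loader:            # target labels are unavailable
        images = images.to(device)
        logits = source_model(images)          # (B, C, H, W) class scores
        probs = F.softmax(logits, dim=1)
        conf, pseudo_labels = probs.max(dim=1)              # per-pixel confidence
        confident_frac = (conf > pixel_conf_thresh).float().mean(dim=(1, 2))
        for img, lbl, frac in zip(images, pseudo_labels, confident_frac):
            if frac.item() > image_frac_thresh:             # keep "source-like" images
                virtual_source.append((img.cpu(), lbl.cpu()))
    return virtual_source
```

Images selected this way, paired with their pseudo-labels, can then play the role of labelled source data in the subsequent adversarial-learning and self-training alignment stages described above.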

Publication
In Proceedings of the 29th ACM International Conference on Multimedia
