Progressive Modality Cooperation for Multi-Modality Domain Adaptation


Abstract

Domain adaptation aims to leverage a label-rich domain (the source domain) to help model learning in a label-scarce domain (the target domain). Most domain adaptation methods require the co-existence of source and target domain samples to reduce the distribution mismatch. However, access to the source domain samples may not always be feasible in real-world applications due to practical constraints (e.g., storage, transmission, and privacy issues). In this work, we address the source data-free unsupervised domain adaptation problem and propose a novel approach referred to as Virtual Domain Modeling for Domain Adaptation (VDM-DA), in which a virtual domain acts as a bridge between the source and target domains. Specifically, based on the pre-trained source model, we generate virtual domain samples by using an approximated Gaussian Mixture Model (GMM) in the feature space, so that the virtual domain maintains a distribution similar to that of the source domain without access to the original source data. Moreover, we design an effective distribution alignment method that reduces the divergence between the virtual domain and the target domain by gradually improving the compactness of the target domain distribution during model learning. In this way, we achieve distribution alignment between the source and target domains when training deep networks without access to the source domain data. Extensive experiments on four benchmark datasets, covering both 2D image-based and 3D point cloud-based cross-domain object recognition tasks, show that VDM-DA achieves promising performance on all datasets.
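The abstract does not spell out how a GMM can be fit to the source feature distribution without source data. The minimal sketch below illustrates one plausible reading, under loudly labeled assumptions: the L2-normalized rows of the pre-trained source classifier's weight matrix are (hypothetically) taken as per-class Gaussian means, a shared isotropic standard deviation `sigma` is assumed, and a linear-kernel MMD serves as a simple stand-in for the paper's virtual-to-target alignment objective. Names such as `build_virtual_gmm` and `mmd_linear` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def build_virtual_gmm(classifier_weights, sigma=0.1):
    """Approximate the source feature distribution with a class-wise GMM.

    `classifier_weights` is the (num_classes, feat_dim) weight matrix of the
    pre-trained source classifier; its L2-normalized rows act as class anchors
    (an assumed stand-in for the unavailable source class means). `sigma` is
    an assumed shared isotropic standard deviation for every component.
    """
    means = classifier_weights / np.linalg.norm(
        classifier_weights, axis=1, keepdims=True)
    return means, sigma


def sample_virtual_domain(means, sigma, n_per_class):
    """Draw labeled virtual-domain features from the approximated GMM."""
    num_classes, feat_dim = means.shape
    feats = np.concatenate([
        rng.normal(loc=mu, scale=sigma, size=(n_per_class, feat_dim))
        for mu in means])
    labels = np.repeat(np.arange(num_classes), n_per_class)
    return feats, labels


def mmd_linear(virtual_feats, target_feats):
    """Linear-kernel MMD between virtual and target feature batches: a
    simple proxy for aligning the target domain with the virtual domain."""
    delta = virtual_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)


# Example: a 10-class model with 256-dim features. W stands in for the
# weights of a real pre-trained source classifier.
W = rng.normal(size=(10, 256))
means, sigma = build_virtual_gmm(W)
virtual_feats, virtual_labels = sample_virtual_domain(means, sigma, n_per_class=64)
target_feats = rng.normal(size=(640, 256))  # placeholder target features
print(virtual_feats.shape, mmd_linear(virtual_feats, target_feats))
```

In this reading, minimizing the MMD term while sharpening target predictions would gradually pull the target features toward the virtual (source-like) class clusters, which matches the abstract's description of improving the compactness of the target distribution during model learning.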

Publication
IEEE Transactions on Image Processing, 30
