Domain adaptation aims to minimize the domain gap and transfer a model trained on a labeled source domain to an unlabeled target domain. Classical methods assume that the label sets are identical across domains (closed-set domain adaptation). Recent works relax this assumption with open-set, partial, and open-partial domain adaptation. In the general scenario, however, we cannot select the proper domain adaptation method, because no prior knowledge about the target domain's label set is given.
Universal domain adaptation (UniDA) generalizes these settings. Given a labeled source domain and any related target domain, regardless of how the target label set differs from the source label set, each target sample must be classified correctly if it belongs to a class in the source label set, or marked as "unknown" otherwise.
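To make the decision rule concrete, a common baseline for the "unknown" decision is confidence thresholding on the classifier's output probabilities. The sketch below is illustrative only (the function name, threshold value, and the use of `-1` for "unknown" are assumptions, not taken from any of the papers listed here):

```python
import numpy as np

def predict_with_unknown(probs, threshold=0.5):
    """Assign each target sample its most likely source class,
    or -1 ("unknown") when the top confidence falls below the threshold."""
    preds = []
    for p in probs:
        k = int(np.argmax(p))               # most likely source class
        preds.append(k if p[k] >= threshold else -1)
    return preds

# Softmax outputs over 3 source classes for 3 target samples
probs = np.array([
    [0.90, 0.05, 0.05],   # confident: class 0
    [0.10, 0.80, 0.10],   # confident: class 1
    [0.40, 0.35, 0.25],   # ambiguous: rejected as unknown
])
print(predict_with_unknown(probs, threshold=0.5))  # -> [0, 1, -1]
```

The papers below differ mainly in how they learn this rejection criterion (e.g. one-vs-all classifiers, optimal transport, clustering consensus) rather than using a fixed threshold.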
- Universal Domain Adaptation via Compressive Attention Matching (ICCV 2023)
- Upcycling Models under Domain and Category Shift (CVPR 2023) [paper] [code]
- Subsidiary Prototype Alignment for Universal Domain Adaptation (NeurIPS 2022) [paper]
- Unified Optimal Transport Framework for Universal Domain Adaptation (NeurIPS 2022) [paper] [code]
- Geometric Anchor Correspondence Mining with Uncertainty Modeling for Universal Domain Adaptation (CVPR 2022)
- Divergence Optimization for Noisy Universal Domain Adaptation (CVPR 2021) [paper] [code]
- Active Universal Domain Adaptation (ICCV 2021) [paper]
- OVANet: One-vs-All Network for Universal Domain Adaptation (ICCV 2021) [paper] [code]
- Domain Consensus Clustering for Universal Domain Adaptation (CVPR 2021) [paper] [code]