Self-ensembling for visual domain adaptation

This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state-of-the-art results in semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results on a variety of benchmarks, including our winning entry in the VisDA-2017 visual domain adaptation challenge. On small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
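
Below is a minimal sketch of a mean-teacher self-ensembling training step for unsupervised domain adaptation, written in PyTorch. It is illustrative only: the network, the names (`student`, `teacher`, `ema_update`, `train_step`), and the hyperparameters are assumptions, and the paper's additional modifications (confidence thresholding, class balancing, augmentation schemes) are omitted.

```python
# Hypothetical sketch of a mean-teacher self-ensembling step for
# unsupervised domain adaptation (not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(num_classes=10):
    # Small illustrative classifier; the paper uses deeper conv nets.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

student = make_net()
teacher = make_net()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated only via EMA, never by backprop

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of student weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def train_step(src_x, src_y, tgt_x_student, tgt_x_teacher, unsup_weight=3.0):
    # Supervised cross-entropy on labelled source-domain images.
    sup_loss = F.cross_entropy(student(src_x), src_y)
    # Self-ensembling consistency loss: student and teacher see
    # differently augmented views of the same unlabelled target images.
    stu_prob = F.softmax(student(tgt_x_student), dim=1)
    tea_prob = F.softmax(teacher(tgt_x_teacher), dim=1)
    cons_loss = F.mse_loss(stu_prob, tea_prob)
    loss = sup_loss + unsup_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

# Toy usage with random tensors standing in for augmented batches.
loss = train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)),
                  torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
```

At test time the teacher network is the one typically used for prediction, since its averaged weights tend to be more stable than the student's.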

Task              | Dataset              | Model        | Metric   | Metric Value | Global Rank
------------------|----------------------|--------------|----------|--------------|------------
Domain Adaptation | MNIST-to-USPS        | Mean teacher | Accuracy | 98.26        | #3
Domain Adaptation | SVHN-to-MNIST        | Mean teacher | Accuracy | 99.18        | #1
Domain Adaptation | Synth Signs-to-GTSRB | Mean teacher | Accuracy | 98.66        | #1
Domain Adaptation | USPS-to-MNIST        | Mean teacher | Accuracy | 98.07        | #5
Domain Adaptation | VisDA2017            | Mean teacher | Accuracy | 85.4         | #16

Methods


No methods listed for this paper.