Abstract
Learning with noisy labels has become an important research topic in computer
vision, where state-of-the-art (SOTA) methods explore: 1) prediction
disagreement with a co-teaching strategy that updates two models when they
disagree on the prediction of training samples; and 2) sample selection to
divide the training set into clean and noisy subsets based on small training
loss. However, the quick convergence of co-teaching models to select the same
clean subsets, combined with the relatively fast overfitting of noisy labels,
may induce the wrong selection of noisy-label samples as clean, leading to an
inevitable confirmation bias that damages accuracy. In this paper, we introduce
a noisy-label learning approach, called Asymmetric Co-teaching (AsyCo), that
combines a novel prediction-disagreement mechanism, which produces more
consistently divergent results between the co-teaching models, with a new
sample selection approach that does not rely on the small-loss assumption,
enabling better
robustness to confirmation bias than previous methods. More specifically, the
new prediction disagreement is achieved by training the two models with
different strategies: one model is trained with multi-class learning and the
other with multi-label learning. In addition, the new sample selection is based
on multi-view consensus, which uses the label views from the training labels
and the model predictions to divide the training set into clean and noisy
subsets for training the multi-class model, and to re-label the training
samples with multiple top-ranked labels for training the multi-label model.
Extensive experiments on synthetic
and real-world noisy-label datasets show that AsyCo improves over current SOTA
methods.
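
The sketch below, assuming a PyTorch-style setup, illustrates the asymmetric training described above: one network is optimised as a multi-class classifier with cross-entropy, the other as a multi-label classifier with binary cross-entropy over a multi-hot target built from the training label and its top-ranked predictions, and a simple agreement test stands in for the multi-view consensus selection. The names build_multi_hot and consensus_clean_mask, and the choice of k, are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_class_step(model_a, x, y, optimizer_a):
    # Update the multi-class model with standard cross-entropy on samples
    # currently selected as clean.
    logits = model_a(x)
    loss = F.cross_entropy(logits, y)
    optimizer_a.zero_grad()
    loss.backward()
    optimizer_a.step()
    return loss.item()

def build_multi_hot(logits, y, num_classes, k=2):
    # Illustrative re-labelling: keep the training label plus the top-k
    # predicted classes as a multi-hot target (an assumption, not
    # necessarily the paper's exact rule).
    target = F.one_hot(y, num_classes).float()
    topk = logits.topk(k, dim=1).indices
    target.scatter_(1, topk, 1.0)
    return target

def multi_label_step(model_b, x, y, num_classes, optimizer_b, k=2):
    # Update the multi-label model with binary cross-entropy against a
    # multi-hot target containing multiple top-ranked labels.
    logits = model_b(x)
    target = build_multi_hot(logits.detach(), y, num_classes, k)
    loss = F.binary_cross_entropy_with_logits(logits, target)
    optimizer_b.zero_grad()
    loss.backward()
    optimizer_b.step()
    return loss.item()

def consensus_clean_mask(model_a, model_b, x, y):
    # Illustrative proxy for multi-view consensus: flag a sample as clean
    # when the training label agrees with the top-1 prediction of both
    # models (the paper's actual criterion may differ).
    with torch.no_grad():
        pred_a = model_a(x).argmax(dim=1)
        pred_b = model_b(x).argmax(dim=1)
    return (pred_a == y) & (pred_b == y)
```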