Training Better Deep Neural Networks with Uncertainty Mining Net
Published in BMVC, 2021
In this work, we consider the problem of training deep neural networks on partially labeled data with label noise, i.e., semi-supervised learning with noisily labeled data. To the best of our knowledge, this setting has received little attention. We present Uncertainty Mining Net (UMN), a novel end-to-end deep generative framework for improving classifier performance under these data challenges. UMN uses all available data, labeled and unlabeled, to train the classifier within a semi-supervised generative framework. During training, UMN estimates the uncertainty of the labels through a novel sample-wise label uncertainty estimation scheme, allowing it to focus learning on clean data. Extensive experiments and comparisons against state-of-the-art methods on several popular benchmark datasets demonstrate that UMN reduces the impact of label noise and significantly improves classifier performance.
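To give a rough intuition for how per-sample uncertainty can focus learning on clean data, here is a minimal illustrative sketch (not the paper's actual method, and the function names and the linear weighting `1 - uncertainty` are assumptions for illustration): each sample's cross-entropy loss is down-weighted by an estimate of how likely its label is noisy.

```python
import numpy as np

def uncertainty_weighted_loss(probs, labels, uncertainty):
    """Per-sample cross-entropy, down-weighted by estimated label uncertainty.

    probs:       (N, C) predicted class probabilities
    labels:      (N,)   integer class labels (possibly noisy)
    uncertainty: (N,)   estimated probability in [0, 1] that each label is wrong
                 (hypothetical per-sample estimate; UMN derives its own scheme)
    """
    n = probs.shape[0]
    # Per-sample cross-entropy on the (possibly noisy) given labels.
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    # Trust samples with low estimated label uncertainty more.
    weights = 1.0 - uncertainty
    return float(np.sum(weights * ce) / np.sum(weights))

# Example: sample 1 is mislabeled; giving it high uncertainty shrinks its
# contribution relative to a plain unweighted mean.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 0])            # second label is noisy
uncertainty = np.array([0.0, 0.9])   # noise suspected on sample 1
weighted = uncertainty_weighted_loss(probs, labels, uncertainty)
unweighted = uncertainty_weighted_loss(probs, labels, np.zeros(2))
```

With the noisy sample heavily down-weighted, the weighted loss is dominated by the clean sample, which is the behavior the abstract describes at a high level.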