Noise robustness in DNNs and leveraging inductive bias for learning without explicit human annotations

October 2nd, 12:00–1:00 pm in DCH 3092
Speaker: Fatih Furkan Yilmaz (STAT)

Please indicate interest, especially if you want lunch, here.
Abstract:

Classification problems today are typically solved in three steps: collecting examples along with candidate labels, obtaining clean labels from human workers, and training a large, overparameterized deep neural network on the cleanly labeled examples. The labeling step is often the most expensive one, since it requires manually reviewing every example. In this talk, we discuss skipping the labeling step entirely: we propose to train the deep neural network directly on the noisy candidate labels and to stop training early to avoid overfitting. This procedure exploits an intriguing property of large overparameterized neural networks: while they are capable of perfectly fitting the noisy data, gradient descent fits the clean labels much faster than the noisy ones, so an early-stopped model resembles one trained on the clean labels alone.

We will first review recent studies on the noise robustness of neural networks that provide theoretical and practical motivation for this property, and then present experimental results for state-of-the-art models on the widely studied CIFAR-10 classification problem. Our results show that early-stopped training of standard deep networks such as ResNet-18 on part of the Tiny Images dataset, which involves no human-labeled data and in which only about half of the labels are correct, gives significantly higher test performance than training on the clean CIFAR-10 training set, itself a human-labeled subset of Tiny Images, for the same classification problem. In addition, our results show that the noise arising from the label collection process is far less adversarial for learning than the noise produced by randomly flipping labels, which is the noise model most prevalent in works demonstrating the noise robustness of neural networks.
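To make the procedure concrete, below is a minimal sketch of training on noisy labels with early stopping, in PyTorch. Everything here is an illustrative assumption rather than the talk's actual setup: the data is synthetic stand-in tensors (not Tiny Images), the model is a small MLP standing in for ResNet-18, and the stopping rule monitors accuracy on a small clean validation set with a patience counter; the abstract does not specify how the stopping point is chosen in practice.

```python
# Sketch: fit a network to noisily labeled data, stop when accuracy on a
# small clean validation set stops improving (before noise memorization).
# Data, model, and hyperparameters are placeholders for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Stand-ins for noisily labeled training data (e.g., candidate labels from
# the collection process, roughly half of which may be wrong) and a small
# clean validation set (an assumption made for this sketch).
x_train = torch.randn(2000, 3 * 32 * 32)
y_train = torch.randint(0, 10, (2000,))
x_val = torch.randn(200, 3 * 32 * 32)
y_val = torch.randint(0, 10, (200,))

train_loader = DataLoader(TensorDataset(x_train, y_train),
                          batch_size=128, shuffle=True)

# Small overparameterized MLP as a stand-in for ResNet-18.
model = nn.Sequential(nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
                      nn.Linear(512, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

best_acc, best_state, patience, bad_epochs = -1.0, None, 5, 0
for epoch in range(100):
    model.train()
    for xb, yb in train_loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

    # Early stopping: clean-label accuracy peaks before the network
    # starts memorizing the noisy labels, so we keep the best checkpoint.
    model.eval()
    with torch.no_grad():
        acc = (model(x_val).argmax(1) == y_val).float().mean().item()
    if acc > best_acc:
        best_acc = acc
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
        bad_epochs = 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break

model.load_state_dict(best_state)  # revert to the early-stopped model
```

The key design point the sketch illustrates is that no label cleaning happens anywhere: the training loop sees only the noisy candidate labels, and the only intervention is choosing when to stop.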
