As machine learning is applied to increasingly sensitive tasks, and to noisier and noisier data, it has become important that the algorithms we develop for ML are robust to potentially worst-case noise. In this class, we will survey a number of recent developments in the study of robust machine learning, from both theoretical and empirical perspectives, covering a tentative set of related topics, both theoretical and applied.
Our goal, though we will often fall short of it, is to devise theoretically sound algorithms for these tasks that transfer well to practice.
The intended audience for this class is CS graduate students in Theoretical Computer Science and/or Machine Learning, who are interested in doing research in this area. However, interested undergraduates and students from other departments are welcome to attend as well. The coursework will be light and consist of some short problem sets as well as a final project.
We will assume mathematical maturity and comfort with algorithms, probability, and linear algebra. Background in machine learning is helpful but not required.