Speaker: Professor Peter Bartlett
Affiliation: UC Berkeley

Abstract

Deep learning has revealed some major surprises from the perspective of statistical complexity: even without any explicit effort to control model complexity, these methods find prediction rules that give a near-perfect fit to noisy training data and yet exhibit excellent prediction performance in practice. This talk surveys recent work on methods that predict accurately in probabilistic settings despite fitting too well to training data. We give a characterization of this phenomenon in linear regression and in ridge regression, and we present a fundamentally nonlinear setting where benign overfitting occurs: a two-layer neural network trained using gradient descent on a classification problem.

References: 

  • Benign overfitting in linear regression. Bartlett, Long, Lugosi, Tsigler. PNAS 117(48):30063–30070, 2020. arXiv:1906.11300
  • Benign overfitting in ridge regression. Tsigler, Bartlett. arXiv:2009.14286
  • Deep learning: a statistical viewpoint. Bartlett, Montanari, Rakhlin. Acta Numerica 30:87–201, 2021. arXiv:2103.09177
  • Benign overfitting without linearity. Chatterji, Frei, Bartlett. arXiv:2202.05928

About Maths Colloquium

The Mathematics Colloquium is directed at students and academics working in pure mathematics, applied mathematics, and statistics.

We aim to present expository lectures that appeal to our wide audience.

Information for speakers

Maths colloquia are usually held on Mondays, from 2pm to 3pm, in various locations at St Lucia.

Presentations are 50 minutes, plus five minutes for questions and discussion.

Available facilities include:

  • computer 
  • data projector
  • chalkboard or whiteboard

To avoid technical difficulties on the day, please contact us in advance of your presentation to discuss your requirements.

Venue

Priestley Building (67)
Room: 442 and Zoom (https://uqz.zoom.us/j/81688396546)