ARC Colloquium: Yuanzhi Li (CMU)

Algorithms & Randomness Center (ARC)

Yuanzhi Li (CMU)

Monday, June 29, 2020

Virtual via BlueJeans - 11:00 am


Title:  Backward Feature Correction: How can Deep Learning perform Deep Learning

Abstract:  How does a 110-layer ResNet learn a high-complexity classifier from relatively few training examples in a short training time? We present a theory toward explaining this deep learning process in terms of hierarchical learning. By hierarchical learning, we mean that the learner represents a complicated target function by decomposing it into a sequence of simpler functions, reducing sample and time complexity. This work formally analyzes how multi-layer neural networks can perform such hierarchical learning efficiently and automatically, simply by applying stochastic gradient descent (SGD) to the training objective.
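To make the notion concrete, here is a minimal, hypothetical sketch (in PyTorch; not the construction analyzed in the talk) of a hierarchical target built by composing simple quadratic functions, learned end-to-end by an ordinary multi-layer network trained with SGD. All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
# Toy illustration (not the paper's construction): a hierarchical target
# built by composing simple quadratic layer functions, learned by an
# ordinary deep MLP trained end-to-end with SGD.
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 16      # input dimension (illustrative)
n = 2048    # number of training examples (illustrative)

# Hierarchical target: each level is a simple function of the level below.
W1 = torch.randn(d, d) / d ** 0.5
W2 = torch.randn(d, d) / d ** 0.5
w3 = torch.randn(d) / d ** 0.5

def target(x):
    h1 = (x @ W1) ** 2    # simple function of the raw input
    h2 = (h1 @ W2) ** 2   # simple function of the previous level
    return h2 @ w3        # scalar output

X = torch.randn(n, d)
y = target(X)

# Deep learner: a plain MLP trained end-to-end (full batch, for simplicity).
model = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = ((model(X).squeeze(-1) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  mse {loss.item():.4f}")
```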

Moreover, we present, to the best of our knowledge, the first theoretical result showing how very deep neural networks can be sample- and time-efficient on certain hierarchical learning tasks, even when no known non-hierarchical algorithm (such as kernel methods, linear regression over feature mappings, tensor decomposition, sparse coding, or their simple combinations) is efficient. We establish a new principle called “backward feature correction”, showing how the features in the lower-level layers of the network are also improved by training the higher-level layers; we believe this is the key to understanding the deep learning process in multi-layer neural networks.
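As a rough, hypothetical illustration of this principle (again, not the paper's analysis): the toy comparison below trains the same network two ways, once with the lowest layer frozen after an initial phase, and once end-to-end so that gradients from the higher layers continue to reshape the lower-level features. Every name and hyperparameter is an illustrative assumption, and whether a gap appears, and how large, depends entirely on this toy setup.

```python
# Toy contrast (illustrative only): greedy training freezes the lower-layer
# features, while end-to-end SGD lets gradients from the higher layers keep
# correcting them -- the intuition behind "backward feature correction".
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n = 16, 2048
X = torch.randn(n, d)
W = torch.randn(d, d) / d ** 0.5
y = ((X @ W) ** 2) @ (torch.randn(d) / d ** 0.5)  # hierarchical-style target

def make_net():
    torch.manual_seed(1)  # identical initialization for a fair comparison
    return nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def train(net, params, steps):
    opt = torch.optim.SGD(params, lr=1e-2)
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X).squeeze(-1) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# (a) greedy: train everything briefly, then freeze the lowest layer and
# train only the higher layers -- lower-level features can no longer improve.
greedy = make_net()
train(greedy, greedy.parameters(), steps=1000)
for p in greedy[0].parameters():
    p.requires_grad_(False)
frozen_loss = train(greedy,
                    [p for p in greedy.parameters() if p.requires_grad],
                    steps=3000)

# (b) end-to-end: all layers keep receiving gradient for the same total
# budget, so higher-layer training can still correct lower-layer features.
joint = make_net()
joint_loss = train(joint, joint.parameters(), steps=4000)

print(f"frozen lower layer: {frozen_loss:.4f}   end-to-end: {joint_loss:.4f}")
```

In a typical run of this toy, the end-to-end model reaches a lower training loss than the frozen one, which is the qualitative behavior the principle describes; the talk's contribution is a formal analysis, not this demonstration.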

Brief Bio: Yuanzhi Li is an assistant professor in the Machine Learning Department at CMU. He received his Ph.D. from Princeton (2014-2018), advised by Sanjeev Arora, followed by a one-year postdoc at Stanford. His wife is Yandi Jin.

