Monday, February 11, 2019 in Klaus 1116
Schedule:
9:30am Refreshments
10:00 - 11:00 Distinguished Lecture by Éva Tardos (Cornell)
Learning and Efficiency of Outcomes in Games
11:00 - 11:30 Tuo Zhao (Georgia Tech ISyE)
Towards Understanding First Order Algorithms for Nonconvex Optimization in Machine Learning
11:30 - 12:00 Galyna Livshyts (Georgia Tech Math)
A tight net with respect to a random matrix norm and applications to estimating singular values
Followed by lunch in the Klaus Atrium
--------------------------
Keynote Speaker
Éva Tardos, Cornell University
Title: Learning and Efficiency of Outcomes in Games
Abstract: Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have studied Nash equilibria of games and developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games, including traffic routing as well as online auctions. In this talk we will focus on games where players use a form of learning that helps them adapt to the environment. We ask whether the quantitative guarantees obtained for Nash equilibria extend to such out-of-equilibrium game play, or, even more broadly, to settings where the game or the population of players is dynamically changing and participants have to adapt to the dynamic environment.
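As a concrete illustration of quantifying the cost of selfish behavior (Pigou's classical routing example, included here as background rather than taken from the talk), consider one unit of traffic routed selfishly over two parallel links with latencies c_1(x) = 1 and c_2(x) = x:

```latex
% Pigou's example (textbook background illustration):
% one unit of traffic, two parallel links from s to t,
% latencies c_1(x) = 1 (constant) and c_2(x) = x (load-dependent).
%
% Equilibrium: every user prefers link 2, since its latency never
% exceeds 1; all traffic takes it, for total cost 1 * c_2(1) = 1.
% Optimum: routing x on link 2 minimizes (1 - x) * 1 + x * x,
% attained at x = 1/2 with total cost 3/4.
\[
\mathrm{PoA}
= \frac{\text{cost of the worst Nash equilibrium}}{\text{optimal cost}}
= \frac{1}{3/4}
= \frac{4}{3}.
\]
% This 4/3 is in fact the worst case over all routing networks with
% affine latencies (Roughgarden and Tardos).
```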
Bio: Éva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University, where she was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest and joined the faculty at Cornell in 1989. Tardos's research interests are in algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms and for quantifying the efficiency of selfish routing. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and is an external member of the Hungarian Academy of Sciences. She is the recipient of a number of fellowships and awards, including the Packard Fellowship, the Gödel Prize, the Dantzig Prize, the Fulkerson Prize, the EATCS Award, and the IEEE von Neumann Medal. She is Editor-in-Chief of the Journal of the ACM, has been Editor-in-Chief of the SIAM Journal on Computing, and has been an editor of several other journals, including Combinatorica; she has served as a program committee member for many conferences, and was program committee chair for the ACM-SIAM Symposium on Discrete Algorithms (1996), as well as FOCS'05 and EC'13.
--------------------------
Tuo Zhao, Georgia Tech ISyE
Title: Towards Understanding First Order Algorithms for Nonconvex Optimization in Machine Learning
Abstract: Stochastic gradient descent (SGD)-type algorithms have been widely applied to many non-convex optimization problems in machine learning, e.g., training deep neural networks, variational Bayesian inference, and collaborative filtering. Due to current technical limitations, however, establishing convergence properties of SGD for these highly complicated practical non-convex problems is generally infeasible. We therefore propose to analyze the behavior of SGD-type algorithms through two simpler but non-trivial non-convex problems: (1) Streaming Principal Component Analysis and (2) Training Non-overlapping Two-layer Convolutional Neural Networks. Specifically, we prove that for both examples, SGD attains a sublinear rate of convergence to the global optimum with high probability. Our theory not only helps us better understand SGD, but also provides new insights into more complicated non-convex optimization problems in machine learning.
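For readers unfamiliar with the streaming PCA setting, the following is a minimal sketch of Oja's rule, the classical SGD-type update for this problem. This is our illustration, not the speaker's code; the data model, step size, and iteration count are arbitrary demo choices.

```python
import numpy as np

# Toy illustration of an SGD-type algorithm for streaming PCA (Oja's rule):
# maximize w^T E[x x^T] w over unit vectors w, one sample at a time.
# Illustrative sketch only; not the speaker's implementation.

rng = np.random.default_rng(0)
d = 20                                    # ambient dimension
top = rng.standard_normal(d)
top /= np.linalg.norm(top)                # planted leading principal direction

w = rng.standard_normal(d)
w /= np.linalg.norm(w)                    # random unit-norm initialization

for t in range(1, 50_001):
    # Stream one fresh sample with elevated variance along `top`.
    x = rng.standard_normal(d) + 3.0 * rng.standard_normal() * top
    eta = 1.0 / (100.0 + t)               # decaying step size
    w += eta * (x @ w) * x                # stochastic gradient step (Oja's rule)
    w /= np.linalg.norm(w)                # project back onto the unit sphere

print("alignment with true component:", abs(w @ top))  # should approach 1
```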
--------------------------
Galyna Livshyts, Georgia Tech Math
Title: A tight net with respect to a random matrix norm and applications to estimating singular values
Abstract: In this talk we construct a net on the unit sphere with good properties. We show that, with exponentially high probability, the value of |Ax| on the sphere can be approximated well using this net, where A is any random matrix with independent columns. We apply this construction to study the smallest singular value of random matrices under very mild assumptions, and obtain sharp small-ball probability estimates.
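As background on the net method (a standard textbook bound, not the refined construction of the talk), recall that an epsilon-net of the unit sphere already suffices to control the operator norm:

```latex
% Standard epsilon-net bound (textbook background; the talk constructs a
% sharper net adapted to the random matrix norm itself). For any matrix A
% and any epsilon-net N of the unit sphere S^{n-1}, with 0 < epsilon < 1:
\[
\|A\| \;\le\; \frac{1}{1-\varepsilon}\,\max_{x\in\mathcal{N}}\|Ax\|,
\qquad
|\mathcal{N}| \;\le\; \Bigl(1+\frac{2}{\varepsilon}\Bigr)^{\!n}.
\]
% A union bound over the net converts pointwise tail estimates for |Ax|
% into uniform control on the whole sphere, which is the mechanism behind
% singular value estimates of the kind discussed in the abstract.
```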