Workshop: Talk titles/abstracts

Workshop on Algorithms and Randomness

Monday, May 14 - Thursday, May 17, 2018
Georgia Tech, Klaus Building, Room 1116

 

Speaker: Nikhil Bansal (Eindhoven)

Title: An algorithmic version of Banaszczyk's discrepancy theorem

Abstract:  In the 1990s, Banaszczyk developed a very powerful method for discrepancy problems that goes beyond the partial coloring method. His result was based on deep results from convex geometry and was non-constructive. In this talk, I will present an alternate proof of this result, which is based on elementary techniques and also gives an efficient algorithm. This yields the first efficient algorithms for several earlier results in discrepancy.
Based on joint work with Daniel Dadush, Shashwat Garg and Shachar Lovett.


Speaker:  Megan Bernstein (Georgia Tech)

Title:  Cutoff with window for the random to random shuffle

Abstract:  Cutoff is a remarkable property of many Markov chains: after a certain number of steps, they transition abruptly, within a window of smaller order, from an unmixed to a mixed distribution. Most random walks on the symmetric group, also known as card shuffles, are believed to mix with cutoff, but we are far from being able to prove this. Spectral techniques are among the few known to be strong enough to give cutoff, but diagonalization is difficult for random walks on groups where the generators of the walk are not conjugation invariant. Using a diagonalization by Dieker and Saliola, I will show an upper bound that, together with a lower bound of Subag, gives that cutoff for the random-to-random card shuffle on n cards occurs at (3/4)n log n - (1/4)n log log n ± cn steps (answering a 2001 conjecture of Diaconis). Includes joint work with Evita Nestoridi.
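For readers less familiar with the shuffle, here is a minimal Python sketch of one random-to-random step (remove a uniformly random card and reinsert it at a uniformly random position); the function name is ours:

```python
import random

def random_to_random_step(deck):
    """One step: remove a uniformly random card, reinsert it uniformly."""
    card = deck.pop(random.randrange(len(deck)))
    deck.insert(random.randrange(len(deck) + 1), card)
    return deck
```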


Speaker:  Pietro Caputo (Roma Tre)

Title: Non-reversible random walks on sparse random structures: invariant measure and mixing time

Abstract:   We analyse the convergence to stationarity for a class of random walks on random directed graphs. Examples include the configuration model with prescribed in- and out-degree sequences. The invariant measure has a nontrivial shape and is characterised via recursive distributional equations. The mixing time is given by the entropy of the equilibrium distribution divided by the average one-step entropy of the Markov chain. Moreover, the chain has a sharp cutoff behaviour around this time. The results are further extended to a large family of sparse Markov chains whose transition matrix has exchangeable random rows satisfying a sparsity assumption. Based on joint work with Charles Bordenave and Justin Salez.  
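In formulas, the mixing-time statement reads, schematically (our rendering, with \pi the invariant measure and h the average one-step entropy of the chain):

```latex
T_{\mathrm{mix}} \;\sim\; \frac{H(\pi)}{h},
\qquad H(\pi) \;=\; -\sum_{x} \pi(x)\,\log \pi(x).
```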


Speaker:  Daniel Dadush (CWI)

Title:  A Friendly Smoothed Analysis of the Simplex Method

Abstract:  Explaining the excellent practical performance of the simplex method for linear programming has been a major topic of research for over 50 years. One of the most successful frameworks for understanding the simplex method was given by Spielman and Teng (JACM `04), who developed the notion of smoothed analysis. Starting from an arbitrary linear program with d variables and n constraints, Spielman and Teng analyzed the expected runtime over random perturbations of the LP (smoothed LP), where variance sigma^2 Gaussian noise is added to the LP data. In particular, they gave a two-stage shadow vertex simplex algorithm which uses an expected O(d^55 n^86 sigma^{−30}) number of simplex pivots to solve the smoothed LP. Their analysis and runtime were substantially improved by Deshpande and Spielman (FOCS `05) and later Vershynin (SICOMP `09). The fastest current algorithm, due to Vershynin, solves the smoothed LP using an expected O(d^3 log^3 n sigma^{−4}) number of pivots (for small sigma), improving the dependence on n from polynomial to logarithmic.
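Schematically, the smoothed model perturbs every entry of the LP data with independent Gaussian noise; a hedged rendering (normalization conventions vary across the papers above):

```latex
\max\; c^{\mathsf{T}} x
\quad \text{s.t.} \quad (\bar{A} + \hat{A})\,x \;\le\; \bar{b} + \hat{b},
\qquad \hat{A}_{ij},\, \hat{b}_{i} \;\sim\; \mathcal{N}(0, \sigma^{2}) \ \text{i.i.d.}
```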

While the original proof of Spielman and Teng has now been substantially simplified, the resulting analyses are still quite long and complex and the parameter dependencies far from optimal. In this work, we make substantial progress on this front, providing an improved and simpler analysis of the shadow simplex method, where our main algorithm requires an expected O(d^2 sqrt(log n) sigma^{-2}) number of simplex pivots. We obtain our results via an improved shadow bound, key to earlier analyses as well, combined with algorithmic techniques of Borgwardt (ZOR `82) and Vershynin. As an added bonus, our analysis is completely modular, allowing us to obtain non-trivial bounds for perturbations beyond Gaussians, such as Laplace perturbations.

This is joint work with Sophie Huiberts (CWI): https://arxiv.org/abs/1711.05667.


Speaker:  Martin Dyer (Leeds)

Title:  Matchings and the switch chain

Abstract:  We will examine the problem of approximately counting all perfect matchings in hereditary classes of (nonbipartite) graphs. In particular, we consider the convergence of the switch Markov chain proposed by Diaconis, Graham and Holmes. We determine the largest hereditary class for which this chain is ergodic, and define a large new hereditary class of graphs for which it is rapidly mixing. We show further that the chain has exponential mixing time for slightly larger classes. We will also discuss the question of ergodicity of the switch chain in arbitrary graphs.
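For orientation, a minimal Python sketch of one move of a switch chain on perfect matchings, in the spirit of the Diaconis-Graham-Holmes chain (our simplified rendering; the proposal probabilities in the literature may differ):

```python
import random

def switch_step(matching, graph_edges):
    """One switch move: pick two matching edges {u,v}, {x,y}; if swapping
    endpoints yields two edges of the graph, perform the swap.

    matching: set of frozenset edges; graph_edges: set of frozenset edges.
    """
    e1, e2 = random.sample(list(matching), 2)
    u, v = tuple(e1)
    x, y = tuple(e2)
    # Two possible switches; propose one uniformly.
    if random.random() < 0.5:
        new = [frozenset((u, x)), frozenset((v, y))]
    else:
        new = [frozenset((u, y)), frozenset((v, x))]
    if all(e in graph_edges for e in new):
        matching -= {e1, e2}
        matching |= set(new)
    return matching
```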


Speaker: Alan Frieze (CMU)

Title: Random Walk Cuckoo Hashing

Abstract: We consider the expected insertion time of Cuckoo Hashing when clashes are resolved randomly. We prove O(1) expected insertion time in two cases.

(i) d choices of location and location capacity one (see the sketch below): joint with Tony Johansson.
(ii) 2 choices of location and location capacity d: joint with Samantha Petti.
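A minimal Python sketch of random-walk insertion for case (i), with d hash choices and capacity-one cells (the table representation, hash functions and step cap are our placeholders):

```python
import random

def cuckoo_insert(table, hashes, item, max_steps=10_000):
    """Random-walk cuckoo insertion: place the item in an empty choice if
    possible; otherwise evict the occupant of a uniformly random choice
    and continue with the evictee."""
    for _ in range(max_steps):
        cells = [h(item) for h in hashes]          # the d candidate cells
        empty = [c for c in cells if c not in table]
        if empty:
            table[random.choice(empty)] = item
            return True
        cell = random.choice(cells)                # random clash resolution
        table[cell], item = item, table[cell]      # evict and carry on
    return False  # give up; a full implementation would rehash
```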


Speaker: David Gamarnik (MIT)

Title:  Maximum Cut problem on sparse random hypergraphs: structural results using the interpolation method and the algorithmic implications

Abstract:  We consider a particular version of the Maximum Cut problem (to be defined in the talk) on a sparse random K-uniform hypergraph, when K is even. The goal is to compute the maximum cut value w.h.p., and ideally to find an algorithm which finds a near maximum cut value in polynomial time. Using the interpolation technique we obtain the value of the maximum cut asymptotically as the edge density increases. Specifically, the interpolation method allows us to connect the maximum cut value with the ground state of the associated Sherrington-Kirkpatrick (SK) spin glass model. Furthermore, in the case when K is at least 4, we show that the configurations nearing the ground state in the SK model exhibit the Overlap Gap Property (OGP): every two such configurations are either nearly identical or nearly orthogonal to each other. Using the interpolation method, the same is shown to hold for the original Maximum Cut problem. The presence of the OGP implies that no local algorithm defined as a factor of i.i.d. (to be defined in the talk) can succeed in constructing a nearly optimal cut. The existence of a polynomial time algorithm for constructing a nearly optimum cut on a random K-uniform hypergraph remains an open problem.
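In symbols, the OGP says, schematically (our rendering; constants depend on K), that any two near-ground-state configurations \sigma^1, \sigma^2 \in \{-1,+1\}^n satisfy

```latex
\frac{\left|\langle \sigma^{1}, \sigma^{2} \rangle\right|}{n} \;\in\; [0, \nu_{1}] \cup [\nu_{2}, 1]
\qquad \text{for some } 0 \le \nu_{1} < \nu_{2} \le 1.
```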

Joint work with Wei-Kuo Chen,  Dmitry Panchenko and  Mustazee Rahman.


Speaker:  Mark Jerrum (Queen Mary)

Title:  A polynomial-time approximation algorithm for all-terminal network reliability

Abstract:  Let G be an undirected graph in which each edge is assigned a failure probability. Suppose the edges fail independently with the given probabilities. We are interested in computing the probability that the resulting graph is connected, the so-called all-terminal reliability of G. As this is a #P-hard problem, computing an exact solution is infeasible. Karger gave an efficient approximation algorithm (an FPRAS, or Fully Polynomial Randomised Approximation Scheme) for all-terminal unreliability (the probability of the network being disconnected). However, the existence of an FPRAS for unreliability does not imply an FPRAS for reliability, as the operation of subtracting from 1 does not preserve relative error.
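For concreteness, writing p_e for the failure probability of edge e, the quantity being computed is

```latex
\mathrm{Rel}(G) \;=\; \sum_{\substack{S \subseteq E \\ (V,\,S)\ \text{connected}}}
\;\prod_{e \in S} (1 - p_e) \prod_{e \notin S} p_e .
```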

I shall present an FPRAS for all-terminal reliability or, equivalently, for evaluating the Tutte polynomial T(G;x,y) on the semi-infinite line x = 1, y ≥ 1.  This is the first good news on approximating the Tutte polynomial of unrestricted graphs in a long time.  The main contribution is to confirm a conjecture of Gorodezky and Pak (Random Structures and Algorithms, 2014) that the expected running time of the "cluster-popping" algorithm on so-called bi-directed graphs is bounded by a polynomial in the size of the input.  Surprisingly, the cluster-popping algorithm on bi-directed graphs solves a disguised version of the all-terminal reliability problem on undirected graphs, a fact that was known to researchers in network reliability in 1980 but seems to have been forgotten in the meantime.

 Joint work with Heng Guo (Edinburgh)


Speaker:  Jeff Kahn (Rutgers)

Title:  The constant in Shamir's Problem

Abstract:  Shamir's Problem (circa 1980) asks: for fixed r at least 3 and n a (large) multiple of r, how large should M be so that M random r-subsets of {1, ... , n} are likely to contain a perfect matching (that is, n/r disjoint r-sets)? About ten years ago, Johansson, Vu and I proved the natural conjecture that this holds if M > C n log n, for some large C = C(r). Here we give the asymptotically correct statement: the same behavior holds for any fixed C > 1/r.
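For orientation, the constant 1/r is what a standard coupon-collector calculation predicts: a single random r-set misses a fixed vertex with probability 1 - r/n, so the expected number of uncovered vertices after M draws is about

```latex
n \left(1 - \frac{r}{n}\right)^{M} \;\approx\; n\, e^{-rM/n},
```

which tends to 0 only when M exceeds (1/r + o(1)) n log n.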


Speaker:  Yin Tat Lee (Washington)

Title:  Sampling from polytopes

Abstract:  This is a survey talk on how to sample points from a polytope explicitly given as {x : Ax >= b}, in both theory and practice.

 Joint work with Ben Cousins and Santosh Vempala
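As one concrete example from this family of samplers, a minimal hit-and-run step for a bounded polytope {x : Ax >= b} (a standard chain in this line of work; this rendering is ours):

```python
import numpy as np

def hit_and_run_step(A, b, x, rng):
    """One hit-and-run step in {x : A x >= b}.

    Assumes the polytope is bounded and x is strictly feasible."""
    d = rng.standard_normal(A.shape[1])
    d /= np.linalg.norm(d)
    # The chord through x in direction d is {x + t d : lo <= t <= hi};
    # each constraint a_i . (x + t d) >= b_i bounds t on one side.
    Ad = A @ d
    slack = A @ x - b            # nonnegative by feasibility
    lo, hi = -np.inf, np.inf
    for ad, s in zip(Ad, slack):
        if ad > 0:
            lo = max(lo, -s / ad)
        elif ad < 0:
            hi = min(hi, -s / ad)
    # Move to a uniform point on the chord.
    return x + rng.uniform(lo, hi) * d

# Example: a few steps in the unit square {x : x >= 0, -x >= -1}.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([0.0, 0.0, -1.0, -1.0])
x = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
for _ in range(5):
    x = hit_and_run_step(A, b, x, rng)
```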


Speaker:  Eyal Lubetzky (Courant)

Title:  Dynamics for the critical 2D Potts/FK model: many questions and a few answers

Abstract:  I will survey the recent developments vs. the many problems that remain open on single-site (bond) and cluster dynamics for the Potts (FK) model on Z^2 at its critical point for different values of q and different boundary conditions.


Speaker:  Aleksander Madry (MIT)

Title:  Gradient Descent: The Mother of All Algorithms?

Abstract:  More than half a century of research in theoretical computer science has brought us a great wealth of advanced algorithmic techniques. These techniques can then be combined in a variety of ways to provide us with sophisticated, often beautifully elegant algorithms. This diversity of methods is truly stimulating and intellectually satisfying. But is it also necessary?

In this talk, I will address this question by discussing one of the most, if not the most, fundamental continuous optimization techniques: the gradient descent method. I will briefly describe how this method can be applied, sometimes in a quite non-obvious manner, to a number of classic algorithmic tasks, such as the maximum flow problem, the bipartite matching problem, the k-server problem, as well as matrix scaling and balancing. The resulting perspective provides a broad, unifying view of this diverse set of problems, one that was key to making the first progress in decades on each of these tasks.
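All of these applications are powered, at bottom, by the same iteration,

```latex
x_{t+1} \;=\; x_{t} - \eta_{t} \,\nabla f(x_{t}),
```

with the art lying in the choice of the function f, the geometry in which the gradient is taken, and the step sizes \eta_t.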


Speaker: Fabio Martinelli (Roma Tre)

Title: On the infection time of the Duarte model: the role of energy barriers.

Abstract:  In the Duarte model each vertex of \mathbf{Z}^2 can be either infected or healthy. In the bootstrap percolation version of the model, infected vertices stay infected, while a healthy vertex becomes infected if at least two of its North, South, and West neighbors are infected. In the version with kinetic constraints (KCM), each vertex, with rate one, changes its state by tossing a biased coin iff the same constraint is satisfied.
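A minimal Python sketch of one synchronous bootstrap sweep under the Duarte rule on a finite box (the periodic boundary and the row/column conventions are our simplifications):

```python
import numpy as np

def duarte_sweep(infected):
    """One synchronous update: a healthy site becomes infected if at least
    two of its North, South and West neighbours are infected.

    infected: 2D boolean array, True = infected (periodic boundary)."""
    north = np.roll(infected, -1, axis=0)   # neighbour above
    south = np.roll(infected, +1, axis=0)   # neighbour below
    west = np.roll(infected, +1, axis=1)    # neighbour to the left
    count = north.astype(int) + south.astype(int) + west.astype(int)
    return infected | (count >= 2)
```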

For both versions of the model, an important problem is to determine the divergence as q \to 0 of the infection time of the origin when the initial infection set is q-random.

For the bootstrap percolation version, the problem was solved in 2016 by B. Bollobas, H. Duminil-Copin, R. Morris, and P. Smith. For the KCM version, our recent work proves that hidden, logarithmically growing energy barriers produce a much sharper divergence. The result also confirms for the Duarte model a universality conjecture for general critical KCM on \mathbf{Z}^2 put forward by R. Morris, C. Toninelli and myself.

Joint work with R. Morris and C. Toninelli (upper bound) and L. Marêché and C. Toninelli (lower bound).


Title: Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium

Abstract: Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating “best” equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium.

In this work, we propose a framework of rounding for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium.

This framework captures most known approximation algorithms in combinatorial optimization including the celebrated semidefinite programming based algorithms for Max-Cut, Constraint-Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. 

Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for e = Θ(1/poly(n)), there is no algorithm that uses a o(n)-degree SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an e-approximate NE. For some constant e, we show a similar result for degree o(log n) SoS relaxations and list size n^{o(log n)}. Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD.

Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but in which every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player's strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis-Papadimitriou and the classical hardness of computing equilibria due to Gilboa-Zemel.

(Joint work with Pravesh K. Kothari)


Speaker:  Raghu Meka (UCLA)

Title:  Learning discrete Markov Random Fields with nearly optimal runtime and sample complexity.

Abstract:  We give an algorithm for learning the structure of an undirected graphical model that has essentially optimal sample complexity and running time.  We make no assumptions on the structure of the graphical model.  For Ising models, this subsumes and improves on all prior work.  For general t-wise MRFs, these are the first results of their kind.

Our approach is new and uses a multiplicative-weight update algorithm. Our algorithm, Sparsitron, is easy to implement (it has only one parameter) and works in the online setting. It also gives the first provably efficient solution to the problem of learning sparse Generalized Linear Models (GLMs).
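To convey the multiplicative-weights flavor only (this is not the precise Sparsitron update, whose details and single parameter are in the paper; all names and constants below are ours):

```python
import numpy as np

def mw_step(w, x, y, eta=0.25):
    """One illustrative multiplicative-weights step for a sparse predictor.

    w: nonnegative weights summing to 1; x: features in [-1, 1]^n;
    y: observed label in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # sigmoid prediction
    loss = (p - y) * x                   # per-coordinate loss estimate
    w = w * np.exp(-eta * loss)          # multiplicative update
    return w / w.sum()                   # renormalize onto the simplex
```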

Joint work with Adam Klivans.


Speaker:  Mike Molloy (Toronto)

Title:  Regular subgraphs of a random graph

Abstract:  The threshold for a random graph to have a subgraph with minimum degree k ≥ 3 (i.e. a k-core) has been well understood for decades: Pittel, Spencer and Wormald determined that it is equal to c_k n for a particular constant c_k. The threshold for a random graph to have a k-regular subgraph still eludes us. Bollobas, Kim and Verstraete conjectured that the two thresholds are different and proved this for k = 3.

We prove that the thresholds are very close: the regular subgraph threshold is at most (c_k + e^{−k/100})n for large k.

Joint work with Dieter Mitsche and Pawel Pralat


Title:  Random walks on the chambers of a hyperplane arrangement

Abstract: The Bidigare-Hanlon-Rockmore random walk on the chambers of a real hyperplane arrangement is a Markov chain that generalizes famous examples, such as the Tsetlin library and riffle shuffles. We will introduce lower bounds for the separation distance and a strong stationary time, which make it possible for the first time to study cutoff for hyperplane arrangement walks under certain conditions.  We will also discuss how the method behind the lower bound can be used to prove a uniform lower bound for the mixing time of Glauber dynamics on a monotone system, such as the Ising model, reproving a result of Ding and Peres.


Title:  Mixing in groups: From Ramanujan graphs to the product replacement algorithm

Abstract:   I will survey several recent works involving mixing for random walks on groups and regular graphs. In joint work with Eyal Lubetzky, we determined precisely the mixing time of random walk on optimal d-regular expanders (Ramanujan graphs) and showed these walks exhibit cutoff. Much less is known for the random walk on n by n invertible matrices mod 2 obtained by adding a random row to another random row; the spectral gap was found by Kassabov (2005), but the mixing time is unknown. The computational mixing time in this group was related by Rivest and Sotiraki to a cryptographic signature scheme. If we focus on one column in these matrices, we obtain a random walk on the hypercube considered by Chung and Graham (1997), who gave an upper bound of O(n log n) for the mixing time; this was refined to cutoff at time (3/2) n log n + O(n) in joint work with Anna Ben Hamou (2017). Recently (jointly with R. Tanaka and A. Zhai), we extended the latter result to any fixed set of columns, and more generally, to the product replacement algorithm on any finite group; surprisingly, the leading constant 3/2 persists.
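A minimal Python sketch of the matrix walk described above (add a uniformly random row to another, over GF(2)); row additions are elementary operations, so invertibility is preserved:

```python
import numpy as np

def row_add_step(M, rng):
    """One step of the walk on invertible n x n matrices over GF(2):
    add row i to row j for a uniformly random pair i != j."""
    n = M.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    M[j] = (M[j] + M[i]) % 2
    return M
```

Tracking a single column of M under this walk gives exactly the kind of hypercube walk mentioned above.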


Title: Sphere packings, codes, and kissing numbers via hard core models

Abstract: We prove a lower bound on the expected size of a spherical code drawn from a "hard cap" model and on the expected density of a packing drawn from the hard sphere model in high dimensions. These results allow us to improve the lower bound on the kissing number in high dimensions by a factor of d and to prove new lower bounds on the entropy of sphere packings of density \Theta(d 2^{-d}) in R^d. Joint work with Matthew Jenssen and Felix Joos.


Speaker:  Dana Randall (Georgia Tech)

Title:  Markov Chain Algorithms for Programmable Active Matter

Abstract:  We consider stochastic solutions to problems arising from programmable active matter based on emergent phenomena.  We view active matter as a collection of simple computational elements (or particles) with limited memory that self-organize to solve system-wide problems of movement, configuration, and coordination.  First, we present a solution to the compression problem, which aims to have the particle system gather as tightly together as possible through a distributed, local, and asynchronous algorithm. We assume the geometric amoebot model and show that we can achieve compression whenever we start with a particle system that is simply connected. We also present a stochastic solution to the separation problem, in which heterogeneous (colored) particles interact initially without bias, but separate into homogeneous clusters in the presence of new environmental conditions.
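At the core of such stochastic solutions is a Metropolis-style local move with a bias parameter \lambda > 1 rewarding neighbour contacts; schematically (our rendering, suppressing the amoebot-model connectivity checks),

```latex
\Pr\left[\text{accept } \sigma \to \sigma'\right] \;=\; \min\left(1,\; \lambda^{\,t(\sigma') - t(\sigma)}\right),
```

where t(\sigma) counts pairs of adjacent particles in configuration \sigma.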

Based on joint work with Sarah Cannon, Joshua Daymude, Cem Gokmen and Andrea Richa.


Speaker:  Sofya Raskhodnikova (Boston University)

Title:  Fast Algorithms for Testing Geometric Properties

Abstract: How quickly can we determine if an object satisfies some basic geometric property? For example, is the object a half-plane? Is it convex? Is it connected? If we need to answer such a question exactly, it requires at least as much time as it takes to read the object. In this talk, we will focus on approximate versions of these questions and will discuss how to solve them in time that depends only on the approximation parameter, but not the size of the input.

Specifically, an algorithm is given access to a discretized image represented by an n x n matrix of 0/1 pixel values. Another input to the algorithm is an approximation parameter, epsilon. The algorithm is required to accept images with the desired property and reject (with high probability) images that are far from having the desired property. An image is far if at least an epsilon fraction of its pixels has to be changed to get an image with the required property. For example, in this model, if the algorithm is allowed to read pixels of its choice, the half-plane property and convexity can be tested in time O(1/epsilon). If the algorithm receives access to pixels chosen uniformly and independently at random, then the half-plane property still takes O(1/epsilon) time, but for convexity the (optimal) bound on the running time is O(1/epsilon^(4/3)).
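To make the uniform-sampling model concrete, here is a hedged Python sketch of a sampling-based half-plane tester: draw on the order of 1/epsilon uniform pixels and accept iff the sampled 1-pixels and 0-pixels are linearly separable. The constant, helper names, and the brute-force separability check are ours; the actual tester and its analysis are in the paper.

```python
import itertools
import random

def separable(ones, zeros):
    """Brute force: is there a line through two sample points with all
    1-pixels on one closed side and all 0-pixels on the other?
    Weak separation; degenerate cases are glossed over in this sketch."""
    if not ones or not zeros:
        return True
    pts = ones + zeros
    for a, b in itertools.permutations(pts, 2):
        if a == b:
            continue
        side = lambda p: ((b[0] - a[0]) * (p[1] - a[1])
                          - (b[1] - a[1]) * (p[0] - a[0]))
        if all(side(p) >= 0 for p in ones) and all(side(q) <= 0 for q in zeros):
            return True
    return False

def halfplane_test(image, n, eps):
    """Accept iff the sampled pixels are consistent with some half-plane."""
    samples = [(random.randrange(n), random.randrange(n))
               for _ in range(int(10 / eps))]
    ones = [p for p in samples if image[p[0]][p[1]] == 1]
    zeros = [p for p in samples if image[p[0]][p[1]] == 0]
    return separable(ones, zeros)
```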

Based on joint work with Piotr Berman and Meiram Murzabulatov.


Speaker:  Alistair Sinclair (UC Berkeley)

Title:  Lee-Yang Theorems and the Ising Model

Abstract:  The celebrated Lee-Yang Theorem of the 1950s says that the zeros of the partition function of the ferromagnetic Ising model (viewed as a polynomial in the field parameter) lie on the unit circle in the complex plane.  In this talk I will discuss a revival of interest in this result, inspired by computational questions.  I will focus on two developments.  First, I will explain how a generalization of the Lee-Yang theorem to rule out repeated zeros leads to hardness results for computing averages of observables associated with the Ising model.  Then I will show how to combine the theorem with recent technology of Barvinok and others to obtain deterministic approximation algorithms for the partition function.
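In one line, the Barvinok-style technology mentioned at the end: if the partition function Z is zero-free in a suitable complex region, then the truncated Taylor expansion of its logarithm,

```latex
\log Z(\beta) \;\approx\; \log Z(0) \;+\; \sum_{j=1}^{m} \frac{(\log Z)^{(j)}(0)}{j!}\,\beta^{j},
\qquad m = O\!\left(\log \tfrac{n}{\varepsilon}\right),
```

already gives a (1 ± ε)-factor approximation, and the low-order coefficients can be computed deterministically (our schematic rendering; details differ in the paper).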

This is joint work with Piyush Srivastava and Jingcheng Liu.


Speaker:  Allan Sly (Princeton)

Title:  Cutoff for the Swendsen-Wang dynamics

Abstract:  We prove cutoff for the Swendsen-Wang dynamics on the lattice at sufficiently high temperatures, relating the mixing time to the infinite-volume spectral gap.  The proof combines two earlier methods for proving cutoff, the update support and information percolation, to establish cutoff for a non-local dynamics.
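For reference, a minimal Python sketch of one Swendsen-Wang update for the Ising model on a general graph (the standard two-phase description; this rendering is ours):

```python
import math
import random
from collections import defaultdict

def swendsen_wang_step(spins, edges, beta):
    """One update: open agreeing edges with prob 1 - exp(-2*beta), then
    assign a fresh uniform spin to each connected cluster.

    spins: dict vertex -> +1/-1; edges: iterable of (u, v) pairs."""
    p = 1 - math.exp(-2 * beta)
    adj = defaultdict(list)
    for u, v in edges:
        if spins[u] == spins[v] and random.random() < p:
            adj[u].append(v)
            adj[v].append(u)
    seen = set()
    for v in spins:
        if v in seen:
            continue
        new_spin = random.choice((-1, 1))
        stack = [v]
        while stack:                      # flood-fill one cluster
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            spins[w] = new_spin
            stack.extend(adj[w])
    return spins
```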

Joint work with Danny Nam


Speaker: Daniel Stefankovic (Rochester)

Title:  Inapproximability of the Independent Set Polynomial in the Complex Plane

Abstract:  The hard-core model, whose partition function is the independent set polynomial, is one of the simplest models in statistical physics. The complexity of approximating the value of the partition function of the model on max-degree-Delta graphs (and also of sampling from the Gibbs distribution) is now understood for positive values of the parameter (Weitz 2005, Sly 2009): the complexity threshold aligns with the uniqueness/non-uniqueness threshold for the existence of multiple Gibbs measures on the Delta-regular tree.
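Concretely, the partition function in question is the independent set polynomial of the graph G at activity \lambda:

```latex
Z_{G}(\lambda) \;=\; \sum_{\substack{I \subseteq V(G) \\ I\ \text{independent}}} \lambda^{|I|}.
```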

In the case of a complex parameter, the complexity of approximating the partition function is not yet understood; the complexity threshold appears to align with the existence of zeros of the partition function on trees of max-degree Delta. We show that outside of the conjectured zero-free region, the problem of approximating the value of the partition function is hard (#P-hard). Joint work with Ivona Bezakova, Andreas Galanis, and Leslie Ann Goldberg.


Speaker:  Virginia Vassilevska Williams (MIT)

Title:  Towards Tight Approximation Bounds for Graph Diameter and Eccentricities

Abstract: Among the most important graph parameters is the Diameter, the largest distance between any two vertices. No very efficient algorithms are known for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be {\em approximated}. Chechik et al.\ [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of $3/2$ in $\tilde{O}(m^{3/2})$ time. Roditty and Vassilevska W.\ [STOC 13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no $O(n^{2-\epsilon})$ time algorithm for $\epsilon>0$ can achieve an approximation factor better than $3/2$ in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than $3/2$. It was, however, completely plausible that a $3/2$-approximation is possible in linear time.

In this work we conditionally rule out such a possibility by showing that unless SETH fails no $O(m^{3/2-\epsilon})$ time algorithm can achieve an approximation factor better than $5/3$. We provide similarly tight results for other distance-related parameters, also providing new algorithms.
       
To establish our lower bounds we study the $S$-$T$ Diameter problem: given a graph and two subsets $S$ and $T$ of vertices, output the largest distance between a vertex in $S$ and a vertex in $T$. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. In this talk I will explain the lower bound constructions for $S$-$T$ Diameter and will outline how one can extend them for Diameter as well.

Joint work with Arturs Backurs, Nicole Wein, Liam Roditty and Gilad Segal. To appear in STOC 2018.


Speaker:  David Woodruff (CMU)

Title:  Relative Error Tensor Low Rank Approximation

Abstract:  We consider relative error low rank approximation of tensors with respect to the Frobenius norm: given an order-q tensor A, output a rank-k tensor B for which |A-B|_F <= (1+eps)OPT, where OPT = inf_{rank-k A'} |A-A'|_F. Despite the success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-k tensor A' achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the tensor rank, which is NP-hard. We bypass these issues via (1) bicriteria and (2) parameterized complexity solutions.

As an example, we give an algorithm which outputs a rank (k/eps)^{q-1} tensor B for which |A-B|_F <= (1+eps)OPT in nnz(A) + n*poly(k/eps) time in the real RAM model for an n x n x ... x n tensor. Here nnz(A) is the number of non-zero entries in A. For outputting a rank-k tensor, or even a bicriteria solution with rank Ck for a certain constant C > 1, we show an exp(k^{1-o(1)}) time lower bound under the Exponential Time Hypothesis. Our results also give the first relative error low rank approximations for tensors for a large number of robust error measures for which nothing was known, as well as for column, row, and tube subset selection problems, generalizing and strengthening results for CUR decompositions of matrices.

Based on joint work with Zhao Song (UT Austin) and Peilin Zhong (Columbia).


Speaker:  Yufei Zhao (MIT)

Title:  Large deviations in random graphs

Abstract:  What is the probability that the number of triangles in an Erdős–Rényi random graph exceeds its mean by a constant factor?

This problem has been a useful litmus test for concentration bound techniques. Even the order of the log-probability was considered a difficult problem until its resolution a few years ago by Chatterjee and by DeMarco and Kahn. I will discuss the problem of determining the exponential rate of the tail probability, and survey some recent developments and open problems.