Theoretical Neuroscience Day: Wed. March 15, 2017 2pm

Algorithms & Randomness Center (ARC) and

GT Neural Engineering Center present:

Theoretical Neuroscience Day

Wednesday, March 15, 2017

Marcus Nanotechnology Building 1116-1118, 2-5pm

2pm: Distinguished Lecture by Bruno Olshausen (UC Berkeley)
           Director, Redwood Center for Theoretical Neuroscience
           Helen Wills Neuroscience Institute and School of Optometry, UC Berkeley
           Title: Neural computations for active perception

3:00-3:15pm: Coffee break

3:15pm: Chris Rozell (GT ECE)
               Title: Optimal sensory coding theories for neural systems under biophysical constraints

3:45pm: Santosh Vempala (GT CS)
               Title: A Computer Science View of the Brain

4:15pm: Reception

Talk Abstracts:

Bruno Olshausen
Title: Neural computations for active perception
The human visual system does not passively view the world, but actively moves its sensor array through eye, head and body movements.  How do neural circuits in the brain control and exploit these movements in order to build a scene representation that can guide useful behavior?  Here we focus on three aspects of this problem: 1) how do we see in the presence of fixational eye movements?  2) what is the optimal spatial layout of the image sampling array for a visual system that must search via eye movements?  and 3) how is information integrated across multiple fixations in order to form a holistic scene representation that allows for visual reasoning about compositional structure?   We address these questions by optimizing model neural systems to perform active vision tasks.  These model systems in turn provide us with new ways to think about structures found in biology, and they point to new experiments that explore the neural mechanisms enabling active vision.

Chris Rozell
Title: Optimal sensory coding theories for neural systems under biophysical constraints
The natural stimuli that biological vision must use to understand the world are extremely complex. Recent advances in machine learning have shown that low-dimensional geometric models (e.g., sparsity, manifolds) can capture much of the structure in complex natural images.  I will describe our work building efficient neural coding models that optimally exploit this structure.  These results incorporate the constraints of biophysical systems and the physical world by drawing on mathematical tools such as dynamical systems, optimization, unsupervised learning, randomized dimensionality reduction, and manifold learning.  These results show that incorporating natural constraints can lead to theoretical models that account for a wide range of observed phenomena, including complex response properties of individual neurons, architectural features of the network (e.g., makeup of different cell types), and reported perceptual results from human psychophysical experiments.
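(Editor's illustrative aside, not material from the talk.) The sparsity models mentioned in the abstract are often formulated as sparse coding: represent a signal x as a sparse combination of dictionary atoms D by minimizing 0.5*||x - Dz||^2 + lam*||z||_1. A minimal sketch using the standard ISTA (iterative soft-thresholding) algorithm on a toy random dictionary is below; the function and variable names here are our own choices, not the speaker's models.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse coding via iterative soft-thresholding:
    minimize 0.5*||x - D z||^2 + lam*||z||_1 over z."""
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)          # gradient of the quadratic term
        z = z - step * grad               # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return z

# Toy example: x is built from 2 of 8 unit-norm dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)
z_true = np.zeros(8)
z_true[[1, 5]] = [1.0, -0.7]
x = D @ z_true
z_hat = ista(D, x, lam=0.05)
```

The recovered code `z_hat` reconstructs x closely while keeping most coefficients at (or shrunk toward) zero, which is the sense in which sparsity captures low-dimensional structure.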

Santosh Vempala
Title: A Computer Science View of the Brain
Computational perspectives on scientific phenomena have often proven to be remarkably insightful. Rapid advances in computational neuroscience, and the resulting plethora of data and models highlight the lack of an overarching theory for how the brain accomplishes perception and cognition (the mind). Taking the view that the answer must surely have a computational component, we present a few approachable questions for computer scientists, along with some recent work (with Christos Papadimitriou, Samantha Petti and Wolfgang Maass) on mechanisms for the formation of memories, the creation of associations between memories and the benefits of such associations.