Principal Research Scientist
CSAIL (Computer Science and Artificial Intelligence Lab)
MIT Computer Vision & Graphics Group
Investigator, Athinoula A. Martinos Imaging Center
MIT, 77 Massachusetts Avenue
Cambridge, MA 02139
Phone: 617-452-2492
After a French baccalaureate in Physics and Mathematics and a B.Sc. in Psychology (minor in Philosophy), Aude Oliva received two M.Sc. degrees, in Experimental Psychology and in Cognitive Science, and a Ph.D. from the Institut National Polytechnique de Grenoble, France. She joined the MIT faculty in the Department of Brain and Cognitive Sciences in 2004 and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in 2012. She is also affiliated with the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research and with the MIT Big Data Initiative at CSAIL.
Her research is cross-disciplinary, spanning human perception and cognition, computer vision, and cognitive neuroscience, and focuses on questions at the intersection of the three domains. Her work in Computational Perception and Cognition builds on the synergy between human and machine perception and cognition, applying it to high-level recognition problems such as understanding scenes and events, perceiving space, localizing sounds, recognizing objects, modeling attention, eye movements, and visual memory, and predicting subjective properties of images (such as image memorability). Her research integrates knowledge and tools from image processing, image statistics, computer vision, human perception, cognition, and neuroimaging (fMRI, MEG).
Her work is regularly featured in the scientific and popular press, in museums of art and science, and in textbooks on perception, cognition, computer vision, and design. She is the recipient of a National Science Foundation CAREER Award (2006) in Computational Neuroscience, an elected Fellow of the Association for Psychological Science (APS), the recipient of a 2014 Guggenheim Fellowship in Computer Science, and an Osher Fellow of the Exploratorium, San Francisco. Her research programs are funded by the National Science Foundation, the National Eye Institute, Google, Toyota, and Xerox. See her Google Scholar profile page.
My cross-disciplinary research in Computational Neuroscience, Cognitive Computing, and Computer Vision bridges theory, experiments, and applications, accelerating discovery by bringing the methods of each field to bear on problems in the others.
High-resolution, spatiotemporally resolved neuroimaging is something of a Holy Grail for neuroscience: it means we can capture when, where, and in what form information flows through the human brain during mental operations. Our team studies the fundamental neural mechanisms of human perception and cognition and develops computational models inspired by brain architecture. We are developing a state-of-the-art human brain-mapping approach that fuses magnetic resonance imaging, magnetoencephalography, and computational modeling to investigate the neural flow of perceived or imagined events. Unpacking the structure of operations such as sensory perception, memory, imagination, action, and prediction in the human brain has far-reaching implications for understanding not just typical brain function but also the maintenance, or even augmentation, of that function in the face of internal (disease or injury) and external (information overload) challenges.
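As an illustrative outline of how MEG and fMRI data might be fused (a minimal sketch based on representational similarity analysis; the array shapes, toy data, and function names are all assumptions, not the lab's actual pipeline): build a dissimilarity matrix over experimental conditions for each MEG time point and each fMRI region, then correlate the two sets of matrices to localize each moment in time to a brain region.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation
    between condition patterns (conditions x features)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle of an RDM (excluding the diagonal)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def fuse(meg, fmri):
    """Correlate each MEG time point's RDM with each fMRI region's RDM,
    giving a (time x region) fusion map.

    Assumed shapes:
      meg  : (n_times, n_conditions, n_sensors)
      fmri : (n_regions, n_conditions, n_voxels)
    """
    meg_rdms = [upper(rdm(t)) for t in meg]
    fmri_rdms = [upper(rdm(r)) for r in fmri]
    return np.array([[np.corrcoef(m, f)[0, 1] for f in fmri_rdms]
                     for m in meg_rdms])

# Toy data: 10 time points, 12 conditions, 3 candidate regions.
rng = np.random.default_rng(0)
meg = rng.standard_normal((10, 12, 64))
fmri = rng.standard_normal((3, 12, 200))
fusion_map = fuse(meg, fmri)   # shape (10, 3)
```

A peak in `fusion_map[t, r]` would suggest that the representational structure present in region `r` emerges around time `t`; real analyses would add significance testing across subjects.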
Understanding cognition at the individual level facilitates communication between natural and artificial systems, leading to improved interfaces, devices, and neuroprosthetics for both healthy and disabled people. Our work has shown that events carry an attribute of memorability: a predictive value of whether a novel event will later be remembered or forgotten. Memorability is not an inexplicable phenomenon: people tend to remember and forget the same images, faces, words, and graphs. Importantly, we are developing computational models that predict what people will remember, either as they encode an event or even before they witness it. Cognitive-level algorithms of memory will be a game changer for society, with applications ranging from accurate medical diagnostic tools to educational materials that anticipate people's needs and compensate when cognition fails.
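As a toy illustration of what a memorability predictor involves (a minimal ridge-regression sketch on synthetic features and scores; the group's actual models are far richer, and every variable and number here is an assumption): given a feature vector per image and a memorability score, such as the fraction of observers who later recognized that image, fit a model that maps features to scores for unseen images.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical setup: 200 images, each summarized by a 16-dim
# feature vector; memorability scores lie in (0, 1).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16))
true_w = rng.standard_normal(16)
y = 1.0 / (1.0 + np.exp(-(X @ true_w)))   # synthetic scores in (0, 1)

w = fit_ridge(X, y, lam=0.1)
pred = X @ w
corr = np.corrcoef(pred, y)[0, 1]         # rank agreement with truth
```

A linear model is only a stand-in here; the point is the setup: memorability is treated as a measurable, predictable attribute of the stimulus, learned from consistent human memory data.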
Inspired by strategies from human vision and cognition, we build deep learning models of object and place recognition. To this end, we are building a core of visual knowledge (e.g., the Places dataset, a large-scale resource for training deep learning models) that can be used to train artificial systems for visual understanding and common-sense tasks, such as identifying where the agent is (i.e., the place), which objects are within reach, what surprising events may occur, which actions people are performing, and what may happen next in the scene.