Principal Research Scientist, CSAIL
Investigator, MIT Computer Vision & Graphics Group
Investigator, Athinoula A. Martinos Imaging Center
Investigator, MIT-CSAIL Systems That Learn
Investigator, IBM-MIT BM3C Laboratory
Expert, National Science Foundation, CISE/IIS
MIT, 77 Massachusetts Avenue
Cambridge, MA 02139
Phone: 617 452 2492
After a French baccalaureate in Physics and Mathematics and a B.Sc. in Psychology (with a minor in Philosophy), Aude Oliva received two M.Sc. degrees, in Experimental Psychology and in Cognitive Science, and a Ph.D. from the Institut National Polytechnique de Grenoble, France. She joined the MIT faculty in the Department of Brain and Cognitive Sciences in 2004 and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in 2012. She is also affiliated with the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research at MIT, the IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C), and the MIT CSAIL initiative Systems That Learn.
Her research is cross-disciplinary, spanning human perception and cognition, computer vision, and cognitive neuroscience, and focuses on questions at the intersection of the three domains. Her work in Computational Perception and Cognition builds on the synergy between human and machine recognition and applies it to high-level recognition problems: understanding scenes and events, perceiving space, modeling attention, eye movements, and memory, and predicting subjective properties of images (such as memorability). Her research integrates knowledge and tools from image processing, image statistics, computer vision, and computer science, as well as human perception, cognition, and neuroimaging (fMRI, MEG).
With ~100 publications to date, her work is regularly featured in the scientific and popular press, in museums of art and science, and in textbooks on perception, cognition, computer vision, and design. She is the recipient of a 2006 National Science Foundation CAREER Award in Computational Neuroscience, a 2014 Guggenheim Fellowship in Computer Science, and a 2016 Vannevar Bush Faculty Fellowship in Cognitive Neuroscience. She is an elected Fellow of the Association for Psychological Science (APS) and an Osher Fellow of the Exploratorium, San Francisco. Since 2015, she has been appointed as an Expert at the National Science Foundation, Directorate for Computer & Information Science and Engineering (CISE), in the areas of computational neuroscience, brain science, and artificial intelligence. Her research programs at MIT are funded by the National Science Foundation, the National Security Science and Engineering program, the National Eye Institute, Toyota, IBM, Google, and Xerox. See her Google Scholar profile page.
My cross-disciplinary research in computational neuroscience, cognitive computing, and computer vision bridges theory, experiment, and application, accelerating discovery by bringing the methods and insights of each field to bear on problems in the others.
High-resolution, spatiotemporally resolved neuroimaging is a sort of Holy Grail for neuroscience: it lets us capture when, where, and in what form information flows through the human brain during mental operations. Our team studies the fundamental neural mechanisms of human perception and cognition and develops computational models inspired by brain architecture. We are developing a state-of-the-art human brain-mapping approach that fuses magnetic resonance imaging, magnetoencephalography, and computational modeling to investigate the neural flow of perceived or imagined events. Unpacking the structure of operations such as sensory perception, memory, imagination, action, and prediction in the human brain has far-reaching implications for understanding not just typical brain function, but also the maintenance or even augmentation of these functions in the face of internal (disease or injury) and external (information overload) challenges.
Understanding cognition at the individual level facilitates communication between natural and artificial systems, leading to improved interfaces, devices, and neuroprosthetics for both healthy and disabled people. Our work has shown that events carry an attribute of memorability: a predictive value of whether a novel event will later be remembered or forgotten. Memorability is not an inexplicable phenomenon: people tend to remember and forget the same images, faces, words, and graphs. Importantly, we are developing computational models that predict what people will remember, as they are encoding an event or even before they witness it. Cognitive-level algorithms of memory will be a game changer for society, with applications ranging from accurate medical diagnostic tools to educational materials that foresee people's needs, compensating when cognition fails.
Inspired by strategies from human vision and cognition, we build deep learning models of object and place recognition. To this end, we are building a core of visual knowledge (e.g., the Places dataset, a large-scale resource for training deep learning models) that can be used to train artificial systems for visual understanding and common-sense tasks, such as identifying where the agent is (i.e., the place), what objects are within reach, what surprising events may occur, which actions people are performing, and what may happen next in the scene.