Today, science and technology are at the threshold of paradigm-shifting discoveries. However, an obstacle remains: while technology grows exponentially, our understanding of the human mind does not keep pace. We are approaching an era in which the benefits of a highly technologized society won't be fully realized unless we understand how humans encode, process, retain, predict, and imagine. To this end, we combine methods from computer science, neuroscience, and cognitive science to explain and model how perception and cognition are realized in humans and machines. Our research bridges theory, experiments, and applications, accelerating the rate at which discoveries are made by solving problems through a multi-disciplinary way of thinking.
Highlight: The Algonauts Project
The quests to understand the nature of human intelligence and to engineer more advanced forms of artificial intelligence are increasingly intertwined. The Algonauts Project brings biological and artificial intelligence researchers together on a common platform to exchange ideas and advance both fields. Our first challenge and workshop, Explaining the Human Visual Brain, will focus on building computer vision models that simulate how the brain sees and recognizes objects, a topic that has long fascinated neuroscientists and computer scientists. Challenge results will be presented at a workshop held at MIT on July 19-20, 2019.
Sponsors: NSF, MIT-IBM Watson AI Lab, MIT Quest for Intelligence
Highlight: The MIT Quest for Intelligence
Aude Oliva is the MIT Executive Director of the MIT-IBM Watson AI Lab, a new engagement model between academia and industry, and the Executive Director of the MIT Quest for Intelligence, an MIT-wide initiative that seeks to discover the foundations of human and machine intelligence and deliver transformative new technology for humankind. The Quest is funding over 100 MIT Principal Investigators and offering up to 100 research opportunities (UROPs) to undergraduate students. In her new roles, her goal is to strengthen the relationship between the science and engineering of intelligence in both academic and industry settings, and to promote interdisciplinary education in the science and technology of human and machine intelligence.
Highlight: Helping computers fill in the gaps between video frames
In a paper at ECCV 2018, Bolei Zhou and Alex Andonian present an add-on module for deep neural networks that links relations between the frames of a video to better represent the different states of an event. The Temporal Relation Network module learns how objects or agents change in a video at different moments in time (a minimal sketch of the idea follows below). See the project webpage and the MIT News story.
Funded by NSF and ONR Vannevar Bush Faculty Fellowship
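As a rough illustration, here is a minimal sketch, in PyTorch, of the pairwise-relation idea behind the module. The feature dimension, class count, hidden size, and the restriction to two-frame relations are illustrative assumptions; the published module also pools relations over multiple numbers of frames (temporal scales).

```python
import torch
import torch.nn as nn

class TwoFrameRelation(nn.Module):
    """Score every ordered pair of frame features with a small MLP,
    then sum the pair scores into a clip-level prediction."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, frames):            # frames: (batch, T, feat_dim)
        b, t, d = frames.shape
        logits = 0
        for i in range(t - 1):            # every ordered pair (i, j), i < j
            for j in range(i + 1, t):
                pair = torch.cat([frames[:, i], frames[:, j]], dim=1)
                logits = logits + self.mlp(pair)
        return logits                     # (batch, num_classes)

# Example: 8 frames of 512-d CNN features, 100 hypothetical action classes
head = TwoFrameRelation(feat_dim=512, num_classes=100)
scores = head(torch.randn(4, 8, 512))
```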
Highlight: Moments in Time: A Large-Scale Dataset for Event Understanding
We release the first version of the Moments in Time dataset: a large-scale, human-annotated collection of one million videos capturing visual and/or audible actions produced by humans, animals, objects, or nature, which together should allow the recognition of compound activities occurring at longer time scales. See the website for more information and related news: YouTube Video, MIT TechReview, EnterpriseTech
Highlight: Quantifying Interpretability of Deep Neural Networks: Seeing through the artificial box
In a computer vision paper and talk at CVPR 2017, the team proposes a general framework called Network Dissection that quantifies and compares what the individual artificial units of deep neural networks learn, offering a tool to see what visual deep networks learn and making the neural network box more transparent. See the website for more information and some related news: MIT News, TechCrunch, Quartz
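To make the idea concrete, here is a minimal sketch, in Python/NumPy, of the core measurement in the spirit of Network Dissection: threshold a unit's activation maps at a high quantile of their distribution, then compute the intersection-over-union with ground-truth concept masks. The quantile value and the simplified bookkeeping are assumptions; the paper's full procedure scores every unit against a broad dictionary of labeled visual concepts.

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Score how well one unit detects one concept.

    activations:   (N, H, W) float activation maps of one unit,
                   upsampled to the resolution of the annotations
    concept_masks: (N, H, W) boolean segmentation masks of one concept
    """
    thresh = np.quantile(activations, quantile)  # per-unit threshold
    unit_masks = activations > thresh
    intersection = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return intersection / union if union > 0 else 0.0
```

A unit can then be labeled with the concept whose IoU score is highest, and a layer's interpretability summarized by how many of its units exceed an IoU threshold.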
Highlight: Daredevil-like ability allows us to size up rooms—even when we can’t see them
In neuroscience work described in Science News and in an eNeuro article, we discover a neuromagnetic brain signature that decodes the size of the space surrounding the observer based on reverberation. To some extent, this “sonar sense” is given to all of us, as we often exercise a form of passive echolocation by unconsciously processing echoes to navigate places or localize objects. The finding is highlighted by APS (Association for Psychological Science) and the UK Daily Mail.
Highlight: Predicting Which Images are Memorable
Our paper at ICCV 2015 presents the first computational cognition model of visual memory, built using convolutional neural networks. The deep learning model predicts how memorable an image will be to a group of people. Predicting memorability is a way to estimate the utility of novel information for cognitive computing systems. The work has been featured in many media outlets, including The Atlantic, The Washington Post, NBC News, TechCrunch, Business Insider, and PetaPixel. Dataset, article, and model are available here.
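As a rough sketch of how such a predictor can be set up, the PyTorch snippet below fine-tunes an ImageNet-pretrained CNN to regress a scalar memorability score. The backbone (ResNet-18), optimizer, and loss are stand-in assumptions for illustration; the published model was trained differently.

```python
import torch
import torch.nn as nn
from torchvision import models

# Swap the classifier head for a single regression output and
# train on (image, memorability score) pairs.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)  # memorability score

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, scores):
    """images: (batch, 3, 224, 224); scores: (batch, 1) in [0, 1]."""
    optimizer.zero_grad()
    loss = criterion(model(images), scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```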
Highlight: Places Dataset and Place Challenge for Artificial Vision Systems
Our goal with Places is to build a core dataset of human visual knowledge that can be used to train artificial systems for high-level understanding tasks, such as place and scene recognition, object recognition, action and event prediction, and theory-of-mind inference. The first Places database release contains 2.5 million images useful for training deep learning architectures; a usage sketch follows below. See the on-line demo and the papers Learning Deep Features for Scene Recognition using Places Database (NIPS 2014) and Object Detectors Emerge in Deep Scene CNNs (ICLR 2015). See media news on TechCrunch. The Places2 dataset and challenge contain 10 million labeled images.
Funded by National Science Foundation, CISE/IIS, Robust Intelligence Program
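As an illustration of how a Places-trained network might be used for scene recognition, here is a minimal PyTorch sketch. The checkpoint file name is a placeholder, and the exact layout of the released weights may differ; this only shows the overall pattern of loading a 365-way scene classifier and running an image through it.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Build the architecture and load locally downloaded Places weights
# (the file name below is a placeholder for the released checkpoint).
model = models.resnet18(num_classes=365)
model.load_state_dict(torch.load("resnet18_places365.pth",
                                 map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("kitchen.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)  # 365 scene categories
```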
Highlight: Aude Oliva is a 2014 Guggenheim fellow
Aude Oliva has been named a 2014 Guggenheim Fellow in recognition of her contributions to the field of computer science. The John Simon Guggenheim Memorial Foundation appoints Fellows "on the basis of impressive achievement in the past and exceptional promise for future accomplishment". The purpose is to give fellows "time in which they can work with as much creative freedom as possible". See the New York Times press release.
Funded by the John Simon Guggenheim Memorial Foundation
Highlight: When Time meets Space in the Human Brain
Visual recognition is a dynamic process: to make progress in human neuroscience, we need to know simultaneously when and where the human brain perceives and understands what it sees. In new work described in Nature Neuroscience (Cichy, Pantazis & Oliva, Resolving human object recognition in space and time), our team explains how to combine non-invasive neuroimaging methods (MEG and fMRI) to witness the stages of visual object recognition in the human brain at both millisecond and millimeter scales; a sketch of the fusion idea follows below. See the MIT News article "Expanding our View of Vision".
Funded by National Eye Institute
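For readers curious about the mechanics, here is a minimal Python sketch of the representational-similarity fusion idea: build a condition-by-condition dissimilarity matrix (RDM) from the fMRI patterns of one region, then correlate it with the MEG RDM at every time point to obtain a time course of representational correspondence. The distance and correlation choices here are illustrative assumptions; the published analysis uses decoding-based RDMs and additional controls.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed condition-by-condition representational
    dissimilarity matrix from (conditions, features) patterns."""
    return pdist(patterns, metric="correlation")

def meg_fmri_fusion(meg_patterns, fmri_patterns):
    """meg_patterns:  (T, conditions, sensors) per-time-point data
       fmri_patterns: (conditions, voxels) for one region of interest
       Returns a length-T time course of MEG-fMRI RDM correlations."""
    fmri_rdm = rdm(fmri_patterns)
    return np.array([spearmanr(rdm(m), fmri_rdm).correlation
                     for m in meg_patterns])
```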
Highlight: How good is your eyesight?
With more than 8 million hits, an ASAP Science video explains the principle behind our hybrid image illusion, using the dual portrait of Marilyn Monroe and Albert Einstein. By exploiting how the visual system works, hybrid images make it possible to create multi-layered images, where what you see from afar is different from what you see nearby. A chapter on the hybrid image illusion (A. Oliva & P.G. Schyns) is in press in the Oxford Compendium of Visual Illusions.
Highlight: 10,000+ Face photographs
We have released a new image dataset, the 10k US Adult Faces Database, with over 10,000 pictures of faces that match the distribution of the adult US population, along with memorability and attribute scores for 2,200+ of them. This dataset accompanies the new article by Bainbridge, Isola, and Oliva in the Journal of Experimental Psychology: General (2013) on the intrinsic memorability of faces. The memorability scores of this dataset are also used in Khosla et al. (2013), ICCV.
Funded by NSF, Google & Xerox
Highlight: Let's test your beer goggles!
The hybrid Marilyn Monroe / Albert Einstein is featured in the famous BBC TV show QI: Series K, Episode 14. In this illusion, Marilyn Monroe seen from a distance metamorphoses into Albert Einstein when seen close up. The Monroe/Einstein hybrid image is part of the Eight Einsteins hybrid piece in exhibition at the MIT Museum of Science, Cambridge. A chapter on the hybrid image illusion (A. Oliva & P.G. Schyns) will appear in the forthcoming Oxford Compendium of Visual Illusions.
Highlight: The Brain Discerning Taste for Size
The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation — in other words, how the brain perceives and identifies different objects. Now researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size (See MIT News Article and The Scientist article). Article in Neuron (Konkle & Oliva, 2012).
Funded by National Eye Institute
Highlight: What Makes a Picture Memorable?
At the World Memory Championships, athletes compete to recall massive amounts of information; contestants must memorize and recall sequences of abstract images and the names of people whose faces are shown in photographs. While these tasks might seem challenging, our research suggests that images possessing certain properties are intrinsically memorable. Our findings can explain why we have all had some images stuck in our minds while ignoring or quickly forgetting others. See a short news article and our 2014 article in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
Funded by National Science Foundation, Google and Xerox
Highlight: What Makes a Data Visualization Memorable?
An ongoing debate in the visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: “What makes a visualization memorable?” We ran a large-scale memory study and discovered that observers are very consistent in which visualizations they find memorable and forgettable. Article in IEEE Trans. on Visualization and Computer Graphics; Harvard News Release.
Funded by National Science Foundation, Google and Xerox
Highlight: Two for the View of One: The Art of Hybrid Images
Artists, designers, and visual scientists have long searched for ways to create multiple meanings from a single image. This article reviews a method developed by Philippe Schyns and Aude Oliva, named hybrid images: static pictures with two stable interpretations that change with the image's viewing distance or size, one appearing when the image is viewed up close and the other appearing from afar. Hybrid images can be used to create compelling prints and photographs in which the observer experiences different percepts when interacting with the image. See a recent short article in Art & Perception. The original technique was published in Schyns & Oliva (1994) to study how images are processed by the visual system.
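Because the construction is simple to state, a minimal sketch in Python/SciPy is given below: keep the low spatial frequencies of the image meant to be seen from afar and the high spatial frequencies of the image meant to be seen up close, then sum them. The single Gaussian cutoff and the sigma value are simplifying assumptions; in practice the cutoffs are tuned by eye for each image pair.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(far_img, near_img, sigma=8.0):
    """far_img, near_img: grayscale float arrays of the same shape.
    sigma: Gaussian blur radius controlling the frequency cutoff."""
    low = gaussian_filter(far_img, sigma)               # coarse structure
    high = near_img - gaussian_filter(near_img, sigma)  # fine detail
    return low + high

# e.g. hybrid_image(marilyn, einstein) reads as Monroe from a
# distance and as Einstein up close.
```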