Today, science and technology are at the threshold of paradigm-shifting discoveries. Yet an obstacle remains: while technology grows exponentially, our understanding of the human mind does not. We are approaching an era in which the benefits of a highly technologized society will not be fully realized unless we understand how humans encode, process, retain, predict, and imagine. To this end, we combine methods from computer science, neuroscience, and cognitive science to explain and model how perception and cognition are realized in humans and machines. Our research in Computational Neuroscience, Cognitive Computing and Computer Vision bridges theory, experiments, and applications, accelerating the rate of discovery by approaching problems through a multi-disciplinary way of thinking.

Highlight: Daredevil-like ability allows us to size up rooms—even when we can’t see them

In neuroscience work covered by Science News, we discovered a novel neuromagnetic brain signature that decodes the size of the space surrounding an observer from sounds and their reverberation. To some extent, this “sonar sense” is available to all of us: we often exercise a form of passive echolocation, unconsciously processing echoes to navigate or to localize objects. The finding is also highlighted by the APS (Association for Psychological Science) and the UK Daily Mail.

Funded by National Eye Institute

Highlight: Predicting Which Images are Memorable

Using convolutional neural networks, our paper at ICCV 2015 presents the first computational cognitive model of visual memory. The deep learning model predicts how memorable an image will be to a group of people. Predicting memorability is a way to estimate the utility of novel information for cognitive computing systems. The work has been featured in many media outlets, including The Atlantic, The Washington Post, NBC News, TechCrunch, Business Insider, and PetaPixel. The dataset, article, and model are available here.

Funded by National Science Foundation, Neural and Cognitive Systems

Highlight: Places Dataset and Place Challenge for Artificial Vision Systems

Our goal with Places is to build a core dataset of human visual knowledge that can be used to train artificial systems for high-level understanding tasks, such as place and scene recognition, object recognition, action and event prediction, and theory-of-mind inference. The first Places database release contains 2.5 million images useful for training deep learning architectures. See the online demo, Learning Deep Features for Scene Recognition using Places Database (NIPS 2014), and Object Detectors Emerge in Deep Scene CNNs (ICLR 2015), as well as media coverage on TechCrunch. The Places2 dataset and challenge contain 10 million labeled images.

Funded by National Science Foundation, CISE/IIS, Robust Intelligence Program

Highlight: Aude Oliva is a 2014 Guggenheim fellow

Aude Oliva has been named a 2014 Guggenheim Fellow in recognition of her contributions to the field of computer science. The John Simon Guggenheim Memorial Foundation appoints Fellows "on the basis of impressive achievement in the past and exceptional promise for future accomplishment". The purpose is to give fellows "time in which they can work with as much creative freedom as possible". See the New York Times press release.

Funded by the John Simon Guggenheim Memorial Foundation

Highlight: When Time meets Space in the Human Brain

Visual recognition is a dynamic process: to make progress in human neuroscience, we need to know simultaneously when and where the human brain perceives and understands what it sees. In new work described in Nature Neuroscience (Cichy, Pantazis & Oliva, Resolving human object recognition in space and time), our team explains how to combine non-invasive neuroimaging methods (MEG and fMRI) to witness the stages of visual object recognition in the human brain, at both millisecond and millimeter scales. See the MIT News article "Expanding our View of Vision".
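The fusion idea can be illustrated with representational similarity analysis (RSA): at each MEG time point, a representational dissimilarity matrix (RDM) over stimulus conditions is compared with an fMRI RDM from a region of interest, yielding a time course of spatial correspondence. The sketch below uses synthetic stand-in data; real RDMs are built from pairwise decoding accuracies or correlation distances between measured brain patterns.

```python
import numpy as np

# Hedged sketch of MEG-fMRI fusion via RSA. All data here are random
# stand-ins: fmri_rdm would come from one brain region, and meg_rdms
# would hold one RDM per millisecond-scale MEG time point.
rng = np.random.default_rng(1)
n_cond, n_times = 12, 50
iu = np.triu_indices(n_cond, k=1)        # upper triangle of an RDM

fmri_rdm = rng.random((n_cond, n_cond))
fmri_rdm = (fmri_rdm + fmri_rdm.T) / 2   # symmetric dissimilarities
meg_rdms = rng.random((n_times, n_cond, n_cond))

def spearman(a, b):
    """Spearman rank correlation (no tie handling; fine for random floats)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# One fusion value per time point: how much the MEG geometry at time t
# resembles the fMRI geometry of the chosen region.
fusion = np.array([spearman(fmri_rdm[iu], meg_rdms[t][iu])
                   for t in range(n_times)])
print(fusion.shape)
```

With real data, peaks in this time course indicate when the region's representational geometry emerges.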

Funded by National Eye Institute

Highlight: How good is your eyesight?

With more than 8 million views, the ASAP Science video explains the principle behind our hybrid image illusion, using the bi-portrait of Marilyn Monroe and Albert Einstein. By exploiting how the visual system works, hybrid images make it possible to create multi-layered pictures in which what you see from afar differs from what you see nearby. A chapter on the hybrid image illusion (A. Oliva & P.G. Schyns) is in press in the Oxford Compendium of Visual Illusions.


Highlight: 10,000+ Face photographs

We have released a new image dataset, the 10k US Adult Faces Database, with over 10,000 pictures of faces that match the distribution of the adult US population, along with memorability and attribute scores for 2,200+ of them. This dataset accompanies the new article by Bainbridge, Isola and Oliva in Journal of Experimental Psychology: General (2013) on the intrinsic memorability of faces. The memorability scores of this dataset are also used in Khosla et al. (ICCV 2013).

Funded by NSF, Google & Xerox

Highlight: Let's test your beer goggles!

The hybrid Marilyn Monroe / Albert Einstein image is featured in the famous BBC TV show QI: Series K, Episode 14. In this illusion, Marilyn Monroe, seen from a distance, metamorphoses into Albert Einstein when seen close up. The Monroe/Einstein hybrid image is part of the Eight Einsteins hybrid piece exhibited at the MIT Museum in Cambridge. A chapter on the hybrid image illusion (A. Oliva & P.G. Schyns) will appear in the forthcoming Oxford Compendium of Visual Illusions.


Highlight: The Brain Discerning Taste for Size

The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation — in other words, how the brain perceives and identifies different objects. Now researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size (See MIT News Article and The Scientist article). Article in Neuron (Konkle & Oliva, 2012).

Funded by National Eye Institute

Highlight: What Makes a Picture Memorable?

At the World Memory Championships, athletes compete to recall massive amounts of information: contestants must memorize and recall sequences of abstract images and the names of people whose faces are shown in photographs. While these tasks might seem challenging, our research suggests that images with certain properties are intrinsically memorable. Our findings help explain why some images stick in our minds while others are ignored or quickly forgotten. See a short news article and our 2014 article in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).

Funded by National Science Foundation, Google and Xerox

Highlight: What Makes a Data Visualization Memorable?

An ongoing debate in the visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards asking questions about impact and effectiveness, we ask: “What makes a visualization memorable?” We ran a large-scale memory study and discovered that observers are highly consistent in which visualizations they find memorable and forgettable. See the article in IEEE Transactions on Visualization and Computer Graphics and the Harvard news release.

Funded by National Science Foundation, Google and Xerox

Highlight: Two for the View of One: The Art of Hybrid Images

Artists, designers, and visual scientists have long searched for ways to give a single image multiple meanings. This article reviews a method developed by Philippe Schyns and Aude Oliva called hybrid images: static pictures with two stable interpretations that change with the image’s viewing distance or size, one that appears when the image is viewed up close and another that appears from afar. Hybrid images can be used to create compelling prints and photographs in which the observer experiences different percepts when interacting with the image. See a recent short article in Art & Perception. The original technique was published in Schyns & Oliva (1994) to study how images are processed by the visual system.
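The underlying construction is simple to sketch: a hybrid combines the low spatial frequencies of one image with the high spatial frequencies of another, so viewing distance selects which interpretation dominates. The code below is a minimal illustration with synthetic arrays standing in for photographs; the single Gaussian cutoff `sigma` is a simplifying assumption, as published hybrids tune the two frequency bands per image pair.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(near, far, sigma=6.0):
    """Combine two grayscale images of the same shape (floats in [0, 1]).

    `far` contributes only its low spatial frequencies (what survives
    viewing from a distance); `near` contributes only its high spatial
    frequencies (what dominates up close). `sigma` sets the Gaussian
    cutoff between the two bands.
    """
    low = gaussian_filter(far, sigma)            # low-pass: coarse layout
    high = near - gaussian_filter(near, sigma)   # high-pass: fine detail
    return np.clip(low + high, 0.0, 1.0)

# Toy demo with synthetic patterns standing in for two photographs.
rng = np.random.default_rng(0)
far = gaussian_filter(rng.random((128, 128)), 4)  # smooth "far" image
near = rng.random((128, 128))                     # detailed "near" image
hybrid = hybrid_image(near, far)
print(hybrid.shape)
```

With real portraits, shrinking the result (or stepping back) simulates distance: the high-frequency layer fades and the low-frequency image takes over.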