FORM

ANIMACY & MIND
The ability to detect and understand other minds is critical for physical (and social) survival. The overarching question for this research is "How do our brains tell a Who from a What?"

THE TIPPING POINT
Looser & Wheatley (2010). Psychological Science. Featured in "News of the Week", Science, 331, 19.

Perhaps the most salient icon of another mind is the face. But not all faces have minds. Mannequins have faces. Dolls, avatars, masks, sculptures, and most paintings in the National Gallery have faces. Psychological science has focused on the question "How do we detect a face?" for the last several decades, but perhaps the more important question is "How do we detect that a face has a mind attached?" To answer this question, we created doll-human morph continua. We wanted to know where, along these continua, a face starts to seem alive.

We found a consistent tipping point: a face had to be at least 65% human before people would say that it was more human than doll. Note that this is significantly different from 51% human -- the mathematical tipping point for a 0-100 continuum. People are stringent about what counts as alive. The same tipping point occurred whether people were asked if the face "had a mind," "could form a plan," or was "able to experience pain," indicating that recognizing life in a face is tantamount to recognizing the capacity for a mental life.
Looser & Wheatley (2010)
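
To make the idea of a "tipping point" concrete, here is a minimal sketch -- not the published analysis -- of how such a threshold can be estimated: fit a logistic psychometric function to hypothetical yes/no "this face is alive" judgments along a 0-100 doll-to-human morph continuum and read off the point where the curve crosses 50%. The data values below are illustrative, not the study's data.

```python
# Minimal sketch (not the published analysis): estimate a perceptual
# tipping point by fitting a logistic psychometric function to hypothetical
# "alive" judgments along a 0-100 doll-to-human morph continuum.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    """Probability of an 'alive' response as a function of % human."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

# Hypothetical data: morph levels (% human) and proportion of "alive" responses.
morph_levels = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
p_alive = np.array([0.00, 0.01, 0.02, 0.05, 0.10, 0.20,
                    0.40, 0.75, 0.92, 0.98, 1.00])

params, _ = curve_fit(logistic, morph_levels, p_alive, p0=[50.0, 0.1])
midpoint, slope = params
print(f"Estimated tipping point: {midpoint:.1f}% human")  # the fitted 50% crossover
```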


THE EYES HAVE IT

It turns out that finding another mind requires scrutinizing one particular facial feature: the eyes. Seeing a single eye was enough for participants to judge the presence of life. In contrast, equal-sized views of the nose, the lips, or a patch of skin were far less useful (Looser & Wheatley, 2010). Indeed, as one participant put it, "I looked for the moment when the face seemed to look back." This suggests that the age-old aphorism is correct: we consider eyes to be "the windows to the soul."


This natural perceptual scrutiny -- for fine ocular cues that convey a mental life -- may help explain why eyes are the Achilles' heel of computer-generated imagery.



FACE - THEN - MIND

Although we may be experts at detecting a mind in a face, this visual scrutiny takes longer than detecting a face itself. Face detection is associated with an increased electrocortical response (the N170) about 170 ms after an image first hits the retina. This well-documented, face-specific response occurs for all kinds of faces -- dolls, humans, cartoons, cats. At this early stage, any face will do.

(In the graph, the blue and green lines overlap at 170 ms.)

However, give the brain a few hundred more milliseconds and only human faces sustain a heightened response. (In the graph, the blue and green lines diverge by 400 ms.) This tells us that once a face is detected, the brain continues to scrutinize it for evidence of a mind. Wheatley, Weinberg, Looser, Moran, & Hajcak (2011).
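
As a schematic of the kind of comparison described above -- not the published EEG pipeline -- the sketch below averages hypothetical single-trial epochs for human-face and doll-face conditions and compares mean amplitudes in an early window around 170 ms and a later window around 400 ms. All data, the sampling rate, and the window boundaries are assumptions for illustration.

```python
# Schematic sketch (not the published pipeline): compare condition-average
# ERP amplitude in an early (~170 ms) and a late (~400 ms) time window.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch time axis in seconds

rng = np.random.default_rng(0)
# Hypothetical single-trial epochs: (n_trials, n_timepoints) per condition.
human_epochs = rng.normal(0.0, 1.0, (100, t.size))
doll_epochs = rng.normal(0.0, 1.0, (100, t.size))

def window_mean(epochs, t, start, stop):
    """Mean amplitude of the trial-averaged ERP within [start, stop) seconds."""
    erp = epochs.mean(axis=0)             # average across trials
    mask = (t >= start) & (t < stop)
    return erp[mask].mean()

for label, (start, stop) in {"~170 ms": (0.15, 0.20), "~400 ms": (0.35, 0.45)}.items():
    h = window_mean(human_epochs, t, start, stop)
    d = window_mean(doll_epochs, t, start, stop)
    print(f"{label}: human {h:+.2f} uV, doll {d:+.2f} uV, difference {h - d:+.2f} uV")
```

With real data, the human-minus-doll difference would be near zero in the early window and reliably positive in the later window, mirroring the pattern described above.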


DUAL EFFICIENCY

This two-stage process of face perception -- 1) detect all faces, 2) filter out the false alarms -- is remarkably efficient. It ensures the rapid, liberal face detection necessary for survival (better to false alarm to a face-like rock pattern than to miss a foe) while ensuring that we do not waste precious cognitive resources on the false alarms. We may notice a mannequin, but we will not waste our energy trying to read its thoughts.


"MIRROR SYSTEM" AND THE "SOCIAL NETWORK"

Two competing hypotheses have emerged about how our brains discriminate people from objects: the mirror neuron system and the social network. Putative "mirror" neurons become active both when a person performs an action and when that person observes the same action performed by another. Neurons within the social network become active in social contexts, such as during the assessment of emotion in others or while imagining another's state of mind.

The social network (yellow) includes areas associated with biological motion (1, superior temporal sulcus), biological form (6, lateral fusiform gyrus), mentalizing (3, medial prefrontal cortex; 4, posterior cingulate) and affective processing (2, insula; 5, amygdala). The mirror system (blue) consists of (7) inferior parietal cortex and (8) ventral premotor/inferior frontal cortex.


We examined the differential activation of these networks under conditions in which a cartoon figure and its movements remained constant but the backgrounds were changed to bias interpretation toward animacy (e.g., an ice skater) or inanimacy (e.g., a spinning top).

We found that only the social network is specifically more active when people interpret the figure as animate.  However, both systems must work together for optimal social intelligence. Wheatley, Milleville, & Martin (2007).
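
To make the logic of such a condition contrast concrete, here is a minimal, self-contained sketch with simulated data -- not the study's actual fMRI analysis -- that fits a single voxel's time course with two condition regressors and computes an animate-minus-inanimate contrast. The block timings, noise level, and regressor shapes are assumptions (hemodynamic-response convolution is omitted for brevity).

```python
# Minimal sketch with simulated data (not the study's analysis pipeline):
# fit one voxel's time course with two condition regressors and test the
# animate-minus-inanimate contrast.
import numpy as np

rng = np.random.default_rng(1)
n_scans = 200

# Hypothetical boxcar regressors marking animate- and inanimate-context blocks.
animate = np.zeros(n_scans)
animate[20:40] = animate[100:120] = 1.0
inanimate = np.zeros(n_scans)
inanimate[60:80] = inanimate[140:160] = 1.0

# Design matrix: [animate, inanimate, intercept].
X = np.column_stack([animate, inanimate, np.ones(n_scans)])

# Simulated voxel that responds more strongly in the animate context.
y = 2.0 * animate + 0.5 * inanimate + rng.normal(0.0, 1.0, n_scans)

# Ordinary least-squares fit and the contrast of interest.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([1.0, -1.0, 0.0])     # animate > inanimate
effect = contrast @ betas
print(f"animate beta={betas[0]:.2f}, inanimate beta={betas[1]:.2f}, contrast={effect:.2f}")
```

In a whole-brain analysis this contrast would be computed at every voxel; the regions described above as the "social network" are the ones where it was reliably positive.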





THE GRAY AREAS 

Now that we know what system is active when we interpret animacy, we are investigating the gray areas. When do objects take on human qualities? When do people appear less than human? And, how do we feel when confronted with examples such as these that jump their natural categories?



RELATED PUBLICATIONS
Looser, C.E., Guntupalli, S. & Wheatley, T. (2012). Multivoxel patterns in face-sensitive temporal regions reveal an encoding schema based on detecting life in a face. Social Cognitive and Affective Neuroscience.

Wheatley, T., Kang, O., Parkinson, C., & Looser, C. E. (2012). Mind perception to mental connection: Synchrony as a mechanism for social understanding. Social and Personality Psychology Compass.

Wheatley, T., Weinberg, A., Looser, C. E., Moran, T., & Hajcak, G. (2011).  Mind perception: Real but not artificial faces sustain neural activity beyond the N170/VPP.  PLoS ONE, epub Mar 31, 2011.

Looser, C. E., & Wheatley, T. (2010). The tipping point of animacy: How, when, and where we perceive life in a face. Psychological Science, 21, 1854–1862. [Featured in "News of the Week", Science, 331, 19.]

Wheatley, T., Milleville, S. C., & Martin, A. (2007). Understanding animate agents: Distinct roles for the social network and mirror system. Psychological Science, 18, 469-474. [Discussed by G. Chin, "Editor's Choice", Science, 316, 1255, 2007.]

Wheatley, T., Weisberg, J., Beauchamp, M. S., & Martin, A. (2005). Automatic priming of semantically related words reduces activity in the fusiform gyrus. Journal of Cognitive Neuroscience, 17, 1871-1885.