Why Do We See Faces in Things?

Humans are experts at seeing patterns in inanimate things, especially faces, such as the now-famous "face on Mars" in photographs taken by the Viking 1 orbiter in 1976, which was actually just a trick of light and shadow.
Many Christians have reported finding the face of Jesus in burnt toast and other everyday foods. A Twitter account (now deleted) was even set up specifically to collect examples of the "faces in stuff" phenomenon.
This phenomenon is more formally known as face pareidolia.

University of Sydney researchers have shown that not only do we routinely misperceive inanimate objects as human faces, but our brains also treat these objects as candidates for emotional expression in much the same way they treat real faces.
This universal mechanism may have emerged from the need to judge quickly whether a person is friend or foe. The Sydney group detailed its findings in a recent paper in Proceedings of the Royal Society B.
Lead author David Alais of the University of Sydney told The Guardian: “We are such a sophisticated social species, and face recognition is significant… You need to recognize who it is, whether it is family or a friend or foe, and what are their intentions and emotions? Faces are identified remarkably fast. The brain seems to do this using a kind of template-matching procedure.
So if it sees an item that appears to have two eyes above a nose above a mouth, then it goes, ‘Oh I’m seeing a face.’ It’s a bit fast and loose, and sometimes it makes mistakes, so something that resembles a face will often trigger this template match.”
This phenomenon has interested Alais for a long time. In a 2016 paper in Scientific Reports, for instance, he and several colleagues built on earlier studies using rapid sequences of faces to show that our judgments of a person's identity and attractiveness are biased toward faces we have seen recently. The binary task Alais et al. designed was meant to resemble the interface of popular dating apps such as Tinder, where users swipe left or right depending on whether they find a profile photograph attractive or unattractive.
The researchers found that judgments of many stimulus attributes, including orientation, facial expression, attractiveness, and perceived slimness, are systematically biased toward recent experience.
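The serial dependence the researchers describe can be pictured with a toy model: each reported rating is pulled part of the way toward the value of the previous stimulus. This is only an illustrative sketch, not the authors' actual analysis; the linear pull and the `bias` weight are assumptions made up for demonstration.

```python
def serially_dependent_rating(stimuli, bias=0.3):
    """Toy model of serial dependence: each reported rating is pulled
    toward the previous stimulus's true value by a fixed bias weight.
    The first stimulus has no predecessor, so it is reported as-is."""
    ratings = []
    prev = None
    for value in stimuli:
        if prev is None:
            ratings.append(value)
        else:
            # Weighted blend of the current stimulus and the previous one.
            ratings.append((1 - bias) * value + bias * prev)
        prev = value
    return ratings

# A middling face (0.4) rated right after a very attractive one (0.9)
# is nudged upward relative to its true value.
ratings = serially_dependent_rating([0.9, 0.4])
print(ratings)
```

Under this sketch the second rating lands between 0.4 and 0.9, mimicking the reported pull toward recent experience.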
Next, in 2019, a study was published in the Journal of Vision that applied the same experimental methodology to the way we evaluate works of art.
Alais and his colleagues discovered that museum-goers tend to rate paintings relative to what they have just seen rather than evaluating each work in isolation. Rather than a simple "contrast effect," this serial-dependence bias shows up in our assessment of art:
when we see a painting right after an appealing one, we give it a higher rating, and after a less appealing one, a lower rating.
Next, the researchers examined the neural basis of our ability to "read" social cues from other people's faces, and Alais wondered whether the same applies to face pareidolia.
He cited a sliced bell pepper that appears to be scowling and a towel dispenser that appears to be smiling as examples of objects that "not only look like faces but can even transmit a sense of personality or social meaning."
The arrangement of the mouth, nose, and eyes is a universal feature of the human face, but it is not the only thing we extract when we look at one. Our brains may be hardwired to detect that configuration, but reading someone socially also requires understanding their emotions and whether they are paying attention to us. In a sensory adaptation experiment reported in Psychological Science last year, Alais's team found that we do, in fact, process face pareidolia in much the same way as real faces.
The sample for the current study was tiny, just 17 college students, each of whom had first practiced on sets of eight genuine faces and eight pareidolia photos. (Results from this practice phase were not recorded.) The experiments proper used 40 real faces and 40 pareidolia images, falling into four expression categories: high anger, low anger, low happiness, and high happiness. Participants were shown the images and asked to judge how angry or happy each appeared.
As Alais explained to The Guardian, pareidolia images are interpreted by the same system that ordinarily perceives emotion in a real face. "You are somehow unable to totally turn off that face response and emotion response and see it as an object. It remains simultaneously an object and a face."
In particular, the findings demonstrated that individuals could consistently rate the pareidolia photos for facial emotion. Like the Tinder users and museum visitors, the participants exhibited a serial dependence bias: if you see a happy or angry illusory face in an object, your judgment of the next face's emotion is pulled toward it. Moreover, when real faces and pareidolia images were mixed in a second experiment, serial dependence was most pronounced when subjects viewed the pareidolia images first. This suggests that "expression processing is not strongly connected to human facial features," as Alais et al. observed, pointing to a common underlying mechanism.
“This ‘cross-over’ condition is important, as it shows that the same underlying facial expression process is involved, regardless of image type,” said Alais.
“This means that seeing faces in clouds is more than a child’s fantasy. When objects look compellingly facelike, it is more than an interpretation: They really are driving your brain’s face-detection network. And that scowl or smile? That’s your brain’s facial expression system at work. For the brain, fake or real, faces are processed similarly.”