Exploring the psychological aspects of designing and interacting with human-like robots
Humanoid robots are moving steadily from laboratories and research demos toward commercial deployment. Machines built by companies such as Tesla, Figure AI, Agility Robotics, and Boston Dynamics are increasingly designed to operate in environments originally built for humans – factories, warehouses, hospitals, and homes.
But as robots begin to resemble people more closely, another question re-emerges – one that engineers, psychologists, and filmmakers have been grappling with for decades.
How human should a robot look?
The problem was first described by Japanese roboticist Masahiro Mori in 1970, when he proposed the idea now known as the “uncanny valley”. Mori suggested that human emotional responses to robots become increasingly positive as robots appear more human – up to a point. Beyond that point, affinity suddenly collapses into deep psychological discomfort.
Robots that are almost human, but not quite, can feel eerie, unsettling, or even disturbing.
Interestingly, long before humanoid robotics became a serious engineering pursuit, filmmakers were already exploring this strange psychological territory. Some of the most memorable on-screen robots illustrate exactly where the uncanny valley lies – and why it matters.
Fictional robots and the uncanny valley
The Stepford wives – when perfection becomes disturbing
One of the clearest cultural illustrations of the uncanny valley appears in The Stepford Wives, first released in 1975 and later remade in 2004 starring Nicole Kidman.
In the story, suburban husbands secretly replace their wives with eerily perfect robotic duplicates. These replacements look human – often indistinguishably so at first glance – but their behavior gradually reveals something deeply unnatural.
- Their smiles are too fixed.
- Their enthusiasm is too consistent.
- Their personalities are strangely hollow.
The discomfort arises precisely because the robots appear almost human while lacking the subtle inconsistencies that make real people believable.
In psychological terms, the Stepford wives represent a classic uncanny valley trigger: high visual realism combined with subtly incorrect behavior.
The synths – unsettling normality in Humans
A more contemporary illustration of the uncanny valley appears in the British television series Humans, first shown on Channel 4 in 2015.
In this world, humanoid robots known as “synths” have become household appliances. They cook meals, care for children, drive cars, and assist the elderly. At first glance, they are almost indistinguishable from people.
But the show deliberately introduces subtle cues that place them squarely inside the uncanny valley.
- The synths’ faces are slightly too still.
- Their eye movements feel fractionally delayed.
- Their voices carry a controlled, emotionally neutral tone.
Perhaps the most distinctive feature is their eyes – often rendered with an unnatural brightness that instantly signals something is different.
The result is a carefully balanced tension. The synths are human enough to be mistaken for people across a room, yet artificial enough to make prolonged interaction uncomfortable. Characters in the series frequently react with a mixture of convenience, distrust, and unease.
What makes Humans particularly interesting from a robotics perspective is that the discomfort arises not from malfunction or violence, but from ordinary social interaction. The synths behave politely, efficiently, and predictably – yet the lack of small human imperfections makes them feel wrong.
In many ways, the series presents a scenario that robotics engineers may soon face in the real world: machines capable of performing human roles so convincingly that the remaining differences become psychologically magnified.
Where earlier fictional robots often relied on obvious mechanical features, the synths demonstrate how near-perfect imitation may be more unsettling than obvious artificiality – a central idea of the uncanny valley.
Data – deliberately outside the valley
In Star Trek: The Next Generation, the android officer Data represents a very different design philosophy.
Played by Brent Spiner, Data is explicitly artificial. His skin tone is slightly unusual, his emotional expression is limited, and his behavior is clearly mechanical.
Rather than trying to pass as human, the character sits comfortably outside the uncanny valley. Viewers accept Data as a machine with human-like intelligence rather than a near-human imitation.
Interestingly, this design decision mirrors a strategy used by many robotics companies today: avoid perfect realism and emphasize the machine identity instead.
Robots such as Boston Dynamics’ Atlas follow a similar philosophy. Their openly mechanical appearance signals clearly that they are machines, which reduces psychological discomfort.
Sonny – the unsettling intelligence of I, Robot
In I, Robot, the robot Sonny occupies a position much closer to the uncanny valley.
Unlike most robots in the film, Sonny possesses expressive eyes, emotional nuance, and highly human body language. His face resembles a simplified human skull beneath translucent synthetic material.
This design makes Sonny visually recognizable as a machine while still feeling eerily human.
The effect is intentional. The film repeatedly places Sonny in scenes where his emotional responses challenge the assumption that robots are purely logical machines.
The audience is left uncertain whether Sonny is a tool, a person, or something in between – a classic uncanny valley ambiguity.
Ava – the calculated eeriness of Ex Machina
A more recent and deliberately unsettling example appears in Ex Machina.
The humanoid robot Ava, played by Alicia Vikander, combines human facial features with visibly mechanical components. Transparent sections of her body reveal circuitry and metal structures beneath.
This hybrid design creates a fascinating psychological tension.
Ava is clearly a machine, yet her voice, facial expressions, and movements are convincingly human. The audience is constantly forced to reconcile these conflicting signals.
The result is not simply discomfort but uncertainty – a feeling that the robot may understand human emotions while not necessarily sharing them.
Why the uncanny valley happens
Psychologists have proposed several explanations for why humans react negatively to near-human robots.
One theory suggests that humans evolved to detect abnormalities in faces and bodies as potential signs of illness or danger. Subtle irregularities in a robot’s appearance or movement may trigger these ancient threat-detection systems.
Another explanation focuses on expectation mismatch. When something looks human, the brain expects human behavior – natural movement, emotional nuance, and complex social cues. When those expectations are not met, the brain experiences cognitive dissonance.
Movement plays a particularly important role. Even a visually convincing robot can become unsettling if its motion appears slightly unnatural – delayed eye movements, rigid gestures, or poorly synchronized speech.
For robotics engineers, this means that solving the uncanny valley problem is not just about physical design. It also involves control systems, motion planning, perception, and interaction design.
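To make the expectation-mismatch idea concrete, here is a purely illustrative toy model in Python – not Mori’s formulation and not any published metric, with all numbers and example labels invented for this sketch. It scores a hypothetical robot’s “comfort” as rising with overall human-likeness but penalised by the gap between how human it looks and how human it behaves and moves.

```python
# Purely illustrative toy model (not from Mori's 1970 essay or any study):
# "comfort" rises with overall human-likeness but is penalised by the gap
# between how human a robot LOOKS and how human it BEHAVES and moves.

def toy_comfort(appearance: float, behaviour: float) -> float:
    """appearance and behaviour are rough 0-to-1 ratings; returns a comfort score."""
    likeness = (appearance + behaviour) / 2   # overall human-likeness
    mismatch = abs(appearance - behaviour)    # expectation gap
    return likeness - 2.5 * mismatch          # the gap dominates near-human designs


if __name__ == "__main__":
    examples = {
        "clearly mechanical (Atlas-like)":          (0.20, 0.20),
        "stylised android (Data-like)":             (0.60, 0.60),
        "near-human look, stiff behaviour (synth)": (0.95, 0.60),
        "convincing in both look and behaviour":    (0.95, 0.95),
    }
    for name, (looks, acts) in examples.items():
        print(f"{name:44s} comfort ≈ {toy_comfort(looks, acts):+.2f}")
```

Even in this crude sketch, the design that looks nearly human but behaves slightly “off” scores worst – the same pattern the synths and the Stepford wives dramatise on screen.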
Not everyone dislikes human-like robots
Interestingly, the uncanny valley is not universal.
Some research suggests that certain people actually prefer highly human-like robots. These individuals often report that human-like appearance makes robots easier to understand socially and emotionally.
In fields such as elder care, healthcare, and companionship robotics, designers sometimes deliberately pursue human realism for precisely this reason.
Japanese roboticist Hiroshi Ishiguro has spent years developing highly realistic androids designed to resemble specific individuals. His work suggests that familiarity may reduce uncanny responses over time.
Cultural factors may also play a role. Studies have found that attitudes toward humanoid robots vary significantly across societies, with Japan historically showing greater acceptance of human-like machines.
What this means for the humanoid robotics industry
As humanoid robots move toward commercial deployment, companies must decide how closely their machines should resemble people.
- Some developers emphasize mechanical transparency – robots that clearly look like machines.
- Others pursue human-inspired designs that balance familiarity with abstraction.
- A smaller group continues to explore fully realistic androids.
The right answer may depend on context. A humanoid robot working on a factory floor may not need a human face at all, while robots operating in healthcare or hospitality environments may benefit from more human-like features.
Ultimately, the goal of humanoid robotics may not be perfect imitation.
Instead, it may be something more subtle: designing machines that fit naturally into human environments without triggering the strange psychological discomfort that Masahiro Mori first described more than half a century ago.
And if Hollywood’s long history with fictional robots tells us anything, it is that humans can tolerate – and sometimes even embrace – robots that are clearly machines.
It is the ones that look almost human that continue to make us uneasy.
