As robots become more expressive and socially capable, the line between machines and living characters is starting to blur. From Disney’s lifelike Olaf robot (main image) to interactive droids inspired by Star Wars, recent developments highlight how far robotics has come in replicating human-like behavior and emotion.
But what actually makes a robot feel “alive” – and why do some machines feel natural while others fall into the so-called “uncanny valley”?
To explore these questions, Robotics & Automation News spoke with Sarah Sebo, assistant professor of computer science at the University of Chicago, whose research focuses on human-robot interaction and the social, cultural and technological impact of robots in everyday environments.
In this Q&A, Sebo explains the psychology behind lifelike robots, the limits of realism, and what increasingly social machines could mean for workplaces and public spaces.

Interview with Sarah Sebo, assistant professor of computer science at the University of Chicago
Robotics & Automation News: Recent developments from companies like Disney show robots becoming increasingly lifelike and expressive. From a human-robot interaction perspective, what makes a robot feel “alive” to people?
Sarah Sebo: A lot of what makes robots feel “alive” is how much agency and experience they can convey: Can they “think” on their own and make their own decisions? Do they convey a sense of “experiencing” the world, mimicking feelings and sensations like pain or pleasure?
The more robots express qualities that convey agency and experience, as Gray et al. (2007) articulate so well, the more “alive” they feel and the more “mind” people perceive them to have.
R&AN: Do highly realistic robots improve user engagement, or do they risk entering the “uncanny valley” where people feel uncomfortable?
SS: The “uncanny valley” describes how people perceive a robot’s human-like appearance. Generally, the more human-like a robot becomes, the more positively it is perceived, with one big exception.
When robots are nearly human-like but can still be distinguished as robots, as with android robots such as Sophia, they can come across as creepy or strange. It’s still unclear exactly where this strangeness comes from; it could stem from the idea that a robot is “so close” to being human, yet still not quite there.
While we see this effect in robot appearance, I’ve seen it less in other aspects of human-robot interaction. Let’s take speech delay as an example.
Many of the robots my lab programs have delays in speech while the robot processes what people say and formulates a response. This kind of speech delay can really hamper human-robot interactions.
The closer robot speech delay can get to human speech delay, the better the robot is perceived – no uncanny valley. In my opinion, it’s very possible that the “uncanny valley” may be specific to appearance and voice, qualities that involve unique human expression.
R&AN: In environments like theme parks, robots are designed to entertain and interact socially. How transferable are these interaction models to more practical settings such as retail, healthcare, or hospitality?
SS: Many of the same social skills that robots may use in theme parks are applicable in other contexts as well. For example, robots must determine which people seem interested in having a robot approach them, approach people from a “socially appropriate” angle, engage in greetings and small talk, and know how to exit conversations and say “goodbye”.
Many social skills are transferable between contexts, and some will need to be specific depending on the robot’s context (for example, portraying a specific character).
R&AN: As robots become more socially capable, how do people’s expectations of them change – and what risks arise if those expectations are not met?
SS: I’ve witnessed the rise in people’s expectations first-hand over the last decade. Ten years ago, people were impressed with a robot that could do anything on its own.
Today, people have zero patience for robots in my lab that might have two seconds of speech delay, the time required for them to use AI-based tools to generate verbal responses.
If expectations are not met, it is possible that people will disengage, move on, or discontinue use of the robot. This makes it essential, in many contexts, to convey the capabilities of a robot before engagement, to avoid this kind of disappointment.
R&AN: To what extent do people treat robots as tools versus social entities, and how does that influence design decisions?
SS: The degree to which a person anthropomorphizes an artificial agent is influenced by a variety of factors. I’m sure many of us have seen this in how we and our friends engage with AI agents. Some of us use tools like ChatGPT, Gemini, and Claude as an enhanced “Google search”, asking questions such as “when is the best time to visit Disney World?” and giving instructions such as “edit this email I have to send to my boss”.
However, others engage with these agents as though they are social entities, falling in love with them and turning to them for emotional support. Certain people may be more likely to anthropomorphize artificial agents than others, and certain robot factors, such as a human-like appearance or human-like behavior, may also influence people to view them as social agents. This is incredibly important because people engage with “sophisticated tools” in very different ways than they engage with “social entities”.
R&AN: There is growing interest in humanoid robots for work environments. How important is human-like appearance and behavior for productivity, as opposed to purely functional design?
SS: One large advantage of taking on a humanoid form is that a robot can complete most tasks that people can, such as walking up a flight of stairs, opening a door, pressing an elevator button, and reaching a book on a shelf.
This can be very useful for robots that are intended to be “multi-purpose” and complete a variety of tasks in human environments. Robots can also provide a lot of value by taking on forms that are not humanoid, so that they can complete tasks that complement what people can do.
While many human-like robot features may not be necessary for task completion, the one human-like feature I think is most important for social interaction is having a face and eyes.
R&AN: Looking ahead, what are the key social or ethical challenges we should be paying attention to as robots become more integrated into everyday public spaces?
SS: The key challenge I see as we integrate robots more into our lives is how they will impact our social lives. Technological advancements (for example, social media, texting, chat-based AI tools) have made it so that we communicate with other people face-to-face less and less.
Robots have the potential to contribute to this trend. Face-to-face communication is essential for the formation and maintenance of human relationships, which are among the most important factors in living a good and satisfying life.
I think it will be important to watch for ways in which human-robot interactions may replace human-human interactions and damage our social health, and to specifically design robots to encourage human-human interactions rather than replace them.
