Human-Like Robots - How Our Brains React

A research team from RWTH Aachen University and Cambridge University deciphers the "Uncanny Valley" phenomenon through their work with robots.


Scientists have identified mechanisms in the human brain that could help explain the phenomenon of the ‘Uncanny Valley’ – the unsettling feeling we get from robots and virtual agents that are too human-like. The team also showed that some people respond more adversely to human-like agents than others.

As technology improves, so too does our ability to create life-like artificial agents, such as robots and computer graphics. “Resembling the human shape or behavior can be both an advantage and a drawback,” explains Professor Astrid Rosenthal-von der Pütten, Chair for Individual and Technology at RWTH Aachen University. “The likeability of an artificial agent increases the more human-like it becomes, but only up to a point: sometimes people seem not to like it when the robot or computer graphic becomes too human-like.”

This phenomenon was first described in 1970 by robotics professor Masahiro Mori, who coined a Japanese expression, literally "eerie valley" or "creepy ditch", that was later translated as the ‘Uncanny Valley’.

“For a neuroscientist, the ‘Uncanny Valley’ is an interesting phenomenon,” explains Dr Fabian Grabenhorst, a Sir Henry Dale Fellow and Lecturer in the Department of Physiology, Development and Neuroscience at the University of Cambridge. “It implies a neural mechanism that first judges how close a given sensory input, such as the image of a robot, lies to the boundary of what we perceive as a human or non-human agent. This information would then be used by a separate valuation system to determine the agent’s likeability.”

Analyzing Brains

To investigate these mechanisms, the researchers studied brain patterns in 21 healthy individuals during two different tests using functional magnetic resonance imaging – fMRI for short – which measures changes in blood flow within the brain as a proxy for how active different regions are. In the first test, participants were shown a number of images that included humans, artificial humans, android robots, humanoid robots and mechanoid robots, and were asked to rate them in terms of likeability and human-likeness.

Then, in a second test, the participants were asked to decide which of these agents they would trust to select a personal gift for them – a gift that a human would like. Here, the researchers found that participants generally preferred gifts from humans or from the more human-like artificial agents – except those that were closest to the human/non-human boundary, in keeping with the Uncanny Valley phenomenon.

By measuring brain activity during these tasks, the researchers were able to identify which brain regions were involved in creating the sense of the Uncanny Valley. Some brain areas close to the visual cortex, which deciphers visual images, tracked how human-like the images were, changing their activity the more human-like an artificial agent became – in a sense, creating a spectrum of ‘human-likeness’.

The results offer guidance for developing and designing artificial agents that people find more pleasant and acceptable. Dr Grabenhorst explains: “We know that valuation signals in these brain regions can be changed through social experience. This means that our ventromedial cortex responds more favourably to a new social partner once we experience that this artificial agent acts in our interest – for example, by choosing the nicest gift for us.”

“This is the first study to show individual differences in the strength of the Uncanny Valley effect, meaning that some individuals react more sensitively, and others less so, to human-like artificial agents,” says Professor Rosenthal-von der Pütten. “This means there is no one robot design that fits – or scares – all users. In my view, smart robot behaviour is of great importance, because users will abandon robots that do not prove to be smart and useful.”

The research was funded by Wellcome and the German Academic Scholarship Foundation.

Source: Press and Communications