University of Cambridge - Fabian Grabenhorst /taxonomy/people/fabian-grabenhorst en Scientists identify possible source of the ‘Uncanny Valley’ in the brain /research/news/scientists-identify-possible-source-of-the-uncanny-valley-in-the-brain <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/replieeq2face.jpg?itok=MT3mr_Tr" alt="Android" title="Android, Credit: Max Braun" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>As technology improves, so too does our ability to create life-like artificial agents, such as robots and computer graphics – but this can be a double-edged sword.</p>&#13; &#13; <p>“Resembling the human shape or behaviour can be both an advantage and a drawback,” explains Professor Astrid Rosenthal-von der Pütten, Chair for Individual and Technology at RWTH Aachen University. 
“The likeability of an artificial agent increases the more human-like it becomes, but only up to a point: sometimes people seem not to like it when the robot or computer graphic becomes too human-like.”</p>&#13; &#13; <p>This phenomenon was first described in 1970 by robotics professor Masahiro Mori, who coined an expression in Japanese that went on to be translated as the ‘Uncanny Valley’.</p>&#13; &#13; <p>Now, in a series of experiments reported in the <em>Journal of Neuroscience</em>, neuroscientists and psychologists in the UK and Germany have identified mechanisms within the brain that they say help explain how this phenomenon occurs – and may even suggest ways to help developers improve how people respond.</p>&#13; &#13; <p>“For a neuroscientist, the ‘Uncanny Valley’ is an interesting phenomenon,” explains Dr Fabian Grabenhorst, a Sir Henry Dale Fellow and Lecturer in the Department of Physiology, Development and Neuroscience at the University of Cambridge. “It implies a neural mechanism that first judges how close a given sensory input, such as the image of a robot, lies to the boundary of what we perceive as a human or non-human agent. This information would then be used by a separate valuation system to determine the agent’s likeability.”</p>&#13; &#13; <p>To investigate these mechanisms, the researchers studied brain patterns in 21 healthy individuals during two different tests using functional magnetic resonance imaging (fMRI), which measures changes in blood flow within the brain as a proxy for how active different regions are.</p>&#13; &#13; <p>In the first test, participants were shown a number of images that included humans, artificial humans, android robots, humanoid robots and mechanoid robots, and were asked to rate them in terms of likeability and human-likeness.</p>&#13; &#13; <p>Then, in a second test, the participants were asked to decide which of these agents they would trust to select a personal gift for them, a gift that a human would like. 
Here, the researchers found that participants generally preferred gifts from humans or from the more human-like artificial agents – except those that were closest to the human/non-human boundary, in keeping with the Uncanny Valley phenomenon.</p>&#13; &#13; <p>By measuring brain activity during these tasks, the researchers were able to identify which brain regions were involved in creating the sense of the Uncanny Valley. They traced this back to brain circuits that are important in processing and evaluating social cues, such as facial expressions.</p>&#13; &#13; <p>Some of the brain areas close to the visual cortex, which deciphers visual images, tracked how human-like the images were by changing their activity the more human-like an artificial agent became – in a sense, creating a spectrum of ‘human-likeness’.</p>&#13; &#13; <p>Along the midline of the frontal lobe, where the left and right brain hemispheres meet, there is a wall of neural tissue known as the medial prefrontal cortex. In previous studies, the researchers have shown that this brain region contains a generic valuation system that judges all kinds of stimuli; for example, they showed that this brain area signals the reward value of pleasant high-fat milkshakes and also of social stimuli such as pleasant touch.</p>&#13; &#13; <p>In the present study, two distinct parts of the medial prefrontal cortex were important for the Uncanny Valley. 
One part converted the human-likeness signal into a ‘human detection’ signal, with activity in this region over-emphasising the boundary between human and non-human stimuli – reacting most strongly to human agents and much less to artificial agents.</p>&#13; &#13; <p>The second part, the ventromedial prefrontal cortex (VMPFC), integrated this signal with a likeability evaluation to produce a distinct activity pattern that closely matched the Uncanny Valley response.</p>&#13; &#13; <p>“We were surprised to see that the ventromedial prefrontal cortex responded to artificial agents precisely in the manner predicted by the Uncanny Valley hypothesis, with stronger responses to more human-like agents but then showing a dip in activity close to the human/non-human boundary—the characteristic ‘valley’,” says Dr Grabenhorst.</p>&#13; &#13; <p>The same brain areas were active when participants decided whether to accept a gift from a robot, signalling the evaluations that guided participants’ choices. One further region – the amygdala, which is responsible for emotional responses – was particularly active when participants rejected gifts from the human-like, but not human, artificial agents. The amygdala’s ‘rejection signal’ was strongest in participants who were more likely to refuse gifts from artificial agents.</p>&#13; &#13; <p>The results could have implications for the design of more likeable artificial agents. Dr Grabenhorst explains: “We know that valuation signals in these brain regions can be changed through social experience. 
So, if you experience that an artificial agent makes the right choices for you – such as choosing the best gift – then your ventromedial prefrontal cortex might respond more favourably to this new social partner.”</p>&#13; &#13; <p>“This is the first study to show individual differences in the strength of the Uncanny Valley effect, meaning that some individuals react more sensitively, and others less so, to human-like artificial agents,” says Professor Rosenthal-von der Pütten. “This means there is no one robot design that fits—or scares—all users. In my view, smart robot behaviour is of great importance, because users will abandon robots that do not prove to be smart and useful.”</p>&#13; &#13; <p>The research was funded by Wellcome and the German Academic Scholarship Foundation.</p>&#13; &#13; <p><em><strong>Reference</strong><br />&#13; Rosenthal-von der Pütten, AM et al. <a href="https://www.jneurosci.org/content/39/33/6555">Neural Mechanisms for Accepting and Rejecting Artificial Social Partners in the Uncanny Valley.</a> Journal of Neuroscience; 1 July 2019; DOI: 10.1523/JNEUROSCI.2956-18.2019</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Scientists have identified mechanisms in the human brain that could help explain the phenomenon of the ‘Uncanny Valley’ – the unsettling feeling we get from robots and virtual agents that are too human-like. 
They have also shown that some people respond more adversely to human-like agents than others.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For a neuroscientist, the ‘Uncanny Valley’ is an interesting phenomenon</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Fabian Grabenhorst</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/maxbraun/1489103461/" target="_blank">Max Braun</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Android</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div> Mon, 01 Jul 2019 17:00:57 +0000 cjb250 206182 at ‘Mindreading’ neurons simulate decisions of social partners /research/news/mindreading-neurons-simulate-decisions-of-social-partners <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/grabenhorstamygdalafig4fmain-website.gif?itok=Vd_jy4Uz" alt="Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. Colours indicate different nuclei." title="Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. 
Colours indicate different nuclei., Credit: Fabian Grabenhorst" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Researchers at the University of Cambridge identified the previously unknown neuron type, which they say actively and spontaneously simulates mental decision processes when social partners learn from one another.</p>&#13; &#13; <p>The study, published today in <em>Cell</em>, suggests that these newly termed ‘simulation neurons’ – found in the amygdala, a collection of nerve cells in the temporal lobe of the brain – allow animals (and potentially also humans) to reconstruct their social partner’s state of mind and thereby predict their intentions.</p>&#13; &#13; <p>The researchers go on to speculate that if simulation neurons became dysfunctional, this could restrict social cognition, a symptom of autism. By contrast, they suggest overactive neurons could result in exaggerated simulation of what others might be thinking, which may play a role in social anxiety.</p>&#13; &#13; <p>The study’s lead author, Dr Fabian Grabenhorst from the Department of Physiology, Development and Neuroscience, says: “We started out looking for neurons that might be involved in social learning. We were surprised to find that amygdala neurons not only learn the value of objects from social observation but actually use this information to simulate a partner’s decisions.”</p>&#13; &#13; <p>Simulating others’ decisions is a sophisticated cognitive process that is rooted in social learning. By observing a partner’s foraging choices, for instance, we learn which foods are valuable and worth choosing. Such knowledge not only informs our own decisions but also helps us predict the future decisions of our partner.</p>&#13; &#13; <p>Psychologists and philosophers have long suggested that simulation is the mechanism by which humans understand each other’s minds. 
Yet, the neural basis for this complex process has remained unclear. The amygdala is well known for its diverse roles in social behaviour and has been implicated in autism. Until now, however, it was unknown whether amygdala neurons also contribute to advanced social cognition, such as simulating others’ decisions. </p>&#13; &#13; <p>The study recorded activity from individual amygdala neurons as macaque monkeys took part in an observational learning task. Seated facing each other with a touch screen between them, the animals took turns in making choices to obtain rewards. To maximise their fruit juice reward, the animals were required to learn and track the reward probabilities associated with different pictures displayed on the screen.</p>&#13; &#13; <p>The study allowed one animal to observe its partner’s choices so that it could learn the pictures’ reward values. Once the pictures switched between them, the observing animal could make use of this knowledge when it was its turn to choose.</p>&#13; &#13; <p>Surprisingly, the researchers found that when an animal observed its partner, the observer’s amygdala neurons seemed to play out a decision computation. These neurons first compared the reward values of the partner’s choice options before signalling the partner’s likely choice, consistent with a simulated decision process. Importantly, these activity patterns occurred spontaneously, well before the partner’s choices and without any decision requirement for the observer.</p>&#13; &#13; <p>Based on their findings, the scientists created the first computer model of the amygdala’s neural circuits involved in social cognition. By showing how specific types of neurons influence one another, this model suggests that the amygdala contains a ‘decision circuit’ which works out the animal’s own choices and a separate ‘simulation circuit’ which computes a prediction of the social partner’s choice. 
</p>&#13; &#13; <p>Grabenhorst said: “Simulation and decision neurons are closely intermingled within the amygdala. We managed to distinguish between them and their different functions by carefully examining one neuron at a time. This would not have been possible with human brain imaging techniques that measure the averaged activity of large numbers of neurons.”</p>&#13; &#13; <p>“We think that simulation neurons are important building blocks for social cognition — they allow animals to reconstruct their partners’ mental decision processes. Simulation neurons could also constitute simple precursors for the amazing cognitive capacities of humans, such as ‘Theory of Mind’.”</p>&#13; &#13; <p>The scientists suggest that if simulation neurons were dysfunctional or completely absent, this could impoverish social behaviour.</p>&#13; &#13; <p>Grabenhorst says: “If simulation neurons don’t function properly, a person might not be able to relate very well to the mental states of others. We know very little about how specific neuron types contribute to social cognition and to the social challenges faced by individuals with autism. By identifying specific neurons and circuit mechanisms for mental simulation, our study may offer new insights into these conditions.”</p>&#13; &#13; <p><strong><em>Reference:</em></strong></p>&#13; &#13; <p><em>Grabenhorst, F et al. <a href="https://www.cell.com/cell/fulltext/S0092-8674(19)30225-9">Primate amygdala neurons simulate decision processes of social partners</a>. Cell; 11 April 2019; DOI: 10.1016/j.cell.2019.02.042</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Scientists have identified special types of brain cells that may allow us to simulate the decision-making processes of others, thereby reconstructing their state of mind and predicting their intentions. 
Dysfunction in these ‘simulation neurons’ may help explain difficulties with social interactions in conditions such as autism and social anxiety.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Simulation neurons are important building blocks for social cognition</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Fabian Grabenhorst</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Fabian Grabenhorst</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. Colours indicate different nuclei.</div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Animal Research</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><div>The brain is an incredibly complex organ, and while we can study some brain function in tissue culture, computer models and rodents, the study of advanced behaviour (both normal and abnormal) requires a human-like brain. Our only option is therefore to study these processes in non-human primates, such as marmosets and rhesus macaques.</div>&#13; &#13; <div> </div>&#13; &#13; <div>The majority of non-human primates used in biomedical research are either marmosets or rhesus macaques. 
We must justify the use of these species to ourselves, our Animal Welfare and Ethical Review Body (AWERB), and to the Home Office and the Animals in Science Committee, providing proof that there is no alternative. </div>&#13; &#13; <div> </div>&#13; &#13; <div>Our research is aimed at underpinning our knowledge of how the brain functions in healthy individuals and how malfunctions can have potentially serious health implications. In particular, the work concerns how we use information about reward for making crucial decisions and has relevance to issues as widespread as obesity, drug addiction, schizophrenia and Parkinson’s disease. A better understanding of how reward affects our decisions could lead to significant health benefits in the long term. For more information, <a href="/research/research-at-cambridge/animal-research/what-types-of-animal-do-we-use/non-human-primates">click here</a>.</div>&#13; </div></div></div><div class="field field-name-field-slideshow field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/sites/default/files/grabenhorstcellabstractfig.jpg" title="Graphic showing two decision systems in the primate amygdala. Courtesy of Fabian Grabenhorst." class="colorbox" data-colorbox-gallery="" data-cbox-img-attrs="{&quot;title&quot;: &quot;Graphic showing two decision systems in the primate amygdala. Courtesy of Fabian Grabenhorst.&quot;, &quot;alt&quot;: &quot;&quot;}"><img class="cam-scale-with-grid" src="/sites/default/files/styles/slideshow/public/grabenhorstcellabstractfig.jpg?itok=tqJAiABG" width="590" height="288" alt="" title="Graphic showing two decision systems in the primate amygdala. Courtesy of Fabian Grabenhorst." /></a></div><div class="field-item odd"><a href="/sites/default/files/grabenhorstamygdalafig4f.jpg" title="Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. 
Colours indicate different nuclei. Courtesy of Fabian Grabenhorst." class="colorbox" data-colorbox-gallery="" data-cbox-img-attrs="{&quot;title&quot;: &quot;Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. Colours indicate different nuclei. Courtesy of Fabian Grabenhorst.&quot;, &quot;alt&quot;: &quot;&quot;}"><img class="cam-scale-with-grid" src="/sites/default/files/styles/slideshow/public/grabenhorstamygdalafig4f.jpg?itok=TvQ4iO1a" width="590" height="288" alt="" title="Location of neurons predicting partner’s choices superimposed on a stained section through one animal’s amygdala. Colours indicate different nuclei. Courtesy of Fabian Grabenhorst." /></a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Fri, 12 Apr 2019 07:00:00 +0000 Anonymous 204702 at