University of Cambridge - Faculty of Philosophy /taxonomy/affiliations/faculty-of-philosophy News from the Faculty of Philosophy. en Cambridge Festival celebrates pioneering women for International Women’s Day /stories/cambridge-festival-iwd-2025 <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>For International Women’s Day (8 March), the Cambridge Festival (19 March – 4 April) is celebrating some of the remarkable contributions of women across diverse fields. From philosophy and music to AI and cosmology, the festival will highlight the pioneering work of women who have shaped our understanding of the world in profound ways.</p></div></div></div> Fri, 07 Mar 2025 10:28:52 +0000 zs332 248752 at Interfering in big decisions friends and family take could violate a crucial moral right, philosopher argues /research/news/interfering-in-big-decisions-friends-and-family-take-could-violate-a-crucial-moral-right-philosopher <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/man-and-woman-speaking-photo-by-charlesdeluvio-on-unsplash-885x428.jpg?itok=mT3-x0-B" alt="Two people speaking, sat at a table" title="Two people speaking, sat at a table, Credit: Charlesdeluvio on Unsplash" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>If you’ve told an adult friend or family member that they should not take a job, not date someone, not try skydiving or not move abroad, you may have violated a crucial moral right to ‘revelatory autonomy’ and ‘self-authorship’, according to a philosopher at Christ’s College, Cambridge.</p>&#13; &#13; <p>Dr Farbod Akhlaghi’s study, published in the journal <em>Analysis</em>, is the first of its kind to suggest that we have a moral right to ‘revelatory autonomy’: that is, the right to discover for ourselves who we’ll become as a result of making ‘transformative choices’ – choices to have experiences that teach us what that experience will be like for us whilst also changing our core preferences, values and desires.</p>&#13; &#13; <p>Dr Akhlaghi says: “The ability to see that the person we’ve become is the product of decisions that we made for ourselves is very important.</p>&#13; &#13; <p>“I’m not telling people what to do. I’m just highlighting part of what is morally at stake in these very common interactions and trying to develop a framework for us to understand them. I hope some may find this helpful, as these will always be difficult moments for all of us.”</p>&#13; &#13; <p>Traditionally, philosophers interested in ‘transformative experiences’ have focused on the decision-maker, not on the people who are in a position to influence that person’s choices.
But Dr Akhlaghi thinks that these neglected interactions present ‘an urgent ethical challenge’:</p>&#13; &#13; <p>“There are lots of different reasons why we might seek to intervene – some selfish, others well-meaning – but whatever our motivation, we can cause significant harm, including to the people we love most.”</p>&#13; &#13; <p>While Akhlaghi accepts that advice can be offered without crossing the moral line, he warns that it is all too easy to slip into various forms of interference, such as forcing, coercing, manipulating or even ‘rationally persuading’ someone away from a transformative choice, in ways that may violate their right to revelatory autonomy.</p>&#13; &#13; <p>Akhlaghi says: “Rational persuasion is probably the most common form of interference. Giving, when asked, factual information about a choice that you have knowledge about and the other person does not, can be justified. But while rational persuasion respects someone’s ability to reason, even this form of engagement can involve disrespecting their autonomous self-authorship.”</p>&#13; &#13; <p>For example, Akhlaghi continues: “Offering reasons, arguments or evidence as if one is in a privileged position with respect to what the other person’s experience would be like for them disrespects their moral right to revelatory autonomy.”</p>&#13; &#13; <p>Initially inspired to consider this area of moral philosophy by personal experiences, Dr Akhlaghi examines and rejects a number of other conditions under which it could be argued that trying to prevent someone from making transformative choices is morally justified.</p>&#13; &#13; <p><strong>For example</strong></p>&#13; &#13; <p>Dissuading someone from becoming a parent because you think parenthood would make their life worse is problematic because becoming a parent is a positive experience for some and not for others, and no one can know that outcome in advance, even if the person doing the dissuading has experienced being a parent themselves.</p>&#13; &#13; <p>A different example in the study relates to dissuading someone from making a career change that involves a big pay cut because you think that they would struggle to afford their expensive tastes. This is just as problematic, Akhlaghi says, because:</p>&#13; &#13; <p>“We can only know what the future person’s interests are and whether their present interests will be fulfilled after a transformative choice has been made.”</p>&#13; &#13; <p>“The person who changes job might manage to afford their expensive tastes and we don’t even know if that future person would still have these tastes. This highlights another problem – whose interests matter morally when trying to justify interfering: those of the present or the future person?”</p>&#13; &#13; <p><strong>Is it ever right to interfere?</strong></p>&#13; &#13; <p>“It is only permissible to interfere to try to prevent a transformative choice,” Akhlaghi argues, “if someone’s right to revelatory autonomy is outweighed by competing moral considerations.”</p>&#13; &#13; <p>A would-be killer’s right to revelatory autonomy is, for instance, plausibly outweighed by the wrongness of killing others solely to discover who they would become by doing so.
Equally, protecting a friend from gratuitous self-mutilation would plausibly outweigh their right to autonomously discover what it would be like to harm themselves in this way.</p>&#13; &#13; <p>Akhlaghi suggests that the more likely it is that a choice will affect someone’s ‘core preferences, identity and values’, the stronger the moral reasons would need to be to justify interfering in their decision. For instance, interfering in someone’s decision to go to university or not would require far stronger moral reasons than interfering in their choice of whether to eat a cheeseburger.</p>&#13; &#13; <p>Finally, Akhlaghi clarifies that his study concerns voluntary choices to have ‘transformative experiences’. Some such experiences are instead either the unintended consequences of something we did, or ones we are forced into – as children might be by a divorce, for example. These raise different but related problems he hopes to explore in future work.</p>&#13; &#13; <p><strong>Reference</strong></p>&#13; &#13; <p><em>Farbod Akhlaghi, ‘<a href="https://academic.oup.com/analysis/advance-article/doi/10.1093/analys/anac084/6966040">Transformative experience and the right to revelatory autonomy</a>’, Analysis (2022), DOI: 10.1093/analys/anac084</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>We have a moral duty to allow others to make ‘transformative choices’ such as changing careers, migrating and having children, a new study argues. This duty can be outweighed by competing moral considerations, such as preventing murder, but in many cases we should be far more cautious about interfering.</p>&#13; </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The ability to see that the person we’ve become is the product of decisions that we made for ourselves is very important</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Farbod Akhlaghi</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Charlesdeluvio on Unsplash</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Two people speaking, sat at a table</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Wed, 25 Jan 2023 07:30:00 +0000 ta385 236421 at The philosopher who wants us to think deeply about ordinary things /this-cambridge-life/the-philosopher-who-wants-us-to-think-deeply-about-ordinary-things <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Nikhil Krishnan, winner of a 2021 Pilkington Prize for outstanding teaching, says that what he loves about teaching is what he loves about philosophy: you can’t know in advance where it’s going to lead. Outside the lecture hall, he’s unravelling how philosophy came to be what it is today.</p></div></div></div> Wed, 15 Dec 2021 12:56:46 +0000 cg605 228731 at Cambridge academics elected to British Academy /news/cambridge-academics-elected-to-british-academy-0 <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/baf.jpg?itok=lUvgJawX" alt="Clockwise: Professor Holton, Professor Franklin, Professor Lieu, Professor Tsimpli, Professor Bell." title="Clockwise: Professor Holton, Professor Franklin, Professor Lieu, Professor Tsimpli, Professor Bell., Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This year a total of 84 Fellows have been elected to the Fellowship, of which five are Cambridge academics:</p> <ul> <li>Professor Duncan Bell, Professor of Political Thought and International Relations, Fellow of Christ’s College</li> <li>Professor Sarah Franklin, Chair of Sociology, Fellow of Christ's College</li> <li>Professor Richard Holton, Professor of Philosophy, Fellow of Peterhouse</li> <li>Professor Samuel Lieu, President of the International Union of Academies, Bye Fellow of Robinson College</li> <li>Professor Ianthi Tsimpli, Professor of English and Applied Linguistics, Fellow of Fitzwilliam College</li> </ul> <p>Founded in 1902, the British Academy is the UK’s national academy for the humanities and social sciences. It is a Fellowship of over 1,400 of the leading minds in these subjects from the UK and overseas.
The Academy is also a funding body for research, nationally and internationally, and a forum for debate and engagement.</p> <p>Welcoming the Fellows, the new President of the British Academy, Professor Julia Black, said:</p> <p>“As the new President of the British Academy, it gives me great pleasure to welcome this new cohort of Fellows, who are as impressive as ever and remind us of the rich and diverse scholarship and research undertaken within the SHAPE disciplines – the social sciences, humanities and the arts. I am very much looking forward to working with them on our shared interests.</p> <p>“The need for SHAPE subjects has never been greater. As Britain recovers from the pandemic and seeks to build back better, the insights from our diverse disciplines will be vital to ensure the health, wellbeing and prosperity of the UK and will continue to provide the cultural and societal enrichment that has sustained us over the last eighteen months. Our new Fellows embody the value of their subjects and I congratulate them warmly for their achievement.”</p> <p>Professor Bell works on the history of modern British and American political thought, focusing mainly on visions of empire and international politics in the nineteenth and twentieth centuries. “I am delighted to be joining such a distinguished institution,” said Bell. “The humanities and social sciences are vital in both understanding and changing the world, and the British Academy does great work in helping to sustain them.”</p> <p>Professor Franklin’s research addresses the history and culture of UK IVF, the IVF-stem cell interface, cloning, embryo research, and changing understandings of kinship, biology, and technology. “It is an honour and a privilege to be elected to the Academy and I’m delighted by this news,” said Franklin.</p> <p>Professor Holton’s current work focuses mainly on Moral Psychology, Ethics, Philosophy of Law, and Philosophy of Language. He said: “Obviously I’m delighted to be elected, and somewhat daunted! I love the broad range of the British Academy. It does great work supporting the humanities and social sciences, both in the UK and internationally, and I very much look forward to being part of that at a time when these disciplines face many challenges.”</p> <p>Professor Lieu is the current President of the International Union of Academies (Union Académique Internationale) – an organisation founded after the First World War to promote peace through research collaboration among national academies. Professor Lieu’s field is the history of pre-Islamic Central Asia, especially the history of religions transmitted along the Silk Road (e.g., Christianity and Manichaeism). He said: “I feel greatly honoured and privileged to be elected Fellow of the British Academy, an organisation which has generously supported my subject area, especially in terms of project and publication grants.”</p> <p>Professor Tsimpli works on multilingualism, first and second language development in children and adults, language impairment, attrition and processing, and the effects of socioeconomic challenges on multilingual children’s language, cognition and school skills.
She said: “I’m deeply honoured to have been elected and look forward to being part of this community of researchers.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Five Cambridge academics have been elected to the Fellowship of the British Academy in recognition of their contribution to the humanities and social sciences.</p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The need for SHAPE subjects has never been greater.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Professor Julia Black, President of the British Academy</div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Clockwise: Professor Holton, Professor Franklin, Professor Lieu, Professor Tsimpli, Professor Bell.</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 23 Jul 2021 09:17:55 +0000 cg605 225591 at Philosopher’s thumbs-down to social media ‘likes’ gets award thumbs-up from Royal Institute /research/news/philosophers-thumbs-down-to-social-media-likes-gets-award-thumbs-up-from-royal-institute <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/lucymcdonaldimage2creditnordincaticstjohnscollegecropmainweb.jpg?itok=OfNDrylN" alt="Dr Lucy McDonald at St John’s College, Cambridge" title="Dr Lucy McDonald at St John’s College, Cambridge, Credit: Nordin Ćatić, St John’s College" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>‘Please Like This Paper’, published on 12 May 2021 in the Institute’s journal <a href="https://www.cambridge.org/core/journals/philosophy"><em>Philosophy</em></a>, argues that while ‘like’ functions help social media users feel they are being heard, they might actually be making us worse listeners and readers. It also suggests that ‘likes’ – and ‘like tallies’ in particular – play a central role in fostering political polarisation.</p>&#13; &#13; <p>The essay’s author, <a href="https://www.joh.cam.ac.uk/fellow-profile/2433">Dr Lucy McDonald</a>, a Junior Research Fellow in Philosophy at St John’s College, Cambridge, says of liking: “It is a form of pseudo-engagement which absolves us of the guilt of not responding to others’ posts but creates the bare minimum of human connection.”</p>&#13; &#13; <p>Contrary to some recent legal judgements, McDonald argues that liking defamatory content “should not necessarily count as endorsement of that content”. An active social media user herself, McDonald accepts that ‘like’ tallies “give us information we previously lacked, but this information seems to have had a number of corrosive effects on internet discourse. These effects seem worrying enough to offset any particular benefits ‘like’ data may offer … there may be some things we are better off not knowing.”</p>&#13; &#13; <p>McDonald argues that “we should not think of accrued likes as a reliable measure of the esteem in which a person is held.” Instead, “the ‘like’ tally both institutes and measures a digital form of what the French sociologist Pierre Bourdieu called ‘social capital’, or ‘the product of accumulated social labour’”. ‘Like’ tallies have, McDonald points out, “made social capital both more visible and more measurable online”, with a number of harmful effects.</p>&#13; &#13; <p>“If our audience has thousands of posts to sift through, we need to say something dramatic to get their (and the algorithms’) attention.
Our desire for engagement with others, and the social capital that comes with it, can make us care less about whether the claims we make and share online are true, as well as whether the content we share has been deliberately designed by others to trigger our biases and vulnerabilities, or to serve some nefarious political goal. This makes social media users more vulnerable to manipulation and can lead to the dissemination of harmful ideologies.</p>&#13; &#13; <p>“This also hampers meaningful and productive political deliberation online. If we are not interested in getting at the truth, but only in getting ‘likes’, and if we know that others take this approach, too, we will not be interested in exchanging information, reasons, and arguments with one another, but rather in fighting it out for the most exciting online content.</p>&#13; &#13; <p>“In its early days, the internet was heralded for its potential to improve democracy. Many thought the internet could bring about what Jürgen Habermas calls the ‘ideal speech situation’. But the ‘like’ function has revitalised the age-old worry that vivid rhetoric and emotional appeals will win out over rational deliberation in democracies. It has done this by quantifying social capital and making it ever-present in online communication, thereby making demagoguery a more salient and tempting prospect than ever before.</p>&#13; &#13; <p>“The ‘like’ function plays an instrumental role in fostering political polarisation because it reminds us constantly of our online social capital, and it strengthens the cognitive and social incentives for producing content that accrues many ‘likes’ – many will therefore adjust their circles (consciously or subconsciously) in order to guarantee a steady stream of ‘likes’.”</p>&#13; &#13; <p>McDonald welcomes some social media users taking active steps to reduce the impact of ‘like’ tallies by installing extensions like the Facebook Demetricator, which hides all metrics, and some media platforms experimenting with removing tallies from users’ newsfeeds, “even if they risk dramatically disrupting the distribution and measurement of online social capital.”</p>&#13; &#13; <p>McDonald proposes that ‘likes’ are best theorised as an “essentially phatic act”, as characterised by the anthropologist Bronisław Malinowski in the 1920s, because we use them to build social bonds and bring people together. In this sense, ‘likes’ are similar to gestures like smiles or nods.</p>&#13; &#13; <p>Many people, McDonald observes, ‘like’ friends’ posts “routinely and out of a sense of obligation”, without really reading or engaging with them. “We expect our friends to listen to us, not to ignore us, and so ‘liking’ posts helps reassure people that they have an audience, which is still listening and engaged.”</p>&#13; &#13; <p>McDonald points out that despite how widespread social media use is, this behaviour is rarely discussed in contemporary philosophy of language, which “still tends to focus on face-to-face, one-on-one spoken interaction.” She also argues that ‘likes’ “transmit many different kinds of information; their ‘content’ is not stable, and they have no recognisable, conventional ‘meaning’.”</p>&#13; &#13; <p>“This tiny act could seem inconsequential or frivolous. After all, to ‘like’ a post is simply to press a button. Yet it is of huge social significance. With ‘likes’ comes considerable power.”</p>
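<p><em>The incentive structure McDonald describes can be put in miniature with a toy sketch in Python – the posts and tallies below are invented, and no real platform’s ranking code is implied. Once a feed is ordered purely by accumulated ‘likes’, the most dramatic content reliably surfaces first, and producing it becomes the rational strategy for anyone seeking an audience.</em></p>&#13; &#13; <pre><code># A toy feed ranked purely by like tallies - an illustration of the
# 'gamified' incentive the essay identifies, not any platform's algorithm.
posts = [
    {"text": "Carefully qualified argument", "likes": 4},
    {"text": "Holiday photo", "likes": 31},
    {"text": "Outrage-bait hot take", "likes": 212},
]

# Order the feed by accumulated likes, highest first.
feed = sorted(posts, key=lambda p: p["likes"], reverse=True)

for post in feed:
    print(post["likes"], post["text"])  # the hot take wins the audience
</code></pre>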
&#13; &#13; <p><em>Philosophy</em> journal’s editors Professor Maria Alvarez and Professor Bill Brewer said: “The essay is striking for its successful combination of philosophical investigation and rich and varied empirical detail.”</p>&#13; &#13; <p><a href="https://royalinstitutephilosophy.org/">The Royal Institute of Philosophy</a>’s 2021 essay prize was jointly awarded to Nikhil Venkatesh (UCL) for ‘Surveillance Capitalism: a Marx-inspired Account’.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The Royal Institute of Philosophy has jointly awarded its 2021 essay prize to a University of Cambridge researcher for the first philosophical analysis of ‘liking’ on social media. The essay, which focuses on Facebook, warns that ‘likes’ encourage communicative laziness while ‘like tallies’ fuel fake news, ‘gamify sociality’ and play to our psychological weaknesses.</p>&#13; </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The ‘like’ function plays an instrumental role in fostering political polarisation because it reminds us constantly of our online social capital</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Lucy McDonald</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Nordin Ćatić, St John’s College</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Lucy McDonald at St John’s College, Cambridge</div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Read on</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>Find out more about social media’s effects on our daily lives, <a href="/stories/socialmedia">including tips for healthy social media use, here</a>.</p>&#13; </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Wed, 12 May 2021 07:00:00 +0000 ta385 223971 at Living with artificial intelligence: how do we get it right? /research/discussion/living-with-artificial-intelligence-how-do-we-get-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/gic-on-stocksy_0.jpg?itok=JEpbgoWy" alt="" title="Credit: GIC on Stocksy" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news; what’s next?</p>&#13; &#13; <p>True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.</p>&#13; &#13; <p>If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.</p>&#13; &#13; <p>Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?</p>&#13; &#13; <p>On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, and the more we ask it to do for us, the more important it will be to specify its goals with great care.
Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.</p>&#13; &#13; <p><a href="/system/files/issue_35_research_horizons_new.pdf"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a></p>&#13; &#13; <p>So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that, by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.</p>&#13; &#13; <p>For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.</p>&#13; &#13; <p>However, there are two big problems with this utopian vision. One is how we get the machines started on the journey; the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.</p>&#13; &#13; <p>The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.</p>&#13; &#13; <p>Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that, as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.</p>&#13; &#13; <p>As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.</p>&#13; &#13; <p>Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?</p>&#13; &#13; <p>These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.</p>
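<p><em>The specification worry can also be made concrete. The sketch below (Python; every name and number is invented for illustration, not drawn from any real system) shows an optimiser faithfully maximising the proxy goal it was given while scoring disastrously on the goal its designers actually had in mind – the Midas problem in a dozen lines.</em></p>&#13; &#13; <pre><code># A toy version of the 'getting started' problem: the agent optimises
# exactly what we wrote down, not what we meant. Hypothetical throughout.
from dataclasses import dataclass

@dataclass
class Outcome:
    gold: int        # what we literally asked to maximise
    needs_met: bool  # what we forgot to write down

def proxy_reward(o: Outcome) -> float:
    return o.gold    # the stated goal: more gold is always better

def intended_reward(o: Outcome) -> float:
    # The unstated goal: gold only counts if basic needs survive.
    return o.gold if o.needs_met else -1000.0

# Two available policies, Midas-style.
outcomes = {
    "transmute everything": Outcome(gold=100, needs_met=False),
    "transmute sparingly": Outcome(gold=10, needs_met=True),
}

# A greedy optimiser of the proxy picks 'transmute everything'...
chosen = max(outcomes, key=lambda k: proxy_reward(outcomes[k]))
print(chosen)
# ...which is catastrophic by the standard we actually cared about.
print(intended_reward(outcomes[chosen]))  # -1000.0
</code></pre>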
&#13; &#13; <p>This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listener to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”</p>&#13; &#13; <p>We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.</p>&#13; &#13; <p>But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.</p>&#13; &#13; <p><em>Inset image: read more about our AI research in the University’s research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></p>&#13; &#13; <p><em>Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a>, where they work on '<a href="https://www.lcfi.ac.uk/projects/ai-agents-and-persons/">Agents and persons</a>'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values?
Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.</p>&#13; </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For safety’s sake, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price and Karina Vold</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">GIC on Stocksy</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Wed, 28 Feb 2018 11:00:51 +0000 Anonymous 195752 at Brain, body and mind: understanding consciousness /research/features/brain-body-and-mind-understanding-consciousness <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/160223brain-signaturescredit-srivas-chennu.jpg?itok=jXsZt8XY" alt="Electrical brain &#039;signatures&#039;. The patient to the left is in a vegetative state; the patient in the middle is also in a vegetative state but their brain appears as conscious as the brain of the healthy individual at the right." title="Electrical brain &#039;signatures&#039;. The patient to the left is in a vegetative state; the patient in the middle is also in a vegetative state but their brain appears as conscious as the brain of the healthy individual at the right., Credit: Srivas Chennu" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In 10 minutes, Srivas Chennu can work out what’s going on inside your head.</p>&#13; &#13; <p>With the help of an electrode-studded hairnet wired up to a box that measures patterns of electrical activity, he can monitor the firing of millions of neurons deep within the brain.
A few minutes later, wheeling his trolley-held device away, he has enough information to tell how conscious you really are.</p>&#13; &#13; <p>What Chennu is looking for with his electroencephalogram (EEG) is the brain’s electrical ‘signature’. At any one moment in the body’s most complex organ, networks of neurons are firing up and creating ‘brain waves’ of electrical activity that can be detected through the scalp net.</p>&#13; &#13; <p>This isn’t new technology – the first animal EEG was published a century ago – but computational neuroscientist Chennu has come up with a way of combining its output with a branch of maths called graph theory to measure the level of a person’s consciousness. What’s more, he’s developing the technology as a bedside device for doctors to diagnose patients suffering from consciousness disorders (such as a vegetative state caused by injury or stroke), to work out the best course of action and to support family counselling.</p>&#13; &#13; <p>“Being conscious not only means being awake, but also being able to notice and experience,” he explains. “When someone is conscious, there are patterns of synchronised neural activity arcing across the brain that can be detected using EEG and quantified with our software.”</p>&#13; &#13; <p>So for a healthy brain, the brain’s signature might look like a raging scrawl of lines sweeping back and forth, as integrated groups of neurons perceive, process, understand and sort information. When we sleep, this diminishes to a squiggle of the faintest strokes as we lose consciousness, flaring occasionally if we dream.</p>&#13; &#13; <p>“Understanding how consciousness arises from neural interactions is an elusive and fascinating question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question – it takes on a very real significance.</p>&#13; &#13; <p>“The patient might be awake, but to what extent are they aware? Can they hear, see, feel? And if they are aware, does their level of awareness equate to their long-term prognosis?”</p>&#13; &#13; <p>Chennu points to charts showing the brain signatures of two vegetative patients. On one chart, just a few lines appear above the skull. In the other, the lines are so many they resemble, as Chennu describes, a multi-coloured mohican, almost indistinguishable from the signature one would see from a healthy person.</p>&#13; &#13; <p>Did either of the patients wake up? “Yes, the second patient did, a year after this trace was taken. The point is, if you think that a patient will wake up, what would you do differently as a clinician, or as a family member?”</p>&#13; &#13; <p>The research is based on the finding that a patient in a vegetative state could respond to yes or no questions, as measured by distinct patterns of brain activity using functional magnetic resonance imaging. It was discovered by Chennu’s colleagues in the Department of Clinical Neurosciences and the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU), led by Dr Adrian Owen.</p>&#13; &#13; <p>In 2011, the group found the same attention to commands could be measured using EEG – a less expensive and more widely available technology. Three years later, Chennu and Dr Tristan Bekinschtein (then at the CBSU, now in the Department of Psychology) showed that their mathematical analysis of the EEG outputs was enough to measure the ambient amount of connectivity in a patient’s brain.</p>
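<p><em>In outline – and only in outline, since this is not the team’s published pipeline – the graph-theoretic idea can be sketched in a few lines of Python. Everything below, from the random stand-in data to the threshold, is an illustrative assumption: channels whose signals co-vary strongly are linked as edges, and the density of the resulting network serves as a crude index of how integrated the recorded activity is.</em></p>&#13; &#13; <pre><code># A minimal sketch of the idea behind graph-based EEG analysis (not
# Chennu's actual method): link channels whose signals co-vary strongly,
# then summarise how densely connected the resulting network is.
import numpy as np

rng = np.random.default_rng(seed=0)
n_channels, n_samples = 32, 1000
eeg = rng.standard_normal((n_channels, n_samples))  # stand-in for real recordings

# 1. Pairwise coupling between channels; plain correlation here, though
#    published work typically uses spectral measures on band-passed data.
coupling = np.abs(np.corrcoef(eeg))
np.fill_diagonal(coupling, 0.0)

# 2. Keep only the strongest couplings as the edges of a graph.
adjacency = coupling > 0.1  # threshold chosen arbitrarily for this sketch

# 3. Summarise the graph: mean degree as a simple connectivity index.
mean_degree = adjacency.sum(axis=1).mean()
print(f"mean degree: {mean_degree:.1f} of {n_channels - 1} possible edges")
</code></pre>&#13; &#13; <p><em>On a measure like this, a richly interconnected recording – the ‘multi-coloured mohican’ in Chennu’s charts – yields a denser graph than the sparse trace of a deeply unconscious brain.</em></p>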
&#13; &#13; <p>Chennu hopes that the machine will fill a technology gap: “Misdiagnosis of true levels of consciousness in vegetative patients continues to be around 40% and depends on behavioural examination. In part this is because there is no gold standard for the assessment of a patient’s awareness at the bedside.”</p>&#13; &#13; <p>With funding from the Evelyn Trust, he will assess and follow the treatment and rehabilitation trajectory of 50 patients over a three-year period. This will be the first time that a study has linked diagnosis, treatment and outcome to regular real-time assessment of the activity of a patient’s brain.</p>&#13; &#13; <p>Meanwhile, he is continuing to develop the medical device with industry as part of the National Institute for Health Research Healthcare Technology Co-operative for Brain Injury, which is hosted within the Department of Clinical Neurosciences.</p>&#13; &#13; <p>“Medical advances mean that we are identifying subtypes of brain injury and moving away from ‘one size fits all’ to more-targeted treatment specific for an individual’s needs,” adds Chennu, who is also funded by the James S. McDonnell Foundation and works as part of a team led by Professors John Pickard and David Menon.</p>&#13; &#13; <p>Intriguingly, the device could even offer a channel of communication, as Chennu speculates: “The question that fascinates us is what type of consciousness do patients have? Perhaps we can create systems to translate neural activity into commands for simple communication – interfaces that could provide a basic but reliable communication channel from the ‘in-between place’ in which some patients exist.</p>&#13; &#13; <p>“Moreover, we think that the measurement of brain networks will provide clinically useful information that could help with therapeutics for a larger majority of patients, irrespective of whether they are able to demonstrate hidden consciousness.”</p>&#13; &#13; <p><em>How conscious is my dog? Can robots become conscious? Are people in a vegetative state conscious? Don’t miss philosopher Professor Tim Crane and neuroscientist Dr Srivas Chennu at the <a href="https://www.festival.cam.ac.uk/">Cambridge Science Festival</a>, where they will look into our minds and wrestle with the meaning of what it is to be conscious. ‘Brain, body and mind: new directions in the neuroscience and philosophy of consciousness’, the Research Horizons Public Lecture, will be on Wednesday 16 March 2016, 8pm–9pm, Mill Lane Lecture Rooms, Mill Lane, Cambridge. <a href="https://www.festival.cam.ac.uk/events/brain-body-and-mind-new-directions-neuroscience-and-philosophy-consciousness">Pre-booking required</a>.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A bedside device that measures ‘brain signatures’ could help diagnose patients who have consciousness disorders – such as a vegetative state – to work out the best course of treatment and to support family counselling.
</p>&#13; </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The patient might be awake, but to what extent are they aware? Can they hear, see, feel? And if they are aware, does their level of awareness equate to their long-term prognosis?</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Srivas Chennu</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Srivas Chennu</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Electrical brain &#039;signatures&#039;. The patient to the left is in a vegetative state; the patient in the middle is also in a vegetative state, but their brain appears as conscious as the brain of the healthy individual at the right.</div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">New directions in the study of the mind</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><strong>We know a great deal about the brain, but what does it actually mean to be conscious, asks a new research <a href="https://newdirectionsproject.com/">programme</a> in the Faculty of Philosophy.</strong></p>&#13; &#13; <p>In what way are newborn babies, or animals, conscious? Why do some experiences become part of one’s consciousness yet others do not?</p>&#13; &#13; <p>“It’s sometimes assumed that it’s obvious what consciousness is, and the only question is how it is embodied in the brain,” says Professor Tim Crane. “But many people now recognise that it’s not clear what it means to say that something has a mind, or is capable of thought or conscious experience. My view is that there are lots of assumptions that are being made in order to get to that conclusion and not all of the assumptions are correct.”</p>&#13; &#13; <p>Crane leads a new research initiative in the Faculty of Philosophy, supported by the John Templeton Foundation, that aims to tackle the broad question of the essence of the mind. And to do this the team are moving beyond the reductionist view that everything can be explained in terms of the nuts and bolts of neuroscience.</p>&#13; &#13; <p>“That doesn’t mean we are interested in proving the existence of the immortal soul, or defending any religious doctrine – we are interested in the idea that the brain’s-eye view isn’t everything when it comes to understanding the mind.</p>&#13; &#13; <p>“The nervous system clearly provides the mechanism for thought and consciousness, but learning about it doesn’t tell us everything we need to know about phenomena like the emotion of parental love, or ambition or desire.
The mere fact that something goes on in your brain when you think does not explain what thinking essentially is.”</p>&#13; &#13; <p>The team in Cambridge are also distributing funds for smaller projects elsewhere in the world, each of which is tackling similar questions of consciousness in philosophy, neuroscience and psychology.</p>&#13; &#13; <p>“Collectively we want to recognise ‘the reality of the psychological’ without saying that it’s really just brain chemicals,” adds Crane. “It’s important to face up to the fact that we are not just our neurons.”</p>&#13; </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="http://www.newdirectionsproject.com">New Directions in the Study of the Mind</a></div></div></div> Tue, 23 Feb 2016 10:27:50 +0000 lw355 168072 at The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity /research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/9068870200cfc82be178o.jpg?itok=DrGeEJbQ" alt="Supercomputer" title="Supercomputer, Credit: Sam Churchill" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Human-level intelligence is familiar in biological “hardware” – it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.</p>&#13; &#13; <p>While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”.
Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”</p>&#13; &#13; <p>Now, thanks to an unprecedented £10 million grant from the <a href="https://www.leverhulme.ac.uk/">Leverhulme Trust</a>, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.</p>&#13; &#13; <p>The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.</p>&#13; &#13; <p>Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad.”</p>&#13; &#13; <p>The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (<a href="https://www.cser.ac.uk/">CSER</a>), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future, including climate change, disease, warfare and technological revolutions.</p>&#13; &#13; <p>Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”</p>&#13; &#13; <p>The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (<a href="https://www.crassh.cam.ac.uk/">CRASSH</a>). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”</p>&#13; &#13; <p>Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said:</p>&#13; &#13; <p>“The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks – from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity.
The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”</p>&#13; &#13; <p>The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The University of Cambridge is launching a new research centre, thanks to a £10 million grant from the Leverhulme Trust, to explore the opportunities and challenges to humanity from the development of artificial intelligence.</p>&#13; </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/samchurchill/9068870200" target="_blank">Sam Churchill</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Supercomputer</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Thu, 03 Dec 2015 09:27:58 +0000 fpjl2 163582 at