University of Cambridge - Huw Price /taxonomy/people/huw-price en Robots can go all the way to Mars, but they can’t pick up the groceries /research/features/robots-can-go-all-the-way-to-mars-but-they-cant-pick-up-the-groceries <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/crop_0.jpg?itok=0WWVDq1E" alt="Puppy, a running robot developed by Fumiya Iida’s team" title="Puppy, a running robot developed by Fumiya Iida’s team, Credit: None" /></div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>In the popular imagination, robots have been portrayed alternately as friendly companions or an existential threat. But while robots are becoming commonplace in many industries, they are neither C-3PO nor the Terminator. Cambridge researchers are studying the interaction between robots and humans – and teaching them how to do the very difficult things that we find easy. 
<a href="/stories/robots-and-humans">Click here</a> to find out more.</p>&#13; </p></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Puppy, a running robot developed by Fumiya Iida’s team</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 21 Dec 2018 09:23:54 +0000 sc604 195172 at Living with artificial intelligence: how do we get it right? /research/discussion/living-with-artificial-intelligence-how-do-we-get-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/gic-on-stocksy_0.jpg?itok=JEpbgoWy" alt="" title="Credit: GIC on Stocksy" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? 
That’s yesterday’s news; what’s next?</p>&#13; &#13; <p>True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.</p>&#13; &#13; <p>If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.</p>&#13; &#13; <p>Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?</p>&#13; &#13; <p>On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.</p>&#13; &#13; <p><a href="/system/files/issue_35_research_horizons_new.pdf"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a></p>&#13; &#13; <p>So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. 
One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.</p>&#13; &#13; <p>For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.</p>&#13; &#13; <p>However, there are two big problems with this utopian vision. One is how we get the machines started on the journey; the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.</p>&#13; &#13; <p>The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.</p>&#13; &#13; <p>Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. 
It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.</p>&#13; &#13; <p>As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.</p>&#13; &#13; <p>Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?</p>&#13; &#13; <p>These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.</p>&#13; &#13; <p>This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. 
He urged his listeners to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”</p>&#13; &#13; <p>We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.</p>&#13; &#13; <p>But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.</p>&#13; &#13; <p><em>Inset image: read more about our AI research in the University's research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></p>&#13; &#13; <p><em>Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a>, where they work on '<a href="https://www.lcfi.ac.uk/projects/ai-agents-and-persons/">Agents and persons</a>'. 
This theme explores the nature and future of AI agency and personhood, and their impact on our sense of what it means to be a person.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For safety’s sake, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price and Karina Vold</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">GIC on Stocksy</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Wed, 28 Feb 2018 11:00:51 +0000 Anonymous 195752 at Science fiction vs science fact: World’s leading AI experts come to Cambridge /research/news/science-fiction-vs-science-fact-worlds-leading-ai-experts-come-to-cambridge <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aibrain.jpg?itok=RYs7tHok" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽two-day conference (July 13-14) at Jesus College is the first major event held by the Leverhulme Centre for the Future of Intelligence (CFI) since its globally-publicised <a href="/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of">launch by Stephen Hawking</a> and other AI luminaries in October 2016.</p>&#13; &#13; <p>Bringing together policy makers and philosophers, as well as leading figures from science and technology, speakers include Astronomer Royal Martin Rees, Matt Hancock (Minister for Digital and Culture), Baroness Onora O'Neill and Francesca Rossi (IBM).</p>&#13; &#13; <p>Dr Stephen Cave, Executive Director of CFI, said: “Rarely has a technology arrived with such a rich history of myth, 
storytelling and hype as AI. The first day of our conference will ask how films, literature and the arts generally have shaped our expectations, fears and even the technology itself.</p>&#13; &#13; <p>“Meanwhile, the second day will ask how and when we can trust the intelligent machines on which we increasingly depend – and whether those machines are changing how we trust each other."</p>&#13; &#13; <p><a href="https://www.lcfi.ac.uk/media/uploads/files/CFI_2017_programme.pdf">Programme highlights</a> of the conference include:</p>&#13; &#13; <ul><li>Sci-Fi Dreams: How visions of the future are shaping development of intelligent technology</li>&#13; <li>Truth Through Fiction: How the arts and media help us explore the challenges and opportunities of AI</li>&#13; <li>Metal people: How we perceive intelligent robots – and why</li>&#13; <li>Trust, Security and the Law: Assuring safety in the age of artificial intelligence</li>&#13; <li>Trust and Understanding: Uncertainty, complexity and the ‘black box’</li>&#13; </ul><p>Professor Huw Price, Academic Director of the Centre, and Bertrand Russell Professor of Philosophy at Cambridge, said: “During two packed days in Cambridge we’ll be bringing together some of the world’s most important voices in the study and development of the technologies on which all our futures will depend.</p>&#13; &#13; <p>“Intelligent machines offer huge benefits in many fields, but we will only realise these benefits if we know we can trust them – and maintain trust in each other and our institutions as AI transforms the world around us.”</p>&#13; &#13; <p>Other conference speakers include Berkeley AI pioneer Professor Stuart Russell, academic and broadcaster Dr Sarah Dillon, and Sir David Spiegelhalter, Cambridge’s Winton Professor of the Public Understanding of Risk. 
An AI-themed art exhibition is also being held to coincide with the Jesus College event.</p>&#13; &#13; <p>CFI brings together four of the world’s foremost universities (Cambridge, Berkeley, Imperial College and Oxford) to explore the implications of AI for human civilisation. Researchers will work with policy-makers and industry to investigate topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.</p>&#13; &#13; <p>Many researchers take seriously the possibility that intelligence equal to our own will be created in computers within this century. Freed of biological constraints, such as limited memory and slow biochemical processing speeds, machines may eventually become more broadly intelligent than we are – with profound implications for us all.</p>&#13; &#13; <p>Launching the £10m centre last year, Professor Hawking said: “Success in creating AI could be the biggest event in the history of civilisation but it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.</p>&#13; &#13; <p>“We cannot predict what might be achieved when our own minds are amplified by AI. The rise of powerful AI will either be the best or the worst thing to happen to humanity. We do not yet know which.”</p>&#13; &#13; <p>Professor Maggie Boden, External Advisor to the Centre, whose pioneering work on AI has been translated into 20 languages, said: “The practical solutions of AI can help us to tackle important social problems and advance the science of mind and life in fundamental ways. But it has limitations which could present grave dangers. CFI aims to guide the development of AI in human-friendly ways.”</p>&#13; &#13; <p>Dr Cave added: “We've chosen the topic of myths and trust for our first annual conference because they cut across so many of the challenges and opportunities raised by AI. 
As well as world-leading experts, we hope to bring together a wide range of perspectives to discuss these topics, including from industry, policy and the arts. The challenge of transitioning to a world shared with intelligent machines is one that we all face together.”</p>&#13; &#13; <p>The first day of the conference is in partnership with the Royal Society, while the second is in partnership with Jesus College's Intellectual Forum. The conference is being generously sponsored by Accenture and PwC.</p>&#13; &#13; <p>Further details and ticketing information can be found <a href="https://www.lcfi.ac.uk/events/Conference2017/">here</a>.</p>&#13; &#13; <p> </p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Some of the world’s leading thinkers and practitioners in the field of Artificial Intelligence (AI) will gather in Cambridge this week to look at everything from the influence of science fiction on our dreams of the future, to ‘trust in the age of intelligent machines’.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Rarely has a technology arrived with such a rich history of myth, storytelling and hype as AI.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Stephen Cave</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a 
href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 10 Jul 2017 10:22:27 +0000 sjr81 190202 at “The best or worst thing to happen to humanity” - Stephen Hawking launches Centre for the Future of Intelligence /research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/hawking-launch.jpg?itok=UkDbs04v" alt="Stephen Hawking speaking at tonight&#039;s launch" title="Stephen Hawking speaking at tonight&amp;#039;s launch, Credit: Nick Saffell" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Speaking at the launch of the £10million <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (CFI) in Cambridge, Professor Hawking said the rise of AI would transform every aspect of our lives and was a global event on a par with the industrial revolution.</p>&#13; &#13; <p>CFI brings together four of the world’s leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. 
Together, an interdisciplinary community of researchers will work closely with policy-makers and industry, investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.</p>&#13; &#13; <p>“Success in creating AI could be the biggest event in the history of our civilisation,” said Professor Hawking. “But it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.</p>&#13; &#13; <p>“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation.”</p>&#13; &#13; <p>The Centre for the Future of Intelligence will initially focus on seven distinct projects in the first three-year phase of its work, reaching out to brilliant researchers and connecting them and their ideas to the challenges of making the best of AI. Among the initial research topics are: ‘Science, value and the future of intelligence’; ‘Policy and responsible innovation’; ‘Autonomous weapons – prospects for regulation’ and ‘Trust and transparency’.</p>&#13; &#13; <p>The Academic Director of the Centre, and Bertrand Russell Professor of Philosophy at Cambridge, Huw Price, said: “The creation of machine intelligence is likely to be a once-in-a-planet’s-lifetime event. It is a future we humans face together. Our aim is to build a broad community with the expertise and sense of common purpose to make this future the best it can be.”</p>&#13; &#13; <p>Many researchers now take seriously the possibility that intelligence equal to our own will be created in computers within this century. 
Freed of biological constraints, such as limited memory and slow biochemical processing speeds, machines may eventually become more intelligent than we are – with profound implications for us all.</p>&#13; &#13; <p>AI pioneer Professor Maggie Boden (University of Sussex) sits on the Centre’s advisory board and spoke at this evening’s launch. She said: “AI is hugely exciting. Its practical applications can help us to tackle important social problems, as well as easing many tasks in everyday life. And it has advanced the sciences of mind and life in fundamental ways. But it has limitations, which present grave dangers given uncritical use. CFI aims to pre-empt these dangers, by guiding AI development in human-friendly ways.”</p>&#13; &#13; <p>“Recent landmarks such as self-driving cars, or a computer winning at the game of Go, are signs of what’s to come,” added Professor Hawking. “The rise of powerful AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which. The research done by this centre is crucial to the future of our civilisation and of our species.”</p>&#13; &#13; <p><strong>Transcript of Professor Hawking’s speech at the launch of the Leverhulme Centre for the Future of Intelligence, October 19, 2016</strong></p>&#13; &#13; <p>“It is a great pleasure to be here today to open this new Centre.  We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity.  So it is a welcome change that people are studying instead the future of intelligence.</p>&#13; &#13; <p>Intelligence is central to what it means to be human.  Everything that our civilisation has achieved, is a product of human intelligence, from learning to master fire, to learning to grow food, to understanding the cosmos. </p>&#13; &#13; <p>I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer.  
It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.</p>&#13; &#13; <p>Artificial intelligence research is now progressing rapidly.  Recent landmarks such as self-driving cars, or a computer winning at the game of Go, are signs of what is to come.  Enormous levels of investment are pouring into this technology.  The achievements we have seen so far will surely pale against what the coming decades will bring.</p>&#13; &#13; <p>The potential benefits of creating intelligence are huge.  We cannot predict what we might achieve, when our own minds are amplified by AI.  Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one — industrialisation.  And surely we will aim to finally eradicate disease and poverty.  Every aspect of our lives will be transformed.  In short, success in creating AI, could be the biggest event in the history of our civilisation.</p>&#13; &#13; <p>But it could also be the last, unless we learn how to avoid the risks.  Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.   It will bring great disruption to our economy.  And in the future, AI could develop a will of its own — a will that is in conflict with ours.</p>&#13; &#13; <p>In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.  We do not yet know which.  That is why in 2014, I and a few others called for more research to be done in this area.  I am very glad that someone was listening to me! </p>&#13; &#13; <p>The research done by this centre is crucial to the future of our civilisation and of our species.  
I wish you the best of luck!”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a speech delivered by Professor Stephen Hawking this evening.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Stephen Hawking</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-115492" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/115492">The best or worst thing to happen to humanity</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/_5XvDCjrdXs?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Nick Saffell</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Stephen Hawking speaking at tonight&#039;s launch</div></div></div><div class="field 
field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-noncommercial-sharealike">Attribution-Noncommercial-ShareAlike</a></div></div></div> Wed, 19 Oct 2016 14:58:23 +0000 sjr81 180092 at The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity /research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/9068870200cfc82be178o.jpg?itok=DrGeEJbQ" alt="Supercomputer" title="Supercomputer, Credit: Sam Churchill" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Human-level intelligence is familiar in biological “hardware” – it happens inside our skulls. 
Technology and science are now converging on a possible future where similar intelligence can be created in computers.</p>&#13; &#13; <p>While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”</p>&#13; &#13; <p>Now, thanks to an unprecedented £10 million grant from the <a href="https://www.leverhulme.ac.uk/">Leverhulme Trust</a>, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.</p>&#13; &#13; <p>The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.</p>&#13; &#13; <p>Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.</p>&#13; &#13; <p>The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. 
The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (<a href="https://www.cser.ac.uk/">CSER</a>), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.</p>&#13; &#13; <p>Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”</p>&#13; &#13; <p>The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (<a href="https://www.crassh.cam.ac.uk/">CRASSH</a>). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”</p>&#13; &#13; <p>Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said:</p>&#13; &#13; <p>“The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. 
The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”</p>&#13; &#13; <p>The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>The University of Cambridge is launching a new research centre, thanks to a £10 million grant from the Leverhulme Trust, to explore the opportunities and challenges to humanity from the development of artificial intelligence. 
</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/samchurchill/9068870200" target="_blank">Sam Churchill</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Supercomputer</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Thu, 03 Dec 2015 09:27:58 +0000 fpjl2 163582 at Humanity's last invention and our uncertain future /research/news/humanitys-last-invention-and-our-uncertain-future <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/lightcycles.jpg?itok=YI1H4_Xr" alt="Light cycles" title="Light cycles, Credit: Jason A. Samfield from Flickr" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called <em>Speculations concerning the first ultra-intelligent machine</em>. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.</p>&#13; &#13; <p>This machine, he continued, would be the “last invention” that mankind will ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. 
For Good, who went on to advise Stanley Kubrick on <em>2001: a Space Odyssey</em>, the “survival of man” depended on the construction of this ultra-intelligent machine.</p>&#13; &#13; <p>Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm shifting to the most personally intimate. Technology advances for the most part unchecked and unabated.</p>&#13; &#13; <p>While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will result in the survival of man, as Good contended, or if in fact this is the very thing that will end us.</p>&#13; &#13; <p>Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.</p>&#13; &#13; <p>“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.</p>&#13; &#13; <p>“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. 
I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”</p>&#13; &#13; <p>Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:</p>&#13; &#13; <p>“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”</p>&#13; &#13; <p>We <em>Homo sapiens</em> have, for Tallinn, become optimised – in the sense that we now control the future, having grabbed the reins from 4 billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.</p>&#13; &#13; <p>We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the number of transistors on computing chips doubles roughly every two years in accordance with Moore’s law, as set out by Intel founder Gordon Moore in the same year that Good predicted the ultra-intelligent machine. <img alt="" src="/files/inner-images/ai-ex_snr-002.jpg" style="width: 250px; height: 250px; float: right;" /></p>&#13; &#13; <p>We know that ‘dumb matter’ can think, say Price and Tallinn – biology has already solved that problem, in a container the size of our skulls. 
That’s a fixed cap to the level of complexity required, and it seems irresponsible, they argue, to assume that the rising curve of computing complexity will not reach and even exceed that bar in the future.</p>&#13; &#13; <p>The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, Good’s “intelligence explosion”, might be the point we are left behind – permanently – to a future-defining AGI.</p>&#13; &#13; <p>“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”</p>&#13; &#13; <p>Price and Tallinn stress the uncertainties in these projections, but point out that this simply underlines the need to know more about AGI and other kinds of technological risk.</p>&#13; &#13; <p>In Cambridge, Price introduced Tallinn to Lord Martin Rees, former Master of Trinity College and President of the Royal Society, whose own work on catastrophic risk includes his books <em>Our Final Century</em> (2003) and <em>From Here to Infinity: Scientific Horizons</em> (2011). The three formed an alliance, aiming to establish CSER.</p>&#13; &#13; <p>With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. 
“We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”</p>&#13; &#13; <p>Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”</p>&#13; &#13; <p>“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he adds. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.</p>&#13; &#13; <p>“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”</p>&#13; &#13; <p><em>For more information on the Centre for Study of Existential Risk, visit <a href="https://www.cser.ac.uk/">http://cser.org/</a>   </em></p>&#13; &#13; <p><em><em><em>For more information on this story, please contact Fred Lewsey (<a href="mailto:fred.lewsey@admin.cam.ac.uk">fred.lewsey@admin.cam.ac.uk</a>) at the University of Cambridge Office of External Affairs and Communications.</em></em></em></p>&#13; &#13; <p><em><em><em>L-R: Huw Price, Jaan Tallinn and Martin Rees. 
Photo copyright/credit: Dwayne Senior</em></em></em></p>&#13; </div></div></div><div class="field field-name-field-content-summary 
field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose “extinction-level” risks to our species, from biotechnology to artificial intelligence.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Jason A. Samfield from Flickr</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Light cycles</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by-nc-sa/3.0/"><img alt="" src="/sites/www.cam.ac.uk/files/80x15.png" style="width: 80px; height: 15px;" /></a></p>&#13; &#13; <p>This work is licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">Creative Commons Licence</a>. 
If you use this content on your site please link back to this page.</p>&#13; </div></div></div><div class="field 
field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Sun, 25 Nov 2012 04:00:45 +0000 fpjl2 26963 at Three Cambridge academics elected as Fellows of The British Academy /news/three-cambridge-academics-elected-as-fellows-of-the-british-academy <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/british-academy.jpg?itok=CO5vzTj8" alt="British Academy" title="British Academy, Credit: Claudine Hartzel" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The British Academy is the UK's national body for the promotion of the humanities and social sciences.</p>&#13; <p>It is the counterpart to the Royal Society, which exists to serve the natural sciences.</p>&#13; <p>The British Academy aims to inspire, recognise and support excellence and high achievement across the UK and internationally.</p>&#13; <p>Established by Royal Charter in 1902, it is an independent, self-governing body of more than 900 Fellows.</p>&#13; <p>The newly elected Fellows of The British Academy are as follows:</p>&#13; <p><strong>Professor Huw Price </strong>is the Bertrand Russell Professor of Philosophy and a Fellow of Trinity College. Prior to joining the University in 2011, Professor Price headed the Centre for Time in the School of Philosophical and Historical Inquiry at the University of Sydney, which he helped establish in 2002. 
He has also held the post of Professor of Logic and Metaphysics at the University of Edinburgh.</p>&#13; <p>Born in Oxford, he emigrated to Australia at the age of 13, returning to the UK to complete his PhD in Philosophy at Cambridge. He has written several books, including <em>Facts and the Function of Truth</em> and <em>Time’s Arrow and Archimedes’ Point,</em> and is internationally renowned for his contributions to the areas of time-asymmetry, the philosophy of physics and pragmatism.</p>&#13; <p><strong>Professor Simon Franklin</strong> is a Fellow of Clare College, and Professor of Slavonic Studies at the Faculty of Modern and Medieval Languages. He has written extensively on Russian history and culture of all periods, but his principal research interests are medieval.</p>&#13; <p>In 2008 he was awarded the Lomonosov Gold Medal by the Russian Academy of Sciences for outstanding achievements in research in Russian history and culture. Professor Franklin’s major long-term project is a cultural history of information technologies in Russia.</p>&#13; <p><strong>Professor Simon Schaffer</strong> is Professor of History and Philosophy of Science at the Department of History and Philosophy of Science, and has been a Fellow of Darwin College since 1985. Until recently he was editor of The British Journal for the History of Science.</p>&#13; <p>Professor Schaffer was jointly awarded the Erasmus Prize in 2005 for the book <em>Leviathan and the Air-Pump: Hobbes, Boyle and the Experimental Life</em>, which he co-authored with Steven Shapin. 
In 2004, he presented a series of documentaries for the BBC about light and the history of its study and knowledge.</p>&#13; <p>The British Academy’s President, Sir Adam Roberts, said of the election: “The new Fellows, who come from 23 institutions across the UK, have outstanding expertise across the board – from social policy and government, to sign language and music.</p>&#13; <p>“Our Fellows play a vital role in sustaining the Academy’s activities - from identifying excellence to be supported by research awards, to contributing to policy reports and speaking at the Academy’s public events.</p>&#13; <p>“Their presence in the Academy will help it to sustain its support for research across the humanities and social sciences, and to inspire public interest in these disciplines.”</p>&#13; <p>In addition, <strong>Lord Rees of Ludlow</strong>, Master of Trinity College, and <strong>Dame Fiona Reynolds</strong>, Master-elect of Emmanuel College, have been elected to the prestigious category of Honorary Fellowship.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Three Cambridge academics are among the thirty-eight scholars elected Fellows of The British Academy this year, in recognition of their research achievements.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Our Fellows play a vital role in sustaining the Academy’s activities - from identifying excellence to be supported by research awards, to contributing to policy reports and speaking at the Academy’s public events.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Sir Adam Roberts</div></div></div><div class="field field-name-field-image-credit 
field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="http://www.claudinehartzel.com" target="_blank">Claudine Hartzel</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">British Academy</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/3.0/"><img alt="" src="/sites/www.cam.ac.uk/files/80x15.png" style="width: 80px; height: 15px;" /></a></p>&#13; <p>This work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/3.0/">Creative Commons Licence</a>. If you use this content on your site please link back to this page.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 26 Jul 2012 14:01:14 +0000 fpjl2 25409 at