University of Cambridge - Beth Singler /taxonomy/people/beth-singler en Mind Over Chatter: What is the future of artificial intelligence? /research/about-research/podcasts/mind-over-chatter-what-is-the-future-of-artificial-intelligence <div class="field field-name-field-content-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-885x432/public/research/logo-for-uni-website_0.jpeg?itok=O1xsQXd6" width="885" height="432" alt="Mind Over Chatter podcast logo" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><h2>Season 2, episode 5</h2> <p>Artificial Intelligence can be found in every aspect of our lives. From A-level grade-predicting algorithms to Netflix recommendations, AI is set to change the choices we make and how our personal information will be used. </p> <p><a class="cam-primary-cta" href="https://mind-over-chatter.captivate.fm/listen">Subscribe to Mind Over Chatter</a></p> <p> </p> <div style="width: 100%; height: 170px; margin-bottom: 20px; border-radius: 10px; overflow:hidden;"><iframe frameborder="no" scrolling="no" seamless="" src="https://player.captivate.fm/episode/dc2070a7-acce-4022-a211-3e2626bb0bae" style="width: 100%; height: 170px;" title="What is the future of artificial intelligence?"></iframe></div> <p>In this episode of Mind Over Chatter, we explore the future of AI – its potential benefits and harms. We cover topics ranging from how to make AI ‘ethical’ to how media representations of AI can colour the public’s perception of the real issues, and the importance of an international AI regulatory system. 
</p> <p>Dr Beth Singler, whose research explores the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics, told us about the different cultural consequences of AI, and how the way we think about the future of AI reflects more about society today than about the future itself. </p> <p>Dr John Zerilli, author of ‘A Citizen’s Guide to Artificial Intelligence’, shared his views on the consequences of AI for democratic decision-making. </p> <p>Finally, Richard Watson, Futurist-in-Residence at the Entrepreneurship Centre at the Judge Business School, urged us to approach the future of AI through ‘scenario planning’ rather than direct prediction. </p> <h2>Key points: </h2> <p>[10:09] - How we think about the future reflects what we think about the present</p> <p>[13:38] - Time for the first recap! </p> <p>[17:55] - The relationship between AI and religion, and the cultural impact of AI</p> <p>[20:35] - Being ‘blessed’ and ‘cursed’ by the algorithm</p> <p>[22:04] - Democracy and AI: how can we expect citizens to be informed enough to exercise their voting rights well?</p> <p>[32:27] - Time for recap number two! 
</p> <p>[45:01] - Loss of free agency… or did we never have any?</p> <p>[58:05] - Thinking about the benefits of AI can teach us what makes a good life</p> </div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Mind Over Chatter: The Cambridge University Podcast</div></div></div> Thu, 27 May 2021 12:49:38 +0000 ns480 224381 at Preparing for the future: artificial intelligence and us /research/discussion/preparing-for-the-future-artificial-intelligence-and-us <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/overview-articleyellow.jpg?itok=3Y2b5O1n" alt="" title="Credit: Jonathan Settle / University of Cambridge" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>AI systems are now used in everything from the trading of stocks to the setting of house prices; from detecting fraud to translating between languages; from creating our weekly shopping lists to predicting which movies we might enjoy.</p>&#13; &#13; <p>This is just the beginning. Soon, AI will be used to advance our understanding of human health through analysis of large datasets, help us discover new drugs and personalise treatments. Self-driving vehicles will transform transportation and allow new paradigms in urban planning. Machines will run our homes more efficiently, make businesses more productive and help predict risks to society.</p>&#13; &#13; <p>While some AI systems will outperform human intelligence to augment human decision making, others will carry out repetitive, manual and dangerous tasks to augment human labour. 
Many of the greatest challenges we face, from understanding and mitigating climate change to quickly identifying and containing disease outbreaks, will be aided by the tools of AI.</p>&#13; &#13; <p>What we’ve seen of AI so far is only the leading edge of the revolution to come.</p>&#13; &#13; <p>Yet the idea of creating machines that think and learn like humans has been around since the 1950s. Why is AI such a hot topic now? And what does Cambridge have to offer?</p>&#13; &#13; <p>Three major advances are enabling huge progress in AI research: the availability of masses of data generated by all of us all the time; the power and processing speeds of today’s supercomputers; and the advances that have been made in mathematics and computer science to create sophisticated algorithms that help machines learn.</p>&#13; &#13; <p>Unlike in the past, when computers were programmed for specific tasks and domains, modern machine learning systems know nothing about the topic in question; they only know about learning: they use huge amounts of data about the world in order to learn from it and to make predictions about future behaviour. They can make sense of complex datasets that are difficult to use and have missing data.</p>&#13; &#13; <p>That these advances will provide tremendous benefits is becoming clear. One strand of the UK government’s Industrial Strategy is to put the UK at the forefront of the AI and data revolution. 
In 2017, a report by PricewaterhouseCoopers described AI as “the biggest commercial opportunity in today’s fast-changing economy”, predicting a 10% increase in the UK’s GDP by 2030 as a result of the applications of AI.</p>&#13; &#13; <p>Cambridge University is helping to drive this revolution – and to prepare for it.</p>&#13; &#13; <p><a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a></p>&#13; &#13; <p>Our computer scientists are designing systems that are cybersecure, model human reasoning, interact in affective ways with us, uniquely identify us by our face and give insights into our biological makeup.</p>&#13; &#13; <p>Our engineers are building machines that make decisions under uncertain conditions, based on probabilistic estimates both of what they perceive and of the best course of action. And they’re building robots that can carry out a series of actions in the physical world – whether it’s for self-driving cars or for picking lettuces.</p>&#13; &#13; <p>Our researchers in a multitude of different disciplines are creating innovative applications of AI in areas as diverse as discovering new drugs, overcoming phobias, helping to make police custody decisions and forecasting extreme weather events.</p>&#13; &#13; <p>Our philosophers and humanists are asking fundamental questions about the ethics, trust and humanity of AI system design, and the effect that the language of discussion has on the public perception of AI. Together with the work of our engineers and computer scientists, these efforts aim to create AI systems that are trustworthy and transparent in their workings – that do what we want them to do.</p>&#13; &#13; <p>All of this is happening in a university research environment and wider ecosystem of start-ups and large companies that facilitates innovative breakthroughs in AI. 
The aim of this truly interdisciplinary approach to research at Cambridge is to invent transformative AI technology that will benefit society at large.</p>&#13; &#13; <p>However, transformative advances may carry negative consequences if we do not plan for them carefully on a societal level.</p>&#13; &#13; <p>The fundamental advances that underpin self-driving cars may allow dangerous new weapons on the battlefield. Technologies that automate work may result in livelihoods being eliminated. Algorithms trained on historical data may perpetuate, or even exacerbate, biases and inequalities such as sex- or race-based discrimination. Without careful planning, systems for which large amounts of personal data are essential, such as in healthcare, may undermine privacy.</p>&#13; &#13; <p>Engaging with these challenges requires drawing on expertise not just from the sciences, but also from the arts, humanities and social sciences, and requires delving deeply into questions of policy and governance for AI. Cambridge has taken a leading position here too, with the recent establishment of the <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> and the <a href="https://www.cser.ac.uk/">Centre for the Study of Existential Risk</a>, as well as being one of the founding partners of <a href="https://www.turing.ac.uk/">The Alan Turing Institute</a> based in London.</p>&#13; &#13; <p>In the longer term, it is not outside the bounds of possibility that we might develop systems able to match or surpass human intelligence in the broader sense. 
There are some who think that this would change humanity’s place in the world irrevocably, while others look forward to the world a superintelligence might be able to co-create with us.</p>&#13; &#13; <p>As the University where the great mathematician Alan Turing was an undergraduate and fellow, it seems entirely fitting that Cambridge’s scholars are exploring questions of such significance to prepare us for the revolution to come. Turing once said: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”</p>&#13; &#13; <p><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/MK31E4mSbXw" width="560"></iframe></p>&#13; &#13; <p><em>Inset image: read more about our AI research in the University’s research magazine; <a href="/system/files/issue_35_research_horizons_new.pdf">download</a> a pdf; <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">view</a> on Issuu.</em></p>&#13; &#13; <p><em>Dr Mateja Jamnik (Department of Computer Science and Technology), Dr Seán Ó hÉigeartaigh (Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, CFI), Dr Beth Singler (Faraday Institute for Science and Religion and CFI) and Dr Adrian Weller (Department of Engineering, CFI and The Alan Turing Institute).</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Today we begin a month-long focus on research related to artificial intelligence. 
Here, four researchers reflect on the power of a technology to impact nearly every aspect of modern life – and why we need to be ready.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">What we’ve seen of AI so far is only the leading edge of the revolution to come.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Mateja Jamnik, Seán Ó hÉigeartaigh, Beth Singler and Adrian Weller</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Jonathan Settle / University of Cambridge</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 02 Feb 2018 09:00:13 +0000 lw355 194762 at Can robots feel pain? 
/news/can-robots-feel-pain <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/robots.jpg?itok=vyJHVQYa" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Could – and should – robots feel pain? Dr Beth Singler will be addressing this question and many others raised by developments in artificial intelligence at this year’s Hay Festival.</p>&#13; &#13; <p>She is speaking as part of the <a href="/public-engagement/the-cambridge-series-at-the-hay-festival-2017">Cambridge Series</a> and has been selected as one of the Hay 30 thinkers to watch, in celebration of the prestigious literary festival’s 30th anniversary.</p>&#13; &#13; <p>Singler’s talk is based on a film she made for the Cambridge Shorts scheme, funded by the Wellcome Trust and the University of Cambridge. The film was screened at the Cambridge Festival of Ideas and Beth, a Research Associate on the Human Identity in an age of Nearly-Human Machines project at the Faraday Institute for Science and Religion, has been showing it and speaking about it in public talks and at schools.</p>&#13; &#13; <p>The feedback has been so good that three further films are being made with funding from the Faraday Institute.</p>&#13; &#13; <p>They will focus on companion robots, such as those which provide elderly care; value alignment, which will cover the concept of creating “good robots”; and issues of consciousness and personhood.</p>&#13; &#13; <p>“We are interested in the bigger questions,” says Singler. 
“People think, for instance, that pain is a simple issue, but it is complex and opens up all sorts of questions about consciousness.”</p>&#13; &#13; <p>The films will follow a similar format to the original Pain in the Machine film, incorporating narratives, clips from science fiction and interviews with experts.</p>&#13; &#13; <p>Cambridge Shorts supports early career researchers to make professional quality short films with local artists and filmmakers. Beth was one of a number of researchers who submitted proposals for short films.</p>&#13; &#13; <p>The aim was to create a multidisciplinary collaboration between a researcher in the biomedical sciences and a researcher from the arts, humanities and social sciences disciplines.</p>&#13; &#13; <p>As part of the process, a kind of speed-dating event was held, and there Singler met Ewan St John Smith, who is based in the Department of Pharmacology, where he is group leader of the sensory neurophysiology and pain group. That event was followed by another where the two researchers met Colin Ramsay and James Uren of Little Dragon Films.</p>&#13; &#13; <p>The film includes science fiction clips, and Singler says such clips are often the first to introduce the possibilities of technology to a wider audience and can influence and spur on technological developments. However, she says sci-fi tends to depend on the idea of conflict to drive the narrative and can also set the way we view technology. “It depends on binaries of utopia or dystopia. It doesn’t take the complexities in the middle into account,” she says.</p>&#13; &#13; <p>Singler’s research involves looking at human identity in an age of nearly human machines. She studies how technological advances will affect society in the near future and how they will alter how we view ourselves as human beings. 
“If we create sentient beings, will that change how we feel about ourselves?” she asks.</p>&#13; &#13; <p>Because she is based at the Faraday Institute for Science and Religion, there is also a religious aspect to the research. Singler says a lot of talk about technology “sounds a lot like end of days eschatological narratives from the Judaeo-Christian traditions”.</p>&#13; &#13; <p>She adds that discussions around robots can tend towards anthropomorphism on the one hand (robots taking on human characteristics) and robomorphism on the other (responding to humans as if they were machines). “The lines are blurring,” she says.</p>&#13; &#13; <p>She refers to TV programmes such as Humans, but also to DeepMind’s AI programme AlphaGo, saying that technology doesn’t have to look like a human for people to develop a relationship with it. People even created fan art around AlphaGo.</p>&#13; &#13; <p>On the robomorphism issue, Singler cites projects such as Neuralink, Elon Musk’s company, which aims to link the human brain to a machine interface. Musk’s aim is to use Artificial Intelligence and machine learning to create computers so sophisticated that humans will need to implant “neural laces” in their brains to keep up.</p>&#13; &#13; <p>Singler says the questions opened up by AI are profound. “AI will not just replace the simple physical jobs that we may not want to do. It may also replace the mental jobs that we still want to do. We need to prepare and ask questions about what humans are actually for. For centuries we have been defined by what we do. Maybe we need to think about a post-work future and to redefine how we think about ourselves.”</p>&#13; &#13; <p>She adds that we need to create frameworks in which to discuss the implications of AI and cites the EU proposal for rights for electronic persons as one example of a legal framework.</p>&#13; &#13; <p>She says: “People say that we should stop making robots, but I think that is unlikely. 
I don’t think there are any definitive answers, but people need to have the spaces where they can learn about what is happening and we need to enable conversations.”<br />&#13;  </p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Dr Beth Singler will be speaking about her work on the social and ethical issues raised by robots as part of this year's Cambridge Series at the Hay Festival.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"> “If we create sentient beings, will that change how we feel about ourselves?”</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Beth Singler</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-116312" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/116312">Pain in the machine</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/ODw5Eu6VbGc?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a 
href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/public-engagement/the-cambridge-series-at-the-hay-festival-2017">Hay Festival Cambridge Series </a></div></div></div> Fri, 12 May 2017 09:58:44 +0000 mjg209 188362 at Pain in the machine: a Cambridge Shorts film /research/features/pain-in-the-machine-a-cambridge-shorts-film <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/paininthemachine.gif?itok=3t3YH8Tl" alt="Still from Pain in the Machine" title="Still from Pain in the Machine, Credit: Researchers: Beth Singler and Ewan St John Smith" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Pain is vital: it is the mechanism that protects us from harming ourselves. If you put your finger into a flame, a signal travels up your nervous system to your brain which tells you to snatch your finger away. This response isn’t as simple as it sounds: the nervous system is complex and involves many areas of the brain.</p>&#13; &#13; <p>We’re developing increasingly sophisticated machines to work for us. In the future, robots might live alongside us as companions or carers. 
If pain is an important part of being human, and often keeps us safe, could we create a robot that feels pain?  These ideas are explored by Cambridge researchers Dr Ewan St John Smith and Dr Beth Singler in their 12-minute film <a href="https://www.youtube.com/watch?v=ODw5Eu6VbGc"><em>Pain in the Machine</em></a>.</p>&#13; &#13; <p>Already we have technologies that respond to distances and touch. A car, for example, can detect and avoid an object; lift doors won’t shut on your fingers. But although this could be seen as a step towards a mechanical nervous system, it isn’t the same as pain. Pain involves emotion. Could we make machines which feel and show emotion – and would we want to?</p>&#13; &#13; <p>Unpleasant though it is, pain has sometimes been described as the pinnacle of human consciousness. The human capacity for empathy is so great that when a robotics company showed film clips of robots being pushed over and kicked, viewers responded as if the robots were being bullied and abused. Pain is both felt and perceived.</p>&#13; &#13; <p>Movies have imagined robots with their own personalities – sometimes cute but often evil. Perhaps the future will bring robots capable of a full range of emotions. These machines might share not only our capacity for pain but also for joy and excitement.</p>&#13; &#13; <p>But what about the ethical implications? A new generation of emotionally literate robots will, surely, have rights of their own.</p>&#13; &#13; <p><em>Pain in the Machine</em> is one of four films made by Cambridge researchers for the 2016 Cambridge Shorts series, funded by Wellcome Trust ISSF. The scheme supports early career researchers to make professional quality short films with local artists and filmmakers. 
Researchers Beth Singler (Faculty of Divinity) and Ewan St John Smith (Department of Pharmacology) collaborated with Colin Ramsay and James Uren of Little Dragon Films.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>The pain we experience as humans has physical and emotional components. Could we develop a machine that feels pain in a similar way – and would we want to? The first of four Cambridge Shorts looks at the possibilities and challenges.</p>&#13; </p></div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-116312--2" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/116312">Pain in the machine</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-2 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/ODw5Eu6VbGc?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Researchers: Beth Singler and Ewan St John Smith</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Still from Pain in the Machine</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; 
The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 02 Nov 2016 08:00:00 +0000 amb206 181002 at