University of Cambridge - trust /taxonomy/subjects/trust en New open-source platform allows users to evaluate performance of AI-powered chatbots /research/news/new-open-source-platform-allows-users-to-evaluate-performance-of-ai-powered-chatbots <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-1485822619-dp_0.jpg?itok=YW1eav0N" alt="Chatbot" title="Chatbot, Credit: da-kuk via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A team of computer scientists, engineers, mathematicians and cognitive scientists, led by the University of Cambridge, developed an open-source evaluation platform called CheckMate, which allows human users to interact with and evaluate the performance of large language models (LLMs).</p> <p>The researchers tested CheckMate in an experiment in which human participants used three LLMs – InstructGPT, ChatGPT and GPT-4 – as assistants for solving undergraduate-level mathematics problems.</p> <p>The team studied how well LLMs can assist participants in solving problems. Despite a generally positive correlation between a chatbot’s correctness and its perceived helpfulness, the researchers also found instances where the LLMs were incorrect but still useful for the participants. However, certain incorrect LLM outputs were thought to be correct by participants. This was most notable in LLMs optimised for chat.</p> <p>The researchers suggest that models which communicate uncertainty, respond well to user corrections, and can provide a concise rationale for their recommendations make better assistants. 
Human users of LLMs should verify their outputs carefully, given their current shortcomings.</p> <p>The <a href="https://www.pnas.org/doi/10.1073/pnas.2318124121">results</a>, reported in the <em>Proceedings of the National Academy of Sciences (PNAS)</em>, could be useful both in informing AI literacy training and in helping developers improve LLMs for a wider range of uses.</p> <p>While LLMs are becoming increasingly powerful, they can also make mistakes and provide incorrect information, which could have negative consequences as these systems become more integrated into our everyday lives.</p> <p>“LLMs have become wildly popular, and evaluating their performance in a quantitative way is important, but we also need to evaluate how well these systems work with and can support people,” said co-first author Albert Jiang, from Cambridge’s Department of Computer Science and Technology. “We don’t yet have comprehensive ways of evaluating an LLM’s performance when interacting with humans.”</p> <p>The standard way of evaluating LLMs relies on static pairs of inputs and outputs, which disregards the interactive nature of chatbots and how interaction changes their usefulness in different scenarios. The researchers developed CheckMate to help answer these questions; it is designed for, but not limited to, applications in mathematics.</p> <p>“When talking to mathematicians about LLMs, many of them fall into one of two main camps: either they think that LLMs can produce complex mathematical proofs on their own, or that LLMs are incapable of simple arithmetic,” said co-first author Katie Collins from the Department of Engineering. 
“Of course, the truth is probably somewhere in between, but we wanted to find a way of evaluating which tasks LLMs are suitable for and which they aren’t.”</p> <p>The researchers recruited 25 mathematicians, from undergraduate students to senior professors, to interact with three different LLMs (InstructGPT, ChatGPT and GPT-4) and evaluate their performance using CheckMate. Participants worked through undergraduate-level mathematical theorems with the assistance of an LLM and were asked to rate each individual LLM response for correctness and helpfulness. Participants did not know which LLM they were interacting with.</p> <p>The researchers recorded the sorts of questions asked by participants, how participants reacted when they were presented with a fully or partially incorrect answer, whether and how they attempted to correct the LLM, and whether they asked for clarification. Participants had varying levels of experience with writing effective prompts for LLMs, and this often affected the quality of the responses that the LLMs provided.</p> <p>An example of an effective prompt is “what is the definition of X” (X being a concept in the problem), as chatbots can be very good at retrieving concepts they know of and explaining them to the user.</p> <p>“One of the things we found is the surprising fallibility of these models,” said Collins. “Sometimes, these LLMs will be really good at higher-level mathematics, and then they’ll fail at something far simpler. It shows that it’s vital to think carefully about how to use LLMs effectively and appropriately.”</p> <p>However, like the LLMs, the human participants also made mistakes. The researchers asked participants to rate how confident they were in their own ability to solve the problem they were using the LLM for. 
In cases where participants were less confident in their own abilities, they were more likely to rate incorrect generations by the LLM as correct.</p> <p>“This kind of gets to a big challenge of evaluating LLMs, because they’re getting so good at generating nice, seemingly correct natural language that it’s easy to be fooled by their responses,” said Jiang. “It also shows that while human evaluation is useful and important, it’s nuanced, and sometimes it’s wrong. Anyone using an LLM, for any application, should always pay attention to the output and verify it themselves.”</p> <p>Based on the results from CheckMate, the researchers say that newer generations of LLMs are increasingly able to collaborate helpfully and correctly with human users on undergraduate-level maths problems, as long as the user can assess the correctness of LLM-generated responses. Even if the answers may be memorised and can be found somewhere on the internet, LLMs have the advantage over traditional search engines of being flexible in their inputs and outputs (though they should not replace search engines in their current form).</p> <p>While CheckMate was tested on mathematical problems, the researchers say their platform could be adapted to a wide range of fields. In the future, this type of feedback could be incorporated into the LLMs themselves, although none of the CheckMate feedback from the current study has been fed back into the models.</p> <p>“These kinds of tools can help the research community to have a better understanding of the strengths and weaknesses of these models,” said Collins. 
“We wouldn’t use them as tools to solve complex mathematical problems on their own, but they can be useful assistants if the users know how to take advantage of them.”</p> <p>The research was supported in part by the Marshall Commission, the Cambridge Trust, Peterhouse, Cambridge, The Alan Turing Institute, the European Research Council, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).</p> <p><em><strong>Reference:</strong><br /> Katherine M Collins, Albert Q Jiang, et al. ‘<a href="https://www.pnas.org/doi/10.1073/pnas.2318124121">Evaluating Language Models for Mathematics through Interactions</a>.’ Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2318124121</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed a platform for the interactive evaluation of AI-powered chatbots such as ChatGPT.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Anyone using an LLM, for any application, should always pay attention to the output and verify it themselves</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Albert Jiang</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">da-kuk via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Chatbot</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long 
field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 04 Jun 2024 10:34:36 +0000 sc604 246271 at Strategic partner: Aviva /stories/aviva <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A new partnership between Aviva and Cambridge is asking what advances in technology and data science mean for the future of insurance.</p> </p></div></div></div> Wed, 29 Apr 2020 11:22:05 +0000 skbf2 214132 at How different countries are reacting to the COVID-19 risk and their governments’ responses /stories/wintoncovid1 <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers at the Winton Centre for Risk and Evidence Communication 
spent the weekend surveying people's attitudes towards the risk of coronavirus, and their governments’ reactions.</p> </p></div></div></div> Tue, 24 Mar 2020 12:52:31 +0000 fpjl2 212732 at In tech we trust? /research/features/in-tech-we-trust <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/david-werbrouck-304966-unsplash_0.jpg?itok=7L-Q6nEB" alt="" title="Credit: David Werbrouck" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new <a href="https://www.trusttech.cam.ac.uk/">Strategic Research Initiative on Trustworthy Technologies</a>, which brings together science, technology and humanities researchers from across the University.</p>&#13; &#13; <p>In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”</p>&#13; &#13; <p>Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. 
“With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.</p>&#13; &#13; <p>Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.</p>&#13; &#13; <p>As we work, shop and travel, computers and mobile phones already collect, transmit and process a great deal of data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.</p>&#13; &#13; <p>It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”</p>&#13; &#13; <p>What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.</p>&#13; &#13; <p>“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. 
Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”</p>&#13; &#13; <p>But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.</p>&#13; &#13; <p><a href="/system/files/issue_35_research_horizons_new.pdf"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a>Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.</p>&#13; &#13; <p>How much we trust the ‘black box’ of machine learning systems, both as individuals and as a society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”</p>&#13; &#13; <p>But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. 
And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”</p>&#13; &#13; <p>If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.</p>&#13; &#13; <p>When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.</p>&#13; &#13; <p>Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”</p>&#13; &#13; <p>Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.</p>&#13; &#13; <p>Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. 
Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.</p>&#13; &#13; <p>Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.” And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”</p>&#13; &#13; <div><em>Inset image: read more about our AI research in the University's research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></div>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. 
As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">With penalties including fines of up to €20 million, people are realising that they need to take data protection much more seriously</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Jat Singh</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://unsplash.com/photos/grayscale-photo-of-person-running-in-panel-paintings-5GwLlb-_UYk" target="_blank">David Werbrouck</a></div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Want to hear more? 
</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>Join us at the Cambridge Science Festival to hear Adrian Weller discuss how we can ensure AI systems are transparent, reliable and trustworthy.</p>&#13; &#13; <p>Thursday 15 March 2018, 7:30pm - 8:30pm</p>&#13; &#13; <p>Mill Lane Lecture Rooms, 8 Mill Lane, Cambridge, UK, CB2 1RW</p>&#13; &#13; <p><a href="https://www.festival.cam.ac.uk/events/trust-and-transparency-ai-systems">BOOK HERE</a></p>&#13; </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 23 Feb 2018 09:30:00 +0000 lw355 195572 at Science fiction vs science fact: World’s leading AI experts come to Cambridge /research/news/science-fiction-vs-science-fact-worlds-leading-ai-experts-come-to-cambridge <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aibrain.jpg?itok=RYs7tHok" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The two-day conference (July 13-14) at Jesus College is the first major event held by the Leverhulme Centre for the Future of Intelligence (CFI) since its globally-publicised <a href="/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of">launch by Stephen Hawking</a> and other AI luminaries in October 2016.</p>&#13; &#13; <p>Bringing together policy makers and philosophers, as well as leading figures from science and technology, speakers include Astronomer Royal Martin Rees, Matt Hancock (Minister for Digital and Culture), Baroness Onora O'Neill and Francesca Rossi (IBM).</p>&#13; &#13; <p>Dr Stephen Cave, Executive Director of CFI, said: “Rarely has a technology arrived with such a rich history of myth, storytelling and hype as AI. 
The first day of our conference will ask how films, literature and the arts generally have shaped our expectations, fears and even the technology itself.</p>&#13; &#13; <p>“Meanwhile, the second day will ask how and when we can trust the intelligent machines on which we increasingly depend – and whether those machines are changing how we trust each other."</p>&#13; &#13; <p><a href="https://www.lcfi.ac.uk/media/uploads/files/CFI_2017_programme.pdf">Programme highlights</a> of the conference include:</p>&#13; &#13; <ul><li>Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology</li>&#13; <li>Truth Through Fiction: How the arts and media help us explore the challenges and opportunities of AI</li>&#13; <li>Metal People: How we perceive intelligent robots – and why</li>&#13; <li>Trust, Security and the Law: Assuring safety in the age of artificial intelligence</li>&#13; <li>Trust and Understanding: Uncertainty, complexity and the ‘black box’</li>&#13; </ul><p>Professor Huw Price, Academic Director of the Centre, and Bertrand Russell Professor of Philosophy at Cambridge, said: “During two packed days in Cambridge we’ll be bringing together some of the world’s most important voices in the study and development of the technologies on which all our futures will depend.</p>&#13; &#13; <p>“Intelligent machines offer huge benefits in many fields, but we will only realise these benefits if we know we can trust them – and maintain trust in each other and our institutions as AI transforms the world around us.”</p>&#13; &#13; <p>Other conference speakers include Berkeley AI pioneer Professor Stuart Russell, academic and broadcaster Dr Sarah Dillon, and Sir David Spiegelhalter, Cambridge’s Winton Professor of the Public Understanding of Risk. 
An AI-themed art exhibition is also being held to coincide with the Jesus College event.</p>&#13; &#13; <p>CFI brings together four of the world’s foremost universities (Cambridge, Berkeley, Imperial College and Oxford) to explore the implications of AI for human civilisation. Researchers will work with policy-makers and industry to investigate topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.</p>&#13; &#13; <p>Many researchers take seriously the possibility that intelligence equal to our own will be created in computers within this century. Freed of biological constraints, such as limited memory and slow biochemical processing speeds, machines may eventually become more broadly intelligent than we are – with profound implications for us all.</p>&#13; &#13; <p>Launching the £10m centre last year, Professor Hawking said: “Success in creating AI could be the biggest event in the history of civilisation but it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.</p>&#13; &#13; <p>“We cannot predict what might be achieved when our own minds are amplified by AI. The rise of powerful AI will either be the best or the worst thing to happen to humanity. We do not yet know which.”</p>&#13; &#13; <p>Professor Maggie Boden, External Advisor to the Centre, whose pioneering work on AI has been translated into 20 languages, said: “The practical solutions of AI can help us to tackle important social problems and advance the science of mind and life in fundamental ways. But it has limitations which could present grave dangers. CFI aims to guide the development of AI in human-friendly ways.”</p>&#13; &#13; <p>Dr Cave added: “We've chosen the topic of myths and trust for our first annual conference because they cut across so many of the challenges and opportunities raised by AI. 
As well as world-leading experts, we hope to bring together a wide range of perspectives to discuss these topics, including from industry, policy and the arts. The challenge of transitioning to a world shared with intelligent machines is one that we all face together.”</p>&#13; &#13; <p>The first day of the conference is in partnership with the Royal Society, while the second is in partnership with Jesus College's Intellectual Forum. The conference is being generously sponsored by Accenture and PwC.</p>&#13; &#13; <p>Further details and ticketing information can be found <a href="https://www.lcfi.ac.uk/events/Conference2017/">here</a>.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Some of the world’s leading thinkers and practitioners in the field of Artificial Intelligence (AI) will gather in Cambridge this week to look at everything from the influence of science fiction on our dreams of the future, to ‘trust in the age of intelligent machines’.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Rarely has a technology arrived with such a rich history of myth, storytelling and hype as AI.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Stephen Cave</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a 
href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 10 Jul 2017 10:22:27 +0000 sjr81 190202 at Artificial intelligence: computer says YES (but is it right?) /research/features/artificial-intelligence-computer-says-yes-but-is-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/1610202019-by-experienssthierry-ehrmann.jpg?itok=Qk9V5cgv" alt="2019 by ExperiensS" title="2019 by ExperiensS, Credit: Thierry Ehrmann" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.</p>&#13; &#13; <p>Of course, many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.</p>&#13; &#13; <p>Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? 
Or, in fact, in any machine that has been taught to carry out an activity that a human would do?</p>&#13; &#13; <p>We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.</p>&#13; &#13; <p>Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.</p>&#13; &#13; <p>Faced with a life or death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust is its assessment of the volatility of the stock markets?</p>&#13; &#13; <p>鈥淢achines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data,鈥 says Ghahramani. 鈥淏ut what is going on inside the 鈥榖lack box鈥? If the processes by which decisions were being made were more transparent, then trust would be less of an issue.鈥</p>&#13; &#13; <p>His team builds the algorithms that lie at the heart of these technologies (the 鈥渋nvisible bit鈥 as he refers to it). Trust and transparency are important themes in their work: 鈥淲e really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. 
Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty, and then as you have more and more data you have more and more certainty.</p>&#13; &#13; <p>“When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us.”</p>&#13; &#13; <p>One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.</p>&#13; &#13; <p>Two years ago, Ghahramani’s group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.</p>&#13; &#13; <p>“The difficulty with machine learning systems is that you don’t really know what’s going on inside – and the answers they provide are not contextualised, as a human’s would be. 
The Automatic Statistician explains what it’s doing, in a human-understandable form.”</p>&#13; &#13; <p>Transparency becomes especially relevant in applications like medical diagnosis, where understanding how a decision was reached is necessary to trust it.</p>&#13; &#13; <p>Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: “A particular issue with new artificial intelligence (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand.” His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.</p>&#13; &#13; <p>“We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode.” A driverless car, for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.</p>&#13; &#13; <p>Weller’s theme of trust and transparency forms just one of the projects at the newly launched £10 million <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (CFI). Ghahramani, who is Deputy Director of the Centre, explains: “It’s important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society.”</p>&#13; &#13; <p>CFI brings together four of the world’s leading universities (Cambridge, Oxford, Berkeley and Imperial College London) to explore the implications of AI for human civilisation. 
Together, an interdisciplinary community of researchers will work closely with policy-makers and industry, investigating topics such as the regulation of autonomous weaponry and the implications of AI for democracy.</p>&#13; &#13; <p>Ghahramani describes the excitement felt across the machine learning field: “It’s exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.</p>&#13; &#13; <p>“We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us.”</p>&#13; &#13; <p><em>Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a <a href="https://www.youtube.com/watch?v=_5XvDCjrdXs">speech</a> delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Computers that learn for themselves are with us now. 
As they become more common in ‘high-stakes’ applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">As we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Zoubin Ghahramani</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/home_of_chaos/4166229638/in/photolist-7ma1Vu-9jXRQ7-3FjPcz-bx8BcX-cs65bN-dPTAqE-48Dezu-nurxVW-mC75rT-dXxh8b-jR9gc-3KwLDC-5akwi9-75MGSi-fEbbTT-f1ab86-6avjFJ-p7gc1-ofut47-rpxmKL-jbSp7-bmUQLy-q131sg-2QnpAH-bxmfEd-PweVq-qbFyNT-4L32qY-pZVBB9-2uinMh-6L3BZn-re23rM-jfvWFG-dXrAKP-9jXM4U-9jXQoh-qa8G7T-rvMSwj-qdMd23-HXVdh-2Q1fQU-8f9zmW-iAqVac-oy72re-9mi7oc-cs5QkS-oMRA8h-C4Lzp4-paUvZM-6i89ys" target="_blank">Thierry Ehrmann</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">2019 by ExperiensS</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" 
rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Thu, 20 Oct 2016 14:17:17 +0000 lw355 180122 at ‘Intelligent Trust’, ethno-religious relations and the rise of the food bank /research/discussion/intelligent-trust-ethno-religious-relations-and-the-rise-of-the-food-bank <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/140210christmas-dinnercredit-infinite-jeffjpg.jpg?itok=16mktGJB" alt="Christmas dinner" title="Christmas dinner, Credit: Infinite Jeff" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In December 2011, when economic turmoil was sweeping through Europe, the Woolf Institute and the Cardinal Bea Centre for Judaic Studies of the Pontifical Gregorian University in Rome organised a meeting between the former Chief Rabbi, Lord Sacks, and Emeritus Pope Benedict XVI.</p>&#13; <p>Following the Papal 
Audience, Lord Sacks delivered a lecture and stated that, “when Europe recovers its soul, it will recover its wealth-creating energies. But first it must remember: humanity was not created to serve markets. Markets were created to serve humankind.” He identified the breakdown of trust as a cause of the economic crisis and pointed out that the key words in the financial markets are spiritual: “credit” (from “credo”) and “confidence” (from “confidere”).</p>&#13; <p>In the months that followed the papal audience, Woolf Institute staff, led by Drs Shana Cohen and Ed Kessler, began to prepare a European-wide research project to address public and academic concerns related to trustworthiness; in particular, the aim was to explore the practical importance of trust and its placement within social relations, especially across ethno-religious differences. The title ‘Intelligent Trust’ was adopted from a concept put forward by the philosopher Baroness Onora O’Neill and her argument that “trustworthiness rather than trust should be our first concern.”</p>&#13; <p>The economic crisis in Europe since 2007 has provoked substantial discussion within the public sphere regarding the decline of trust in the State and major private institutions like banks. Institutions are now charged with ‘restoring confidence’. For instance, banks should refrain from aggressive sales tactics to push high-risk products, which prioritise self-interest over the benefit of consumers. The implication here is that, to become trustworthy again, commercial institutions should prioritise the interests of those who rely upon them over (or even to the exclusion of) profits.</p>&#13; <p>In contrast to public concern for the institutional practice of trustworthiness, academic research and philosophical debate have focused on more abstract, or non-contextual, questions of how individuals place trust (or mistrust) within interpersonal relations. Here, the individual trusting decides whether the trustee (i.e. 
the person trusted) will perform to expectations in the particular area in question (financial transaction, taking care of the children, and so on). The individual placing trust takes a risk and elects to become vulnerable to the trustee’s consequent actions.</p>&#13; <p>In the project, we asked if and how community and faith-based initiatives in London, Paris, Berlin and Rome integrate trust and trustworthiness in their activities to improve their practical effectiveness. Across the four cities, the project compared the role of trustworthiness and trust among three different types of initiatives aimed at increasing local social and economic resources, individual aspirations and personal growth: interreligious understanding, social action and business associations. The research identified and investigated the significance of qualities associated with trustworthiness (for instance, reliability and honesty), of demonstrating trustworthiness, and of placing trust, for the functioning, sustainability and impact of each type of initiative.</p>&#13; <p>Our project addressed a gap in ethnographic research on the practical role of trust and trustworthiness at a critical moment for understanding how individuals of different ethno-religious backgrounds in Europe learn to trust each other and how community-building initiatives in deprived areas enhance individual growth.</p>&#13; <p>In Europe, the far right is becoming stronger politically and anti-immigrant rhetoric is becoming more pervasive. Marginalisation of religious practice in public space, particularly regarding Islam, has also become more prominent across the region. 
At the same time, public sector cuts and increasing deprivation and unemployment in Europe have resulted in clergy and lay leaders becoming more prominent advocates for vulnerable populations, and community and faith-based social action has become vital in addressing basic human and social needs – demonstrated by the dramatic expansion of church-run food banks in the UK, for example.</p>&#13; <p>Our preliminary research suggests that community-level responses to austerity are making trust and trustworthiness an integral part of their operations and aims, emphasising honesty, reliability and competence. In providing this kind of data on the practice and practical importance of trust at a local level, the project should prove valuable to community leaders and policy makers seeking to improve the effectiveness of local cooperation, not only in the areas included in the study but also beyond.</p>&#13; <p>In emphasising the relation between character development and the integration of trust and trustworthiness into organisational practices, the research may also demonstrate that changes to practices in other sectors, like banking, may have profound implications for the development of individual qualities like honesty and reliability.</p>&#13; <p>Our hope as well is that the research project will shed light both on how relations between different ethno-religious groups are evolving in communities under economic pressure and on the practical importance of trustworthiness and trust within community responses to these pressures. By integrating analysis of attitudes and behaviour between individuals of different faiths (and none) with community-based work in an era of austerity, the project may indicate ways to simultaneously advance interfaith relations and individual opportunities and welfare at a local level. 
In addition, by including theology in the multidisciplinary project, the Intelligent Trust research project will contribute to efforts to regain momentum towards a genuine interfaith conversation.</p>&#13; <p><em>Drs Shana Cohen and Ed Kessler are at the Woolf Institute (<a href="https://www.woolf.cam.ac.uk/">www.woolf.cam.ac.uk/</a>), which is dedicated to the study of relations between Jews, Christians and Muslims.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Shana Cohen and Ed Kessler discuss how individuals of different ethno-religious backgrounds in Europe can learn to trust each other, and how community-building initiatives in deprived areas can enhance the resilience of society.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Our preliminary research suggests that community-level responses to austerity are making trust and trustworthiness an integral part of their operations and aims</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Shana Cohen and Ed Kessler</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/infinitejeff/77855778/" target="_blank">Infinite Jeff</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Christmas dinner</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a 
href="http://creativecommons.org/licenses/by-nc-sa/3.0/"><img alt="" src="/sites/www.cam.ac.uk/files/80x15.png" style="width: 80px; height: 15px;" /></a></p>&#13; <p>This work is licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">Creative Commons Licence</a>. If you use this content on your site please link back to this page.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-noncommercial-sharealike">Attribution-Noncommercial-ShareAlike</a></div></div></div> Fri, 14 Feb 2014 12:30:30 +0000 lw355 119032 at