University of Cambridge - Haydn Belfield /taxonomy/people/haydn-belfield en Aim policies at ‘hardware’ to ensure AI safety, say experts /stories/hardware-ai-safety <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Chips and datacentres – the “compute” driving the AI revolution – may be the most effective targets for risk-reducing AI policies, according to a new report.</p> </p></div></div></div> Wed, 14 Feb 2024 11:28:30 +0000 fpjl2 244461 at Risky business /stories/open-cambridge-existential-risk-map <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Launched during Open Cambridge, a new self-guided trail, created by researchers at Cambridge’s Centre for the Study of Existential Risk (CSER), takes the public on an altogether different tour of the city.</p> </p></div></div></div> Thu, 07 Sep 2023 15:16:50 +0000 zs332 241661 at Community of ethical hackers needed to prevent AI’s looming ‘crisis of trust’ /research/news/community-of-ethical-hackers-needed-to-prevent-ais-looming-crisis-of-trust <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/hackers.jpg?itok=-dK5OSOt" alt="" title="Human face in the algorithm, Credit: Getty" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The Artificial Intelligence industry should create a global community of hackers and 'threat modellers' dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.</p> <p>This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new 'call to action' published <a href="https://www.science.org/doi/10.1126/science.abi7176">in the journal <em>Science</em></a>.</p> <p>They say that companies building intelligent technologies should harness techniques such as 'red team' hacking, audit trails and 'bias bounties' – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use on the wider public.</p> <p>Otherwise, the industry faces a 'crisis of trust' in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.</p> <p>The novelty and 'black box' nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing or third-party analysis, according to lead author Dr Shahar Avin of CSER.</p> <p>The experts argue that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future – and trust is fraying.
</p> <p>The new publication puts forward a series of 'concrete' measures that they say should be adopted by AI developers.</p> <p>“There are critical gaps in the processes required to create AI that has earned public trust. Some of these gaps have enabled questionable behavior that is now tarnishing the entire field,” said Avin.</p> <p>“We are starting to see a public backlash against technology. This ‘tech-lash’ can be all-encompassing: either all AI is good or all AI is bad.</p> <p>“Governments and the public need to be able to easily identify the trustworthy, the snake-oil salesmen, and the clueless,” Avin said. “Once you can do that, there is a real incentive to be trustworthy. But while you can’t tell them apart, there is a lot of pressure to cut corners.”</p> <p>Co-author and CSER researcher Haydn Belfield said: “Most AI developers want to act responsibly and safely, but it’s been unclear what concrete steps they can take until now. Our report fills in some of these gaps.”</p> <p>The idea of AI 'red teaming' – sometimes known as white-hat hacking – takes its cue from cyber-security.</p> <p>“Red teams are ethical hackers playing the role of malign external agents,” said Avin. “They would be called in to attack any new AI, or strategise on how to use it for malicious purposes, in order to reveal any weaknesses or potential for harm.”</p> <p>While a few big companies have internal capacity to “red team” – which comes with its own ethical conflicts – the report calls for a third-party community, one that can independently interrogate new AI and share any findings for the benefit of all developers.</p> <p>A global resource could also offer high-quality red teaming to the small start-up companies and research labs developing AI that could become ubiquitous.</p> <p>The new report, a concise update of <a href="http://www.towardtrustworthyai.com/">more detailed recommendations</a> published by a group of 59 experts last year, also highlights the potential for bias and safety “bounties” to increase openness and public trust in AI.</p> <p>This means financially rewarding any researcher who uncovers flaws in AI that have the potential to compromise public trust or safety – such as racial or socioeconomic biases in algorithms used for medical or recruitment purposes.</p> <p>Earlier this year, Twitter began offering bounties to those who could identify biases in their image-cropping algorithm.</p> <p>Companies would benefit from these discoveries, say researchers, and be given time to address them before they are publicly revealed. Avin points out that, currently, much of this “pushing and prodding” is done on a limited, ad-hoc basis by academics and investigative journalists.</p> <p>The report also calls for auditing by trusted external agencies – and for open standards on how to document AI to make such auditing possible – along with platforms dedicated to sharing “incidents”: cases of undesired AI behavior that could cause harm to humans.</p> <p>These, along with meaningful consequences for failing an external audit, would significantly contribute to an “ecosystem of trust”, say the researchers.</p> <p>“Some may question whether our recommendations conflict with commercial interests, but other safety-critical industries, such as the automotive or pharmaceutical industry, manage it perfectly well,” said Belfield.</p> <p>“Lives and livelihoods are ever more reliant on AI that is closed to scrutiny, and that is a recipe for a crisis of trust.
It’s time for the industry to move beyond well-meaning ethical principles and implement real-world mechanisms to address this,” he said.</p> <p>Added Avin: “We are grateful to our collaborators who have highlighted a range of initiatives aimed at tackling these challenges, but we need policy and public support to create an ecosystem of trust for AI.”</p> <p> </p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A global hacker 'red team' and rewards for hunting algorithmic biases are just some of the recommendations from experts who argue that AI faces a 'tech-lash' unless firm measures are taken to increase public trust.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">We need policy and public support to create an ecosystem of trust for AI</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Shahar Avin</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Getty</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Human face in the algorithm</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 10 Dec 2021 09:55:23 +0000 fpjl2 228691 at Opinion: Climate change, pandemics, biodiversity loss – no country is sufficiently prepared /research/news/opinion-climate-change-pandemics-biodiversity-loss-no-country-is-sufficiently-prepared <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/conv.jpg?itok=N2-vs8k7" alt="Banner from a climate strike in Erlangen, Germany" title="Banner from a climate strike in Erlangen, Germany, Credit: Markus Spiske" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>There’s little that the left and the right agree on these days. But surely one thing is beyond question: that national governments must protect citizens from the gravest threats and risks they face. Although our government, wherever we are in the world, may not be able to save everyone from a pandemic or protect people and infrastructure from a devastating cyberattack, surely they have thought through these risks in advance and have well-funded, adequately practiced plans?</p>&#13; &#13; <p>Unfortunately, the answer to this question is an emphatic no.</p>&#13; &#13; <p>Not all policy areas are subject to this challenge. National defence establishments, for example, often have the frameworks and processes that facilitate policy decisions for extreme risks. But more often than not, and on more issues than not, governments fail to imagine how worst-case scenarios can come about – much less plan for them. Governments have never been able to divert significant attention from the here and happening to the future and uncertain.</p>&#13; &#13; <p>A <a href="https://www.gcrpolicy.com/understand-overview">recent report</a> published by Cambridge University’s Centre for the Study of Existential Risk argues that this needs to change. If even one catastrophic risk manifests – whether through nature, accident or intention – it would harm human security, prosperity and potential on a scale never before seen in human history. There are <a href="https://www.gcrpolicy.com/the-policy-options">concrete steps</a> governments can take to address this, but they are currently being neglected.</p>&#13; &#13; <p>The risks that we face today are many and varied.
They include:</p>&#13; &#13; <ul>&#13; <li><a href="https://theconversation.com/what-climate-tipping-points-are-and-how-they-could-suddenly-change-our-planet-49405">Tipping points</a> in the environmental system due to climate change or mass <a href="https://theconversation.com/tipping-point-huge-wildlife-loss-threatens-the-life-support-of-our-small-planet-106037">biodiversity loss</a>.</li>&#13; <li>Malicious, or accidentally harmful, use of <a href="https://theconversation.com/ai-could-be-a-force-for-good-but-were-currently-heading-for-a-darker-future-124941">artificial intelligence</a>.</li>&#13; <li>Malicious use of, or unintended consequences from, advanced <a href="https://theconversation.com/the-good-the-bad-and-the-deadly-the-dark-side-of-biotechnology-890">biotechnologies</a>.</li>&#13; <li>A natural or engineered global pandemic.</li>&#13; <li>Intentional, miscalculated, or accidental use of <a href="https://theconversation.com/even-a-minor-nuclear-war-would-be-an-ecological-disaster-felt-throughout-the-world-82288">nuclear weapons</a>.</li>&#13; </ul>&#13; &#13; <p>Each of these global catastrophic risks could cause unprecedented harm. A pandemic, for example, could speed around our hyper-connected world, threatening hundreds of millions – potentially billions – of people. In this globalised world of just-in-time delivery and global supply chains, we are more vulnerable to disruption than ever before. And the secondary effects of instability, mass migration and unrest may be comparably destructive. If any of these events occurred, we would pass on a diminished, fearful and wounded world to our descendants.</p>&#13; &#13; <p>So how did we come to be so woefully unprepared, and what, if anything, can our governments do to make us safer?</p>&#13; &#13; <p><strong>A modern problem</strong></p>&#13; &#13; <p>Dealing with catastrophic risks on a global scale is a particularly modern problem. The risks themselves are a result of modern trends in population, information, politics, warfare, technology, climate and environmental damage.</p>&#13; &#13; <p>These risks are a problem for governments that are set up around traditional threats. Defence forces were built to protect from external menaces, mostly foreign invading forces. Domestic security agencies became increasingly significant in the 20th century, as threats to sovereignty and security – such as organised crime, domestic terrorism, extreme political ideologies and sophisticated espionage – increasingly came from inside national borders.</p>&#13; &#13; <p>Unfortunately, these traditional threats are no longer the greatest concern today. Risks arising from the domains of technology, environment, biology and warfare don’t fall neatly into government’s view of the world. Instead, they are varied, global, complex and catastrophic.</p>&#13; &#13; <p>As a result, these risks are currently not a priority for governments. Individually, they are quite unlikely. And such low-probability, high-impact events are difficult to mobilise a response to. In addition, their unprecedented nature means we haven’t yet been taught a sharp lesson in the need to prepare for them. Many of the risks could take decades to arise, which conflicts with typical political time scales.</p>&#13; &#13; <p>Governments, and the bureaucracies that support them, are not positioned to handle what’s coming. They don’t have the right incentives or skill sets to manage extreme risks, at least beyond natural disasters and military attacks.
They are often stuck on old problems, and struggle to respond with agility to what’s new or emerging. Risk management as a practice is not a government’s strength. And technical expertise, especially on these challenging problem sets, tends to reside outside government.</p>&#13; &#13; <p>Perhaps most troubling is the fact that any attempt to tackle these risks is not nationally confined: it would benefit everyone in the world – and indeed future generations. When the benefits are dispersed and the costs immediate, it is tempting to coast and hope others will pick up the slack.</p>&#13; &#13; <p><strong>Time to act</strong></p>&#13; &#13; <p>Despite these daunting challenges, governments have the capability and responsibility to increase national readiness for extreme events.</p>&#13; &#13; <p>The first step is for governments to improve their own understanding of the risks. Developing a better understanding of extreme risks is not as simple as conducting better analysis or more research. It requires a whole-of-government framework with explicit strategies for understanding the types of risks we face, as well as their causes, impacts, probabilities and time scales.</p>&#13; &#13; <p>With this plan, governments can chart more secure and prosperous futures for their citizens, even if the most catastrophic possibilities never come to pass.</p>&#13; &#13; <p>Governments around the world are already working towards improving their understanding of risk. For example, the United Kingdom is a world leader in applying an all-hazard <a href="https://post.parliament.uk/research-briefings/post-pb-0031/">national risk assessment process</a>. This assessment ensures governments understand all the hazards – natural disasters, pandemics, cyber attacks, space weather, infrastructure collapse – that their country faces. It helps local first responders to prepare for the most damaging scenarios.</p>&#13; &#13; <p>Finland’s <a href="https://www.eduskunta.fi/EN/lakiensaataminen/valiokunnat/tulevaisuusvaliokunta/Pages/default.aspx">Committee for the Future</a>, meanwhile, is an example of a parliamentary select committee that injects a dose of much-needed long-term thinking into domestic policy. It acts as a think tank for futures, science and technology policy and provides advice on legislation coming forward that has an impact on Finland’s long-range future.</p>&#13; &#13; <p>And Singapore’s <a href="https://www.csf.gov.sg/who-we-are/">Centre for Strategic Futures</a> is leading in “horizon scanning”, a set of methods that helps people think about the future and potential scenarios. This is not prediction. It’s thinking about what might be coming around the corner, and using that knowledge to inform policy.</p>&#13; &#13; <p>But these actions are few and far between.</p>&#13; &#13; <p>We need all governments to put more energy towards understanding the risks, and acting on that knowledge. Some countries may even need grand changes to their political and economic systems, a level of change that typically only occurs after a catastrophe. We cannot – and do not have to – wait for these structural changes or for a global crisis. Forward-leaning leaders must act now to better understand the risks that their countries face.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE.
--><img alt=" ֱ̽Conversation" height="1" src="https://counter.theconversation.com/content/123466/count.gif?distributor=republish-lightbox-basic" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important; text-shadow: none !important" width="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. ֱ̽page counter does not collect any personal data. More info: http://theconversation.com/republishing-guidelines --></p>&#13; &#13; <p><em><strong><span><a href="https://wintoncentre.maths.cam.ac.uk/about/people/gabriel-recchia/">Gabriel Recchia</a>, Research Associate, Winton Centre for Risk and Evidence Communication, and <a href="https://www.cser.ac.uk/team/haydn-belfield/">Haydn Belfield</a>, Research Associate, Centre for the Study of Existential Risk.</span></strong></em></p>&#13; &#13; <p><em>This article is republished from <a href="https://theconversation.com/"> ֱ̽Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/climate-change-pandemics-biodiversity-loss-no-country-is-sufficiently-prepared-123466">original article</a>.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Two Cambridge risk researchers discuss how national governments are still stuck on "old problems", and run through the things that should be keeping our leaders awake at night. </p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Risks arising from the domains of technology, environment, biology and warfare don’t fall neatly into government’s view of the world</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Gabriel Recchia and Haydn Belfield</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Markus Spiske</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Banner from a climate strike in Erlangen, Germany</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 01 Nov 2019 14:53:32 +0000 Anonymous 208602 at