University of Cambridge - Stephen Cave /taxonomy/people/stephen-cave

Cambridge launches Institute for Technology and Humanity /stories/institute-technology-humanity-launch
Tue, 21 Nov 2023
<p>A major interdisciplinary initiative has been launched that aims to meet the challenges and opportunities of new technologies as they emerge, today and far into the future.</p>

Cinema has helped 'entrench' gender inequality in AI /stories/whomakesAI
Mon, 13 Feb 2023
<p>Study finds that just 8% of all depictions of AI professionals from a century of film are women – and half of these are shown as subordinate to men.</p>

Cambridge awarded €1.9m to stop AI undermining ‘core human values’ /research/news/cambridge-awarded-eu1-9m-to-stop-ai-undermining-core-human-values
Wed, 09 Feb 2022
<p>Work at the Leverhulme Centre for the Future of Intelligence will aim to prevent the embedding of existing inequalities – from gender to class and race – in emerging technologies.</p>
<p><em>Image: Artificial intelligence. Credit: Getty Images.</em></p>
<p>Artificial intelligence is transforming society as algorithms increasingly dictate access to jobs, insurance, justice and medical treatment, as well as our daily interactions with friends and family.</p>
<p>As these technologies race ahead, we are starting to see unintended social consequences: algorithms that promote everything from racial bias in healthcare to the misinformation eroding faith in democracies.</p>
<p>Researchers at the University of Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (LCFI) have now been awarded nearly two million euros to build a better understanding of how AI can undermine “core human values”.</p>
<p>The grant will allow LCFI and its partners to work with the AI industry to develop anti-discriminatory design principles that put ethics at the heart of technological progress.</p>
<p>The LCFI team will create toolkits and training for AI developers to prevent existing structural inequalities – from gender to class and race – from becoming embedded in emerging technology and sending such social injustices into hyperdrive.</p>
<p>The donation, from the German philanthropic foundation Stiftung Mercator, is part of a package of close to €4 million that will see the Cambridge team – including social scientists and philosophers as well as technology designers – working with the University of Bonn.</p>
<p>The new research project, “Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures”, comes as the European Commission negotiates its Artificial Intelligence Act, which has ambitions to ensure AI becomes more “trustworthy” and “human-centric”. The Act will require AI systems to be assessed for their impact on fundamental rights and values.
</p>
<p>“There is a huge knowledge gap,” said Dr Stephen Cave, Director of LCFI. “No one currently knows what the impact of these new systems will be on core values, from democratic rights to the rights of minorities, or what measures will help address such threats.”</p>
<p>“Understanding the potential impact of algorithms on human dignity will mean going beyond the code and drawing on lessons from history and political science,” Cave said.</p>
<p>LCFI made the headlines last year when it launched the world’s only <a href="https://www.lcfi.ac.uk/master-ai-ethics/">Master’s programme</a> dedicated to teaching AI ethics to industry professionals. This grant will allow it to develop new research strands, such as investigations of human dignity in the “digital age”. “AI technologies are leaving the door open for dangerous and long-discredited pseudoscience,” said Cave.</p>
<p>He points to facial recognition software that claims to identify “criminal faces”, arguing that such assertions are akin to Victorian ideas of phrenology – that a person’s character could be detected by skull shape – and its associated scientific racism.</p>
<p>Dr Kanta Dihal, who will co-lead the project, is to investigate whose voices actually shape society’s visions of a future with AI. “Currently our ideas of AI around the world are conjured by Hollywood and a small rich elite,” she said.</p>
<p>The LCFI team will include Cambridge researchers Dr Kerry Mackereth and Dr Eleanor Drage, co-hosts of the podcast “<a href="https://www.gender.cam.ac.uk/technology-gender-and-intersectionality-research-project/the-good-robot-podcast">The Good Robot</a>”, which explores whether or not we can have ‘good’ technology and why feminism matters in the tech space.</p>
<p>Mackereth will be working on a project that explores the relationship between anti-Asian racism and AI, while Drage will be looking at the use of AI for recruitment and workforce management.</p>
<p>“AI tools are going to revolutionise hiring and shape the future of work in the 21st century. Now that millions of workers are exposed to these tools, we need to make sure that they do justice to each candidate, and don’t perpetuate the racist pseudoscience of 19th-century hiring practices,” says Drage.</p>
<p>“It’s great that governments are now taking action to ensure AI is developed responsibly,” said Cave. “But legislation won’t mean much unless we really understand how these technologies are impacting on fundamental human rights and values.”</p>

Use of AI to fight COVID-19 risks harming 'disadvantaged groups', experts warn /research/news/use-of-ai-to-fight-covid-19-risks-harming-disadvantaged-groups-experts-warn
Tue, 16 Mar 2021
<p>Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.</p>
<p><em>Image: COVID-19 world map. Credit: <a href="https://unsplash.com/@martinsanchez">Martin Sanchez</a>.</em></p>
<p>This is according to researchers at the University of Cambridge's <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (CFI) in two articles published in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain normalcy in 2021.</p>
<p>"Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," said Dr Stephen Cave, Director of CFI and lead author of <a href="https://www.bmj.com/content/372/bmj.n364">one of the articles</a>.</p>
<p>"The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology."</p>
<p><a href="https://www.bmj.com/content/372/bmj.n304">In a further paper</a>, co-authored by CFI's Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data.</p>
<p>Datasets used to "train" and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of "lower socioeconomic status".</p>
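<p>To make that concrete, here is a minimal, self-contained sketch (illustrative only: the patient generator, group labels and all numbers below are hypothetical, and nothing here is drawn from the BMJ papers). It shows how a single alert threshold tuned on data dominated by one group can miss far more true cases in an under-represented group:</p>
<pre><code>import random

random.seed(0)

def make_patient(group):
    # Hypothetical generator: both groups deteriorate at the same rate,
    # but a routine measurement reads lower for group B patients.
    deteriorates = random.random() < 0.3
    baseline = 50 if group == "A" else 42
    reading = baseline + (15 if deteriorates else 0) + random.gauss(0, 4)
    return reading, deteriorates, group

# Skewed training data: group B supplies only 5% of the records.
train = [make_patient("A") for _ in range(950)] + \
        [make_patient("B") for _ in range(50)]

# "Model": the single alert threshold that best fits the training data.
threshold = max(
    (sum((reading >= t) == sick for reading, sick, _ in train), t)
    for t in range(30, 80)
)[1]

# A balanced test set reveals very different miss rates per group.
test = [make_patient(g) for g in ("A", "B") for _ in range(2000)]
for g in ("A", "B"):
    sick_cases = [r for r, sick, grp in test if grp == g and sick]
    missed = sum(r < threshold for r in sick_cases) / len(sick_cases)
    print(f"group {g}: true deteriorations missed = {missed:.0%}")
</code></pre>
<p>In this toy run the threshold settles near the majority group's optimum, so it flags most deteriorating majority-group patients while missing roughly half of the minority-group cases – the kind of one-size-fits-all failure the researchers warn about.</p>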
<p>"COVID-19 has already had a disproportionate impact on vulnerable communities. We know these systems can discriminate, and any algorithmic bias in treating the disease could land a further brutal punch," Hagerty said.</p>
<p>In December, protests ensued when Stanford Medical Center's algorithm prioritised home-workers for vaccination over those on the COVID-19 wards. "Algorithms are now used at a local, national and global scale to define vaccine allocation. In many cases, AI plays a central role in determining who is best placed to survive the pandemic," said Hagerty.</p>
<p>"In a health crisis of this magnitude, the stakes for fairness and equity are extremely high."</p>
<p>Along with colleagues, Hagerty highlights the well-established "discrimination creep" found in AI that uses "natural language processing" technology to pick up symptom profiles from medical records – reflecting and exacerbating biases against minorities already present in the case notes.</p>
<p>They point out that some hospitals already use these technologies to extract diagnostic information from a range of records, and some are now using this AI to identify symptoms of COVID-19 infection.</p>
<p>Similarly, the use of track-and-trace apps creates the potential for biased datasets. The researchers write that, in the UK, over 20% of those aged over 15 lack essential digital skills, and up to 10% of some population "sub-groups" don't own smartphones.</p>
<p>"Whether originating from medical records or everyday technologies, biased datasets applied in a one-size-fits-all manner to tackle COVID-19 could prove harmful for those already disadvantaged," said Hagerty.</p>
<p>In the BMJ articles, the researchers point to examples such as the fact that a lack of data on skin colour makes it almost impossible for AI models to produce accurate large-scale computation of blood-oxygen levels. Or how an algorithmic tool used by the US prison system to predict reoffending – and proven to be racially biased – has been repurposed to manage its COVID-19 infection risk.</p>
<p>The Leverhulme Centre for the Future of Intelligence recently launched the UK's <a href="https://www.lcfi.ac.uk/master-ai-ethics/">first Master's course for ethics in AI</a>. For Cave and colleagues, machine learning in the COVID-19 era should be viewed through the prism of biomedical ethics – in particular the "four pillars".</p>
<p>The first is beneficence. "Use of AI is intended to save lives, but that should not be used as a blanket justification to set otherwise unwelcome precedents, such as widespread use of facial recognition software," said Cave.</p>
<p>In India, biometric identity programmes can be linked to vaccination distribution, raising concerns for data privacy and security. Other vaccine allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI, says Hagerty. "Proprietary algorithms make it hard to look into the 'black box' and see how they determine vaccine priorities."</p>
<p>The second is non-maleficence, or avoiding needless harm. A system programmed solely to preserve life will not consider rates of 'long COVID', for example. Thirdly, human autonomy must be part of the calculation. Professionals need to trust technologies, and designers should consider how systems affect human behaviour – from personal precautions to treatment decisions.</p>
<p>Finally, data-driven AI must be underpinned by ideals of social justice. "We need to involve diverse communities, and consult a range of experts, from engineers to frontline medical teams. We must be open about the values and trade-offs inherent in these systems," said Cave.</p>
<p>"AI has the potential to help us solve global problems, and the pandemic is unquestionably a major one. But relying on powerful AI in this time of crisis brings ethical challenges that must be considered to secure public trust."</p>

First Master’s programme on managing the risks of AI launched by Cambridge /research/news/first-masters-programme-on-managing-the-risks-of-ai-launched-by-cambridge
Mon, 07 Dec 2020
<p>The UK’s first Master’s degree in the responsible use of artificial intelligence (AI) is being launched by the University of Cambridge.</p>
<p><em>Image: Motherboard. Credit: <a href="https://unsplash.com/photos/red-and-black-abstract-illustration-aQYgUYwnCsM">Michael Dziedzic</a>.</em></p>
<p>Artificial intelligence is already a part of our everyday lives in forms like Alexa, Amazon’s virtual assistant, facial identification, and Google Maps. Thinking machines have huge potential to greatly enhance life for billions of people, but the technology also has huge potential downsides.</p>
<p>It can embed sexism, as when an algorithm for ranking job applicants automatically downgraded women; or be used for intrusive surveillance, as when facial recognition algorithms decide who is a ‘potential criminal’.</p>
<p>The <a href="https://www.lcfi.ac.uk/education/mst">new degree in AI Ethics</a> aims to teach professionals in all areas of life – from engineers and policymakers to health administrators and HR managers – how to use AI for good, not ill.</p>
<p>The programme is led by the Leverhulme Centre for the Future of Intelligence (CFI), an interdisciplinary research centre based at the University of Cambridge. Over the past four years, it has established itself at the forefront of AI ethics research worldwide, working in partnership with the University of Oxford, Imperial College London, and UC Berkeley.</p>
<p>CFI is partnering with the University of Cambridge’s <a href="https://www.ice.cam.ac.uk/">Institute for Continuing Education</a>, which provides flexible and accessible higher education courses for adults, to deliver the two-year part-time Master’s degree.</p>
<p>Executive Director of CFI, Dr Stephen Cave, said: “Everyone is familiar with the idea of AI rising up against us. It’s been a staple of many celebrated films like Terminator in the 1980s, 2001: A Space Odyssey in the 1960s, and Westworld in the 1970s, and more recently in the popular TV adaptation.</p>
<p>“But there are lots of risks posed by AI that are much more immediate than a robot revolt. There have been several examples which have featured prominently in the news, showing how it can be used in ways that exacerbate bias and injustice.</p>
<p>“It’s crucial that future leaders are trained to manage these risks so we can make the most of this amazing technology. This pioneering new course aims to do just that.”</p>
<p>While society’s understanding of AI ethics has grown fast, bridges from research to real-life applications are scarce, and access to rigorous qualifications in responsible AI is sorely lacking.</p>
<p>Dr Cave says the new degree will address those concerns. “People are using AI in different ways across every industry, and they are asking themselves, ‘How can we do this in a way that broadly benefits society?’</p>
<p>“We have brought together cutting-edge knowledge on the responsible and beneficial use of AI, and want to impart that to the developers, policymakers, businesspeople and others who are making decisions right now about how to use these technologies.”</p>
<p>AI has already demonstrated a range of benefits for humanity. The COVID-19 pandemic has seen artificial intelligence rushed into experimental use at scale, bringing the importance of ethical AI competence into even greater relief. For example, AI has been deployed to fight the pandemic in the development of vaccines, early diagnosis and contact tracing.</p>
<p>But its use has also caused concern, as when governments used artificial intelligence to track citizens and prevent them from leaving their homes.</p>
<p>The Master of Studies in AI Ethics and Society promises to develop leaders who can confidently tackle the most pressing AI challenges facing their workplaces. These include issues of privacy, surveillance, justice, fairness, algorithmic bias, misinformation, microtargeting, Big Data, responsible innovation and data governance.</p>
<p>The curriculum spans a wide range of academic areas including philosophy, machine learning, policy, race theory, design, computer science, engineering, and law. Run by a specialist research centre, the course will include the latest subject research taught by world-leading experts.</p>
<p>Dedicated to meeting the practical needs of professionals, the course will address concrete questions such as:</p>
<p>· How can I tell if an AI product is trustworthy?</p>
<p>· How can I anticipate and mitigate possible negative impacts of a technology?</p>
<p>· How can I design a process of responsible innovation for my business?</p>
<p>· How do I safeguard against algorithmic bias?</p>
<p>· How do I keep data private, secure, and properly managed?</p>
<p>· How can I involve diverse stakeholders in AI decision-making?</p>
<p>The hybrid programme will consist of online classes and intensive week-long residentials at a University of Cambridge college. It has been designed in this flexible format to maximise the opportunities for working professionals to join the course.</p>
<p>Dr James Gazzard said: “The Institute of Continuing Education is delighted to be a partner in this distinctive Master’s course. Our role is to provide adult students with access to cutting-edge knowledge and skills.</p>
<p>“As we all consider a post-COVID-19 future, we know that the Fourth Industrial Revolution will see the acceleration of the opportunities and threats presented by AI, and this course is well placed to support adults to re-skill and up-skill in this important emerging field.”</p>
<p>In addition to its 800-year history of innovation and leadership in technology and the humanities, the University of Cambridge is set within the renowned ‘Silicon Fen’, a hub of AI innovation home to tech giants and start-ups from Microsoft and Amazon to ARM and Apple.</p>
<p>In gathering professionals from across the country and internationally, the course will build diverse networks of professionals, researchers and government leaders dedicated to responsible AI. This will help position the UK as a global leader in beneficial AI, now and into the future.</p>
<p>Applications for the new degree close on 31 March 2021, with the first cohort commencing in October 2021. For further information about the course, please visit: <a href="https://www.lcfi.ac.uk/master-ai-ethics/">http://lcfi.ac.uk/master-ai-ethics/</a></p>

Whiteness of AI erases people of colour from our ‘imagined futures’, researchers argue /research/news/whiteness-of-ai-erases-people-of-colour-from-our-imagined-futures-researchers-argue
Thu, 06 Aug 2020
<p>The overwhelming ‘Whiteness’ of artificial intelligence – from stock images and cinematic robots to the dialects of virtual assistants – removes people of colour from humanity’s visions of its high-tech future.</p>
<p><em>Image: Sophia, Hanson Robotics Ltd, speaking at the AI for GOOD Global Summit, Geneva. Credit: <a href="https://www.flickr.com/photos/itupictures/34327888294">ITU/R.Farrell</a>.</em></p>
<p>This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a “racially homogenous” workforce of aspiring technologists, building machines with bias baked into their algorithms.</p>
<p>They say that cultural depictions of AI as White need to be challenged, as they do not offer a “post-racial” future but rather one from which people of colour are simply erased.</p>
<p>The researchers, from Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence (CFI)</a>, say that AI, like other science fiction tropes, has always reflected the racial thinking in our society.</p>
<p>They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials – from the “orientalised” alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.</p>
<p>But artificial intelligence is portrayed as White because, unlike species from other planets, AI has attributes used to “justify colonialism and segregation” in the past: superior intelligence, professionalism and power.</p>
<p>“Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a White machine,” said Dr Kanta Dihal, who leads CFI’s ‘<a href="https://www.lcfi.ac.uk/research/project/decolonising-ai">Decolonising AI</a>’ initiative.</p>
<p>“People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as White, that could have dangerous consequences for humans who are not,” she said.</p>
<p>Together with her colleague Dr Stephen Cave, Dihal is the author of a new paper on the case for decolonising AI, published today in the journal <a href="https://dx.doi.org/10.1007/s13347-020-00415-6"><em>Philosophy and Technology</em></a>.</p>
<p>The paper brings together recent research from a range of fields, including Human-Computer Interaction and Critical Race Theory, to demonstrate that machines can be racialised, and that this perpetuates “real world” racial biases.</p>
<p>This includes work on how robots are seen to have distinct racial identities, with Black robots receiving more online abuse, and a study showing that people feel closer to virtual agents when they perceive shared racial identity.</p>
<p>“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard White middle-class English,” said Dihal. “Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”</p>
<p>The researchers conducted their own investigation into search engines, and found that all non-abstract results for AI had either Caucasian features or were literally the colour white.</p>
<p>A typical example of AI imagery adorning book covers and mainstream media articles is Sophia: the hyper-Caucasian humanoid declared an “innovation champion” by the UN Development Programme. But this is just a recent iteration, say the researchers.</p>
<p>“Stock imagery for AI distils the visualisations of intelligent machines in western popular culture as it has developed over decades,” said Cave, Executive Director of CFI.</p>
<p>“From Terminator to Blade Runner, Metropolis to Ex Machina, all are played by White actors or are visibly White onscreen. Androids of metal or plastic are given White features, such as in I, Robot. Even disembodied AI – from HAL 9000 to Samantha in Her – have White voices. Only very recently have a few TV shows, such as Westworld, used AI characters with a mix of skin tones.”</p>
<p>Cave and Dihal point out that even works clearly based on slave rebellion, such as Blade Runner, depict their AIs as White. “AI is often depicted as outsmarting and surpassing humanity,” said Dihal. “White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior.”</p>
<p>“Images of AI are not generic representations of human-like machines: their Whiteness is a proxy for their status and potential,” added Dihal.</p>
<p>“Portrayals of AI as White situate machines in a power hierarchy above currently marginalised groups, and relegate people of colour to positions below that of machines. As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential.”</p>
<p>“The perceived Whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”</p>

AI: Life in the age of intelligent machines /research/news/ai-life-in-the-age-of-intelligent-machines
Fri, 22 Feb 2019
<p>In a new film, leading Cambridge University researchers discuss the far-reaching advances offered by artificial intelligence – and consider the consequences of developing systems that think far beyond human abilities.</p>
<p>We are said to be standing on the brink of a fourth industrial revolution – one that will see new forms of artificial intelligence (AI) underpinning almost every aspect of our lives. The new technologies will help us to tackle some of the greatest challenges that face our world.</p>
<p>In fact AI is already very much part of our daily lives, says <a href="https://www.cl.cam.ac.uk/~mj201/">Dr Mateja Jamnik</a>, one of the experts who appear in the film. “Clever algorithms are being executed in clever ways all around us... and we are only a decade away from a future where we are able to converse across multiple languages, where doctors will be able to diagnose better, where drivers will be able to drive more safely.”</p>
<p>Ideas around AI “are being dreamt up by thousands of people all over the world – imaginative young people who see a problem and think about how they can solve it using AI… whether it’s recommending a song you’ll like or curing us of cancer,” says <a href="https://www.lcfi.ac.uk/team/stephen-cave/">Professor Stephen Cave</a>.</p>
<p>Much of the excitement relates to being able to leverage the power of Big Data, says <a href="https://www.eng.cam.ac.uk/profiles/zg201">Professor Zoubin Ghahramani</a>. Without AI, how else could we make sense of the vastly complex interconnected systems we now have at our fingertips?</p>
<p>But what do we think about AI and the future it promises? Our perceptions are shaped by our cultural prehistory, stretching right back to Homer, says <a href="https://www.lcfi.ac.uk/team/sarah-dillon/">Dr Sarah Dillon</a>. How we feel about the dawning of a new technology is linked to centuries-old thinking about robotics, automatons and intelligence beyond our own.</p>
<p>And what happens when we come to rely on the tools we are empowering to do these amazing things? <a href="https://www.cser.ac.uk/team/martin-rees/">Professor Lord Martin Rees</a> reflects on the transition to a future of AI-aided jobs: what will this look like? How will we ensure that the wealth created by AI benefits wider society and avoids worsening inequality?</p>
<p>Our researchers are asking fundamental questions about the ethics, trust and humanity of AI system design. “It can’t simply be enough for the leading scientists, as brilliant as they are, to be pushing ahead as quickly as possible,” says <a href="https://www.cser.ac.uk/team/sean-o-heigeartaigh/">Dr Seán Ó hÉigeartaigh</a>. “We also need there to be ongoing conversations and collaborations with the people who are thinking about the ethical impacts of the technology.</p>
<p>“The idea that AI can help us understand ourselves and the universe at a much deeper level is about as far-reaching a goal for AI as could be.”</p>
<p><em>Video: AI: Humanity’s Last Invention? (https://www.youtube-nocookie.com/embed/MK31E4mSbXw)</em></p>
<p><em>Inset image: read more about our AI research in the University’s research magazine; <a href="/system/files/issue_35_research_horizons_new.pdf">download</a> a pdf; <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">view</a> on Issuu.</em></p>
<p>Related links: <a href="https://www.lcfi.ac.uk/team/stephen-cave/">Leverhulme Centre for the Future of Intelligence</a>; <a href="https://www.cser.ac.uk/">Centre for the Study of Existential Risk</a></p>

Artificial intelligence is growing up fast: what’s next for thinking machines? /research/features/artificial-intelligence-is-growing-up-fast-whats-next-for-thinking-machines
Tue, 06 Feb 2018
<p>Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask how human-like we want their intelligence to be, and what impact machines will have on our jobs.</p>
<p><em>Image: Artificial intelligence. Credit: The District.</em></p>
<p>We are well on the way to a world in which many aspects of our daily lives will depend on AI systems.</p>
<p>Within a decade, machines might diagnose patients with the learned expertise of not just one doctor but thousands. They might make judicial recommendations based on vast datasets of legal decisions and complex regulations. And they will almost certainly know exactly what’s around the corner in autonomous vehicles.</p>
<p>“Machine capabilities are growing,” says Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI). “Machines will perform the tasks that we don’t want to: the mundane jobs, the dangerous jobs. And they’ll do the tasks we aren’t capable of – those involving too much data for a human to process, or where the machine is simply faster, better, cheaper.”</p>
<p>Dr Mateja Jamnik, AI expert at the Department of Computer Science and Technology, agrees: “Everything is going in the direction of augmenting human performance – helping humans, cooperating with humans, enabling humans to concentrate on the areas where humans are intrinsically better, such as strategy, creativity and empathy.”</p>
<p>Part of the attraction of AI is that future technologies will perform tasks autonomously, without humans needing to monitor activities every step of the way. In other words, machines of the future will need to think for themselves. But, although computers today outperform humans on many tasks, including learning from data and making decisions, they can still trip up on things that are really quite trivial for us.</p>
<p>Take, for instance, working out the formula for the area of a parallelogram. Humans might use a diagram to visualise how cutting off the corners and reassembling it as a rectangle simplifies the problem. Machines, however, may “use calculus or integrate a function. This works, but it’s like using a sledgehammer to crack a nut,” says Jamnik, who was recently appointed Specialist Adviser to the House of Lords Select Committee on AI.</p>
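<p>To make the contrast concrete, here is a minimal Python sketch (illustrative only: the function names and numbers are hypothetical, not drawn from Jamnik’s systems). The first route encodes the human “cut and slide” insight directly; the second grinds out the same answer by brute-force numerical integration:</p>
<pre><code>def area_heuristic(base, height):
    # The diagram insight: slice a triangle off one end of the
    # parallelogram and slide it to the other end, leaving a
    # base-by-height rectangle. The area is simply base * height.
    return base * height

def area_by_integration(base, height, steps=100_000):
    # The sledgehammer: every horizontal cross-section has width
    # `base`, so sum base * dy over the height numerically.
    dy = height / steps
    return sum(base * dy for _ in range(steps))

print(area_heuristic(6, 4))       # 24
print(area_by_integration(6, 4))  # ~24.0, summed over 100,000 tiny slices
</code></pre>
<p>Both routes agree on the answer; the point is that only the first mirrors the kind of intuitive, visual shortcut Jamnik wants machines to be able to choose for themselves.</p>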
<p>“When I was a child, I was fascinated by the beauty and elegance of mathematical solutions. I wondered how people came up with such intuitive answers. Today, I work with neuroscientists and experimental psychologists to investigate this human ability to reason and think flexibly, and to make computers do the same.”</p>
<p>Jamnik believes that AI systems that can choose so-called heuristic approaches – employing practical, often visual, approaches to problem solving – in a similar way to humans will be an essential component of human-like computers. They will be needed, for instance, so that machines can explain their workings to humans – an important part of the transparency of decision-making that we will require of AI.</p>
<p>With funding from the Engineering and Physical Sciences Research Council and the Leverhulme Trust, she is building systems that have begun to reason like humans through diagrams. Her aim now is to enable them to move flexibly between different “modalities of reasoning”, just as humans have the agility to switch between methods when problem solving.</p>
<p>Being able to model one aspect of human intelligence in computers raises the question of what other aspects would be useful – and, in fact, how ‘human-like’ we would want AI systems to be. This is what interests Professor José Hernandez-Orallo, from the Universitat Politècnica de València in Spain and Visiting Fellow at the CFI.</p>
<p>“We typically put humans as the ultimate goal of AI because we have an anthropocentric view of intelligence that places humans at the pinnacle of a monolith,” says Hernandez-Orallo. “But human intelligence is just one of many kinds. Certain human skills, such as reasoning, will be important in future systems. But perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all.</p>
<p>“I believe that future machines can be more powerful than humans, not just because they are faster but because they can have cognitive functionalities that are inherently not human.” This raises a difficulty, says Hernandez-Orallo: “How do we measure the intelligence of the systems that we build? Any definition of intelligence needs to be linked to a way of measuring it, otherwise it’s like trying to define electricity without a way of showing it.”</p>
<p>The intelligence tests we use today – such as psychometric tests or animal cognition tests – are not suitable for measuring intelligence of a new kind, he explains. Perhaps the most famous test for AI is that devised in 1950 by the Cambridge computer scientist Alan Turing. To pass the Turing Test, a computer must fool a human into believing it is human. “Turing never meant it as a test of the sort of AI that is becoming possible – apart from anything else, it’s all or nothing and cannot be used to rank AI,” says Hernandez-Orallo.</p>
<p>In his recently published book The Measure of All Minds, he argues for the development of “universal tests of intelligence” – those that measure the same skill or capability independently of the subject, whether it’s a robot, a human or an octopus.</p>
<p>His work at the CFI as part of the ‘Kinds of Intelligence’ project, led by Dr Marta Halina, is asking not only what these tests might look like but also how their measurement can be built into the development of AI. Hernandez-Orallo sees a very practical application of such tests: the future job market. “I can imagine a time when universal tests would provide a measure of what’s needed to accomplish a job, whether it’s by a human or a machine.”</p>
<p>Cave is also interested in the impact of AI on future jobs, discussing this in a <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69702.pdf">report</a> on the ethics and governance of AI recently submitted to the House of Lords Select Committee on AI on behalf of researchers at Cambridge, Oxford, Imperial College and the University of California, Berkeley. “AI systems currently remain narrow in their range of abilities by comparison with a human. But the breadth of their capacities is increasing rapidly in ways that will pose new ethical and governance challenges – as well as create new opportunities,” says Cave. “Many of these risks and benefits will be related to the impact these new capacities will have on the economy, and the labour market in particular.”</p>
<p>Hernandez-Orallo adds: “Much has been written about the jobs that will be at risk in the future. This happens every time there is a major shift in the economy. But just as some machines will do tasks that humans currently carry out, other machines will help humans do what they currently cannot – providing enhanced cognitive assistance or replacing lost functions such as memory, hearing or sight.”</p>
<p>Jamnik also sees opportunities in the age of intelligent machines: “As with any revolution, there is change. Yes, some jobs will become obsolete. But history tells us that there will be jobs appearing. These will capitalise on inherently human qualities. Others will be jobs that we can’t even conceive of – memory augmentation practitioners, data creators, data bias correctors, and so on. That’s one reason I think this is perhaps the most exciting time in the history of humanity.”</p>
<p><em>Video: AI: Humanity’s Last Invention? (https://www.youtube.com/embed/MK31E4mSbXw)</em></p>
<p><em>Inset image: read more about our AI research in the University’s research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></p>