University of Cambridge - Leverhulme Centre for the Future of Intelligence /taxonomy/affiliations/leverhulme-centre-for-the-future-of-intelligence en Coming AI-driven economy will sell your decisions before you take them, researchers warn /research/news/coming-ai-driven-economy-will-sell-your-decisions-before-you-take-them-researchers-warn <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aichat.jpg?itok=mafVi05H" alt="Young woman talking with AI voice virtual assistant on smartphone" title="Young woman talking with AI voice virtual assistant on smartphone, Credit: Getty/d3sign" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing ‘intentions’ in real-time to companies that can meet the need – even before we have made up our minds.</p> <p>This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. 
They call this the Intention Economy.</p> <p>Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of ‘persuasive technologies’ – one hinted at in recent corporate announcements by tech giants.</p> <p>‘Anthropomorphic’ AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue.</p> <p>This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting – mimicking personalities and anticipating desired responses – to build levels of trust and understanding that allow for social manipulation on an industrial scale, say researchers.</p> <p>“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve”, said LCFI Visiting Scholar Dr Yaqub Chaudhary.</p> <p>“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.”</p> <p>“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”</p> <p>Dr Jonnie Penn, an historian of technology from Cambridge’s LCFI, said: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”</p> <p>“Unless regulated, the intention economy will treat your motivations as the new currency. 
It will be a gold rush for those who target, steer, and sell human intentions.”</p> <p>“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”</p> <p>In a new <em><a href="https://doi.org/10.1162/99608f92.21e6bbaa">Harvard Data Science Review</a></em> paper, Penn and Chaudhary write that the intention economy will be the attention economy ‘plotted in time’: profiling how user attention and communicative style connect to patterns of behaviour and the choices we end up making.</p> <p>“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.</p> <p>In an intention economy, Large Language Models or LLMs could be used to target, at low cost, a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, write the researchers.</p> <p>This information-gathering would be linked with brokered bidding networks to maximize the likelihood of achieving a given aim, such as selling a cinema trip (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”).</p> <p>This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organisations, argue Penn and Chaudhary.</p> <p>While researchers say the intention economy is currently an ‘aspiration’ for the tech industry, they track early signs of this trend through published research and the hints dropped by several major tech players.</p> <p>These include an open call for ‘data that expresses human intention… across any language, topic, and format’ in a 2023 OpenAI blogpost, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a 
conference the same year.</p> <p>Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released ‘Intentonomy’ research, a dataset for human intent understanding, back in 2021.</p> <p>In 2024, Apple’s new ‘App Intents’ developer framework for connecting apps to Siri (Apple’s voice-controlled personal assistant) includes protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.</p> <p>“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary.</p> <p>“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”</p> <p>Penn points out that these developments are not necessarily bad, but have the potential to be destructive. 
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Conversational AI agents may develop the ability to covertly influence our intentions, creating a new commercial frontier that researchers call the “intention economy”.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Public awareness of what is coming is the key to ensuring we don’t go down the wrong path</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Jonnie Penn</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Getty/d3sign</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Young woman talking with AI voice virtual assistant on smartphone</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. 
Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 30 Dec 2024 09:57:19 +0000 fpjl2 248626 at Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones /research/news/call-for-safeguards-to-prevent-unwanted-hauntings-by-ai-chatbots-of-dead-loved-ones <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/manana-web.jpg?itok=d_MW0MpN" alt="A visualisation of one of the design scenarios highlighted in the latest paper" title="A visualisation of one of the design scenarios highlighted in the latest paper, Credit: Tomasz Hollanek" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally 'haunting' those left behind without design safety standards, according to University of Cambridge researchers. </p> <p>‘Deadbots’ or ‘Griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. 
Some companies are already offering these services, providing an entirely new type of “postmortem presence”.</p> <p>AI ethicists from Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> outline three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry”, to show the potential consequences of careless design in an area of AI they describe as “high risk”.</p> <p>The research, <a href="https://link.springer.com/article/10.1007/s13347-024-00744-w">published in the journal <em>Philosophy and Technology</em></a>, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still “with you”.</p> <p>When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.</p> <p>Even those who take initial comfort from a ‘deadbot’ may get drained by daily interactions that become an “overwhelming emotional weight”, argue researchers, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service. </p> <p>“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).</p> <p>“This area of AI is an ethical minefield. 
It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.</p> <p>“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”</p> <p>Platforms offering to recreate the dead with AI for a small fee already exist, such as ‘Project December’, which started out harnessing GPT models before developing its own systems, and apps including ‘HereAfter’. Similar services have also begun to emerge in China.</p> <p>One of the potential scenarios in the new paper is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother without consent of the “data donor” (the dead grandparent). </p> <p>The hypothetical scenario sees an adult grandchild who is initially impressed and comforted by the technology start to receive advertisements once a “premium trial” finishes. For example, the chatbot might suggest ordering from food delivery services in the voice and style of the deceased.</p> <p>The relative feels they have disrespected the memory of their grandmother, and wishes to have the deadbot turned off, but in a meaningful way – something the service providers haven’t considered.</p> <p>“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also from Cambridge’s LCFI.</p> <p>“Methods and even rituals for retiring deadbots in a dignified way should be considered. 
This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”</p> <p>“We recommend design protocols that prevent deadbots being utilised in disrespectful ways, such as for advertising or having an active presence on social media.”</p> <p>While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they pass, they argue that a ban on deadbots based on non-consenting donors would be unfeasible.</p> <p>They suggest that design processes should involve a series of prompts for those looking to “resurrect” their loved ones, such as ‘have you ever spoken with X about how they would like to be remembered?’, so the dignity of the departed is foregrounded in deadbot development. </p> <p>Another scenario featured in the paper, an imagined company called “Paren’t”, highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.</p> <p>While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.</p> <p>The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings on content that may cause seizures, for example.</p> <p>The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hopes it will comfort their adult children and allow their grandchildren to know them.</p> <p>After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. 
Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.</p> <p>“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said Hollanek.</p> <p>“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”</p> <p>The researchers call for design teams to prioritise opt-out protocols that allow potential users to terminate their relationships with deadbots in ways that provide emotional closure.</p> <p>Added Nowaczyk-Basińska: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” </p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Cambridge researchers lay out the need for design safety protocols that prevent the emerging “digital afterlife industry” causing social and psychological harm. 
</p> </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Tomasz Hollanek</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">A visualisation of one of the design scenarios highlighted in the latest paper</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 09 May 2024 07:06:41 +0000 fpjl2 245891 at Aim policies at ‘hardware’ to ensure AI safety, say experts /stories/hardware-ai-safety <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Chips and datacentres – the “compute” driving the AI revolution – may be the most effective targets for risk-reducing AI policies, according to a new report.</p> </p></div></div></div> Wed, 14 Feb 2024 11:28:30 +0000 fpjl2 244461 at Cambridge launches Institute for Technology and Humanity /stories/institute-technology-humanity-launch <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A major interdisciplinary initiative has been launched that aims to meet the challenges and opportunities of new technologies as they emerge, today and far into the future.</p> </p></div></div></div> Tue, 21 Nov 2023 09:13:02 +0000 fpjl2 243351 at Opinion: ֱ̽AI Summit was a promising start – but momentum must be maintained /stories/ai-summit-promising-start <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Given the frenetic pace of AI development, the international consensus demonstrated at the AI Summit is much-needed progress, says AI 
expert Dr Seán Ó hÉigeartaigh. </p> </p></div></div></div> Wed, 08 Nov 2023 13:41:59 +0000 fpjl2 243111 at Cinema has helped 'entrench' gender inequality in AI /stories/whomakesAI <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Study finds that just 8% of all depictions of AI professionals from a century of film are women – and half of these are shown as subordinate to men.</p> </p></div></div></div> Mon, 13 Feb 2023 10:17:06 +0000 fpjl2 236801 at Claims AI can boost workplace diversity are ‘spurious and dangerous’, researchers argue /research/news/claims-ai-can-boost-workplace-diversity-are-spurious-and-dangerous-researchers-argue <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/drage-ai-pic.jpg?itok=UJHmvSb5" alt="Co-author Dr Eleanor Drage testing the &#039;personality machine&#039; built by Cambridge undergraduates." title="Co-author Dr Eleanor Drage testing the &amp;#039;personality machine&amp;#039; built by Cambridge undergraduates., Credit: Eleanor Drage" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Recent years have seen the emergence of AI tools marketed as an answer to lack of diversity in the workforce, from use of chatbots and CV scrapers to line up prospective candidates, through to analysis software for video interviews. 
</p> <p>Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job applicants for the right personality type and 'culture fit'.</p> <p>However, in a new report <a href="https://link.springer.com/article/10.1007/s13347-022-00543-1">published in <em>Philosophy and Technology</em></a>, researchers from Cambridge’s Centre for Gender Studies argue these claims make some uses of AI in hiring little better than an 'automated pseudoscience' reminiscent of physiognomy or phrenology: the discredited beliefs that personality can be deduced from facial features or skull shape.</p> <p>They say it is a dangerous example of 'technosolutionism': turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.</p> <p>In fact, the researchers have worked with a team of Cambridge computer science undergraduates to debunk these new hiring techniques by building <a href="https://personal-ambiguator-frontend.vercel.app/">an AI tool modelled on the technology</a>, available online.</p> <p>The ‘Personality Machine’ demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression for a generation of job seekers vying for graduate positions.</p> <p>The Cambridge team say that use of AI to narrow candidate pools may ultimately increase uniformity rather than diversity in the workforce, as the technology is calibrated to search for the employer’s fantasy 'ideal candidate'.</p> <p>This could see those with the right training and background "win over the algorithms" by replicating behaviours the AI is programmed to identify, and taking those attitudes into the workplace, say the researchers. 
</p> <p>Additionally, as algorithms are honed using past data, they argue that candidates considered the best fit are likely to end up being those that most closely resemble the current workforce.</p> <p>“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage.</p> <p>“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”</p> <p>The researchers point out that these AI recruitment tools are often proprietary – or 'black box' – so how they work is a mystery.</p> <p>“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage. “As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”</p> <p>Despite some pushback – the EU’s proposed AI Act classifies AI-powered hiring software as 'high risk', for example – researchers say that tools made by companies such as Retorio and myInterview are deployed with little regulation, and point to surveys suggesting use of AI in hiring is snowballing.</p> <p>A 2020 study of 500 organisations across various industries in five countries found 24% of businesses had implemented AI for recruitment purposes and 56% of hiring managers planned to adopt it in the next year.</p> <p>Another poll of 334 leaders in human resources, conducted in April 2020, as the pandemic took hold, found that 86% of organisations were incorporating new virtual technology into hiring practices. 
</p> <p>“This trend was already in place as the pandemic began, and the accelerated shift to online working caused by COVID-19 is likely to see greater deployment of AI tools by HR departments in future,” said co-author Dr Kerry Mackereth, who presents <a href="https://www.gender.cam.ac.uk/technology-gender-and-intersectionality-research-project/the-good-robot-podcast">the Good Robot podcast</a> with Drage, in which the duo explore the ethics of technology. </p> <p>COVID-19 is not the only factor, according to HR operatives the researchers have interviewed. “Volume recruitment is increasingly untenable for human resources teams that are desperate for software to cut costs as well as numbers of applicants needing personal attention,” said Mackereth.</p> <p>Drage and Mackereth say many companies now use AI to analyse videos of candidates, interpreting personality by assessing regions of a face – similar to lie-detection AI – and scoring for the 'big five' personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism.</p> <p>The undergraduates behind the ‘Personality Machine’, which uses a similar technique to expose its flaws, say that while their tool may not help users beat the algorithm, it will give job seekers a flavour of the kinds of AI scrutiny they might be under – perhaps even without their knowledge.</p> <p>“All too often, the hiring process is oblique and confusing,” said Euan Ong, one of the student developers. “We want to give people a visceral demonstration of the sorts of judgements that are now being made about them automatically.”</p> <p>“These tools are trained to predict personality based on common patterns in images of people they’ve previously seen, and often end up finding spurious correlations between personality and apparently unrelated properties of the image, like brightness. We made a toy version of the sorts of models we believe are used in practice, in order to experiment with it ourselves,” Ong said. 
</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Research highlights growing market in AI-powered recruitment tools that claim to bypass human bias to remove discrimination from hiring. </p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">While companies may not be acting in bad faith, there is little accountability for how these products are built or tested</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Eleanor Drage</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Eleanor Drage</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Co-author Dr Eleanor Drage testing the &#039;personality machine&#039; built by Cambridge undergraduates.</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 10 Oct 2022 08:18:21 +0000 fpjl2 234601 at Queen's Birthday Honours 2022 /stories/Birthday-Honours-2022 <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Leaders in fields from chemistry to cancer research and computing are among the Cambridge academics recognised today.</p> </p></div></div></div> Wed, 01 Jun 2022 21:00:04 +0000 cjb250 232591 at