University of Cambridge - Institute for Technology and Humanity /taxonomy/affiliations/institute-for-technology-and-humanity en Opinion: Humans should be at the heart of AI /stories/anna-korhonen-ai-and-humans <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>With the right development and application, AI could become a transformative force for good. What's missing in current technologies is human insight, says Anna Korhonen.</p> </p></div></div></div> Thu, 03 Apr 2025 16:48:27 +0000 Coming AI-driven economy will sell your decisions before you take them, researchers warn /research/news/coming-ai-driven-economy-will-sell-your-decisions-before-you-take-them-researchers-warn <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aichat.jpg?itok=mafVi05H" alt="Young woman talking with AI voice virtual assistant on smartphone" title="Young woman talking with AI voice virtual assistant on smartphone, Credit: Getty/d3sign" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing ‘intentions’ in real-time to companies that can meet the need – even before we have made up our minds.</p> <p>This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates.
They call this the Intention Economy.</p> <p>Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of ‘persuasive technologies’ – one hinted at in recent corporate announcements by tech giants.</p> <p>‘Anthropomorphic’ AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue.</p> <p>This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting – mimicking personalities and anticipating desired responses – to build levels of trust and understanding that allow for social manipulation on an industrial scale, say researchers.</p> <p>“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” said LCFI Visiting Scholar Dr Yaqub Chaudhary.</p> <p>“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.”</p> <p>“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”</p> <p>Dr Jonnie Penn, an historian of technology from Cambridge’s LCFI, said: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”</p> <p>“Unless regulated, the intention economy will treat your motivations as the new currency.
It will be a gold rush for those who target, steer, and sell human intentions.”</p> <p>“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”</p> <p>In a new <em><a href="https://doi.org/10.1162/99608f92.21e6bbaa">Harvard Data Science Review</a></em> paper, Penn and Chaudhary write that the intention economy will be the attention economy ‘plotted in time’: profiling how user attention and communicative style connect to patterns of behaviour and the choices we end up making.</p> <p>“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.</p> <p>In an intention economy, Large Language Models (LLMs) could be used to target, at low cost, a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, write the researchers.</p> <p>This information-gathering would be linked with brokered bidding networks to maximise the likelihood of achieving a given aim, such as selling a cinema trip (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”).</p> <p>This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organisations, argue Penn and Chaudhary.</p> <p>While researchers say the intention economy is currently an ‘aspiration’ for the tech industry, they track early signs of this trend through published research and the hints dropped by several major tech players.</p> <p>These include an open call for ‘data that expresses human intention… across any language, topic, and format’ in a 2023 OpenAI blogpost, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a
conference the same year.</p> <p>Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released ‘Intentonomy’ research, a dataset for human intent understanding, back in 2021.</p> <p>In 2024, Apple’s new ‘App Intents’ developer framework for connecting apps to Siri (Apple’s voice-controlled personal assistant) included protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.</p> <p>“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary.</p> <p>“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”</p> <p>Penn points out that these developments are not necessarily bad, but have the potential to be destructive.
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Conversational AI agents may develop the ability to covertly influence our intentions, creating a new commercial frontier that researchers call the “intention economy”.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Public awareness of what is coming is the key to ensuring we don’t go down the wrong path</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Jonnie Penn</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Getty/d3sign</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Young woman talking with AI voice virtual assistant on smartphone</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 30 Dec 2024 09:57:19 +0000 Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones /research/news/call-for-safeguards-to-prevent-unwanted-hauntings-by-ai-chatbots-of-dead-loved-ones <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/manana-web.jpg?itok=d_MW0MpN" alt="A visualisation of one of the design scenarios highlighted in the latest paper" title="A visualisation of one of the design scenarios highlighted in the latest paper, Credit: Tomasz Hollanek" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally ‘haunting’ those left behind without design safety standards, according to University of Cambridge researchers.</p> <p>‘Deadbots’ or ‘Griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind.
Some companies are already offering these services, providing an entirely new type of “postmortem presence”.</p> <p>AI ethicists from Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> outline three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry”, to show the potential consequences of careless design in an area of AI they describe as “high risk”.</p> <p>The research, <a href="https://link.springer.com/article/10.1007/s13347-024-00744-w">published in the journal <em>Philosophy and Technology</em></a>, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still “with you”.</p> <p>When the living sign up to be virtually re-created after they die, the resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.</p> <p>Even those who take initial comfort from a ‘deadbot’ may get drained by daily interactions that become an “overwhelming emotional weight”, argue researchers, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.</p> <p>“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).</p> <p>“This area of AI is an ethical minefield.
It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.</p> <p>“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”</p> <p>Platforms offering to recreate the dead with AI for a small fee already exist, such as ‘Project December’, which started out harnessing GPT models before developing its own systems, and apps including ‘HereAfter’. Similar services have also begun to emerge in China.</p> <p>One of the potential scenarios in the new paper is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother without consent of the “data donor” (the dead grandparent).</p> <p>The hypothetical scenario sees an adult grandchild who is initially impressed and comforted by the technology start to receive advertisements once a “premium trial” finishes. For example, the chatbot might suggest ordering from food delivery services in the voice and style of the deceased.</p> <p>The relative feels they have disrespected the memory of their grandmother, and wishes to have the deadbot turned off, but in a meaningful way – something the service providers haven’t considered.</p> <p>“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also from Cambridge’s LCFI.</p> <p>“Methods and even rituals for retiring deadbots in a dignified way should be considered.
This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”</p> <p>“We recommend design protocols that prevent deadbots being utilised in disrespectful ways, such as for advertising or having an active presence on social media.”</p> <p>While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they pass, they argue that a ban on deadbots based on non-consenting donors would be unfeasible.</p> <p>They suggest that design processes should involve a series of prompts for those looking to “resurrect” their loved ones, such as ‘have you ever spoken with X about how they would like to be remembered?’, so the dignity of the departed is foregrounded in deadbot development.</p> <p>Another scenario featured in the paper, an imagined company called “Paren’t”, highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.</p> <p>While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.</p> <p>The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings on content that may cause seizures, for example.</p> <p>The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hopes it will comfort their adult children and allow their grandchildren to know them.</p> <p>After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent.
Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.</p> <p>“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said Hollanek.</p> <p>“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”</p> <p>The researchers call for design teams to prioritise opt-out protocols that allow potential users to terminate their relationships with deadbots in ways that provide emotional closure.</p> <p>Added Nowaczyk-Basińska: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Cambridge researchers lay out the need for design safety protocols that prevent the emerging “digital afterlife industry” causing social and psychological harm.
</p> </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Tomasz Hollanek</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">A visualisation of one of the design scenarios highlighted in the latest paper</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 09 May 2024 07:06:41 +0000