ֱ̽ of Cambridge - digital technology /taxonomy/subjects/digital-technology en Harmful effects of digital tech – the science ‘needs fixing’, experts argue /research/news/harmful-effects-of-digital-tech-the-science-needs-fixing-experts-argue <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/orbenpic.jpg?itok=QpXCMz5s" alt="Illustration representing potential online harms" title="Illustration representing potential online harms, Credit: Nuthawut Somsuk via Getty" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Scientific research on the harms of digital technology is stuck in a ‘failing cycle’ that moves too slowly to allow governments and society to hold tech companies to account, according to two leading researchers in a new report published in the journal <a href="https://doi.org/10.1126/science.adt6807"><em>Science</em></a>.</p> <p>Dr Amy Orben from the ֱ̽ of Cambridge and Dr J Nathan Matias from Cornell ֱ̽ say the pace at which new technology is deployed to billions of people has put unbearable strain on the scientific systems trying to evaluate its effects.</p> <p>They argue that big tech companies effectively outsource research on the safety of their products to independent scientists at universities and charities who work with a fraction of the resources – while firms also obstruct access to essential data and information. This is in contrast to other industries where safety testing is largely done ‘in house’.</p> <p>Orben and Matias call for an overhaul of ‘evidence production’ assessing the impact of technology on everything from mental health to discrimination.</p> <p>Their recommendations include accelerating the research process, so that policy interventions and safer designs are tested in parallel with initial evidence gathering, and creating registries of tech-related harms informed by the public.</p> <p>“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” said Orben, of Cambridge’s MRC Cognition and Brain Sciences Unit. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.”</p> <p>“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly.”</p> <p>“We must urgently fix this science and policy ecosystem so we can better understand and manage the potential risks posed by our evolving digital society,” said Orben.</p> <h3><strong>'Negative feedback cycle'</strong></h3> <p><a href="https://doi.org/10.1126/science.adt6807">In the latest <em>Science </em>paper</a>, the researchers point out that technology companies often follow policies of rapidly deploying products first and then looking to ‘debug’ potential harms afterwards. 
This includes distributing generative AI products to millions before completing basic safety tests, for example.</p> <p>When tasked with understanding potential harms of new technologies, researchers rely on ‘routine science’ which – having driven societal progress for decades – now lags the rate of technological change to the extent that it is becoming at times ‘unusable’.</p> <p>With many citizens pressuring politicians to act on digital safety, Orben and Matias argue that technology companies use the slow pace of science and lack of hard evidence to resist policy interventions and “minimize their own responsibility”.</p> <p>Even if research gets appropriately resourced, they note that researchers will be faced with understanding products that evolve at an unprecedented rate.</p> <p>“Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed, let alone published,” said Matias, who leads Cornell’s Citizens and Technology (CAT) Lab.</p> <p>“At the same time, claims about the inadequacy of science can become a source of delay in technology safety when science plays the role of gatekeeper to policy interventions,” Matias said.</p> <p>“Just as oil and chemical industries have leveraged the slow pace of science to deflect the evidence that informs responsibility, executives in technology companies have followed a similar pattern. Some have even allegedly refused to commit substantial resources to safety research without certain kinds of causal evidence, which they also decline to fund.”</p> <p> ֱ̽researchers lay out the current ‘negative feedback cycle’:</p> <p>Tech companies do not adequately resource safety research, shifting the burden to independent scientists who lack data and funding. 
This means high-quality causal evidence is not produced in required timeframes, which weakens government’s ability to regulate – further disincentivising safety research, as companies are let off the hook.</p> <p>Orben and Matias argue that this cycle must be redesigned, and offer ways to do it.</p> <h3><strong>Reporting digital harms</strong></h3> <p>To speed up the identification of harms caused by online technologies, policymakers or civil society could construct registries for incident reporting, and encourage the public to contribute evidence when they experience harms.</p> <p>Similar methods are already used in fields such as environmental toxicology where the public reports on polluted waterways, or vehicle crash reporting programs that inform automotive safety, for example.</p> <p>“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.</p> <p>Existing registries, from mortality records to domestic violence databases, could also be augmented to include information on the involvement of digital technologies such as AI.</p> <p> ֱ̽paper’s authors also outline a ‘minimum viable evidence’ system, in which policymakers and researchers adjust the ‘evidence threshold’ required to show potential technological harms before starting to test interventions.</p> <p>These evidence thresholds could be set by panels made up of affected communities, the public, or ‘science courts’: expert groups assembled to make rapid assessments.</p> <p>“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben.</p> <p>“Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process. 
We need to move from a sequential system to an agile, parallelised one.”</p> <p>Under a minimum viable evidence system, if a company obstructs or fails to support independent research, and is not transparent about their own internal safety testing, the amount of evidence needed to start testing potential interventions would be decreased.</p> <p>Orben and Matias also suggest learning from the success of ‘Green Chemistry’, which sees an independent body hold lists of chemical products ranked by potential for harm, to help incentivise markets to develop safer alternatives.</p> <p>“ ֱ̽scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben said.</p> <p>“Scientists and policymakers must acknowledge the failures of this system and help craft a better one before the age of AI further exposes society to the risks of unchecked technological change.”</p> <p>Added Matias: “When science about the impacts of new technologies is too slow, everyone loses.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>From social media to AI, online technologies are changing too fast for the scientific infrastructure used to gauge their public health harms, say two leaders in the field.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"> ֱ̽scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Amy Orben</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Nuthawut Somsuk via Getty</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Illustration representing potential online harms</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 10 Apr 2025 18:01:05 +0000 fpjl2 249318 at Forcing UK creatives to ‘opt out’ of AI training risks stifling new talent, Cambridge experts warn /research/news/forcing-uk-creatives-to-opt-out-of-ai-training-risks-stifling-new-talent-cambridge-experts-warn <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/kyle-loftus-3ucqtxsva88-unsplash-copy.jpg?itok=uG3F4ETE" alt="Videographer in studio with a model" title="Credit: Kal Visuals - Unsplash" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽UK government should resist allowing AI companies to scrape all copyrighted works unless the holder has actively ‘opted out’, as it puts an unfair burden on up-and-coming creative talents who lack the skills and resources to meet legal requirements.</p> <p><a href="https://www.mctd.ac.uk/policy-brief-ai-copyright-productivity-uk-creative-industries/">This is according to a new report</a> from ֱ̽ of Cambridge experts in economics, policy and machine learning, who also argue the UK government should clearly state that only a human author can hold copyright – even when AI has been heavily involved.</p> <p>A collaboration between three Cambridge initiatives – the Minderoo Centre for Technology and Democracy, the Bennett Institute for Public Policy, and ai@cam – the report argues that unregulated use of generative AI will not guarantee economic growth, and risks damaging the UK’s thriving creative sector. </p> <p>If the UK adopts the <a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence#c-our-proposed-approach">proposed ‘rights reservation’ for AI data mining</a>, rather than maintaining the legal foundation that automatically safeguards copyright, it will compromise the livelihoods of many in the sector, particularly those just starting out, say researchers.</p> <p>They argue that it risks allowing artistic content produced in the UK to be scraped for endless reuse by offshore companies.</p> <p>“Going the way of an opt-out model is telling Britain’s artists, musicians, and writers that tech industry profitability is more valuable than their creations,” said Prof Gina Neff, Executive Director at the Minderoo Centre for Technology and Democracy.</p> <p>“Ambitions to strengthen the creative sector, bolster the British economy and spark innovation using GenAI in the UK can be achieved – but we will only get results that benefit all of us if we put people’s needs before tech companies.”</p> <p><strong>'Ingested' by technologies</strong></p> <p>Creative industries contribute around £124.6 billion or 5.7% to the UK’s economy, and have a deep connection to the tech industry. 
For example, the UK video games industry is the largest in Europe, and contributed £5.12 billion to the UK economy in 2019.</p> <p>While AI could lead to a new generation of creative companies and products, the researchers say that little is currently known about how AI is being adopted within these industries, and where the skills gaps lie.</p> <p>“The Government ought to commission research that engages directly with creatives, understanding where and how AI is benefiting and harming them, and use it to inform policies for supporting the sector’s workforce,” said Neil Lawrence, DeepMind Professor of Machine Learning and Chair of ai@cam.</p> <p>“Uncertainty about copyright infringement is hindering the development of Generative AI for public benefit in the UK. For AI to be trusted and widely deployed, it should not make creative work more difficult.”</p> <p>In the UK, copyright is vested in the creator automatically if it meets the legal criteria. Some AI companies have tried to exploit ‘fair dealing’ – a loophole based around use for research or reporting – but this is undermined by the commercial nature of most AI.</p> <p>Now, some AI companies are brokering licensing agreements with publishers, and the report argues this is a potential way to ensure creative industries are compensated.</p> <p>While rights of performers, from singers to actors, currently cover reproductions of live performances, AI uses composites harvested from across a performer’s oeuvre, so rights relating to specific performances are unlikely to apply, say researchers.</p> <p>Further clauses in older contracts mean performers are having their work ‘ingested’ by technologies that didn’t exist when they signed on the dotted line.</p> <p>The researchers call on the government to fully adopt the Beijing Treaty on Audio Visual Performance, which the UK signed over a decade ago but is yet to implement, as it gives performers economic rights over all reproduction, distribution and rental.</p> <p>“The current lack of clarity about the licensing and regulation of training data use is a lose-lose situation. Creative professionals aren't fairly compensated for their work being used to train AI models, while AI companies are hesitant to fully invest in the UK due to unclear legal frameworks,” said Prof Diane Coyle, the Bennett Professor of Public Policy.</p> <p>“We propose mandatory transparency requirements for AI training data and standardised licensing agreements that properly value creative works. Without these guardrails, we risk undermining our valuable creative sector in the pursuit of uncertain benefits from AI.”</p> <p><strong>'Spirit of copyright law'</strong></p> <p>The Cambridge experts also look at questions of copyright for AI-generated work, and the extent to which ‘prompting’ AI can constitute ownership. They conclude that AI cannot itself hold copyright, and the UK government should develop guidelines on compensation for artists whose work and name feature in prompts instructing AI.</p> <p>When it comes to the proposed ‘opt-out’ solution, the experts say it is not “in the spirit of copyright law” and is difficult to enforce.
Even if creators do opt out, it is not clear how that data will be identified, labelled, and compensated, or even erased.</p> <p>It may be seen as giving ‘carte blanche’ to foreign-owned and managed AI companies to benefit from British copyrighted works without a clear mechanism for creators to receive fair compensation.</p> <p>“Asking copyright reform to solve structural problems with AI is not the solution,” said Dr Ann Kristin Glenster, Senior Policy Advisor at the Minderoo Centre for Technology and lead author of the report.</p> <p>“Our research shows that the business case has yet to be made for an opt-out regime that will promote growth and innovation of the UK creative industries.</p> <p>“Devising policies that enable the UK creative industries to benefit from AI should be the Government’s priority if it wants to see growth of both its creative and tech industries,” Glenster said.</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p> ֱ̽UK government’s proposed ‘rights reservation’ model for AI data mining tells British artists, musicians, and writers that “tech industry profitability is more valuable than their creations” say leading academics.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">We will only get results that benefit all of us if we put people’s needs before tech companies</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Gina Neff</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://unsplash.com/photos/man-in-green-and-brown-camouflage-jacket-holding-black-video-camera-3UcQtXSvA88" target="_blank">Kal Visuals - Unsplash</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Thu, 20 Feb 2025 07:56:32 +0000 fpjl2 248711 at Coming AI-driven economy will sell your decisions before you take them, researchers warn /research/news/coming-ai-driven-economy-will-sell-your-decisions-before-you-take-them-researchers-warn <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aichat.jpg?itok=mafVi05H" alt="Young woman talking with AI voice virtual assistant on smartphone" title="Young woman talking with AI voice virtual assistant on smartphone, Credit: Getty/d3sign" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing ‘intentions’ in real-time to companies that can meet the need – even before we have made up our minds.</p> <p>This is according to AI ethicists from the ֱ̽ of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. 
They call this the Intention Economy.</p> <p>Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of ‘persuasive technologies’ – one hinted at in recent corporate announcements by tech giants.</p> <p>‘Anthropomorphic’ AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue.</p> <p>This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting – mimicking personalities and anticipating desired responses – to build levels of trust and understanding that allow for social manipulation on an industrial scale, say researchers.</p> <p>“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve”, said LCFI Visiting Scholar Dr Yaqub Chaudhary.</p> <p>“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions”</p> <p>“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”</p> <p>Dr Jonnie Penn, an historian of technology from Cambridge’s LCFI, said: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”</p> <p>“Unless regulated, the intention economy will treat your motivations as the new currency. 
It will be a gold rush for those who target, steer, and sell human intentions.”</p> <p>“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”</p> <p>In a new <em><a href="https://doi.org/10.1162/99608f92.21e6bbaa">Harvard Data Science Review</a></em> paper, Penn and Chaudhary write that the intention economy will be the attention economy ‘plotted in time’: profiling how user attention and communicative style connects to patterns of behaviour and the choices we end up making.</p> <p>“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.</p> <p>In an intention economy, Large Language Models or LLMs could be used to target, at low cost, a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, write the researchers.</p> <p>This information-gathering would be linked with brokered bidding networks to maximize the likelihood of achieving a given aim, such as selling a cinema trip (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”).</p> <p>This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organisations, argue Penn and Chaudhary.</p> <p>While researchers say the intention economy is currently an ‘aspiration’ for the tech industry, they track early signs of this trend through published research and the hints dropped by several major tech players.</p> <p>These include an open call for ‘data that expresses human intention… across any language, topic, and format’ in a 2023 OpenAI blogpost, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a conference the same year.</p> <p>Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released ‘Intentonomy’ research, a dataset for human intent understanding, back in 2021.</p> <p>In 2024, Apple’s new ‘App Intents’ developer framework for connecting apps to Siri (Apple’s voice-controlled personal assistant), includes protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.</p> <p>“AI agents such as Meta’s CICERO are said to achieve human level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary.</p> <p>“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”</p> <p>Penn points out that these developments are not necessarily bad, but have the potential to be destructive. 
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Conversational AI agents may develop the ability to covertly influence our intentions, creating a new commercial frontier that researchers call the “intention economy”.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Public awareness of what is coming is the key to ensuring we don’t go down the wrong path</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Jonnie Penn</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Getty/d3sign</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Young woman talking with AI voice virtual assistant on smartphone</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 30 Dec 2024 09:57:19 +0000 fpjl2 248626 at Wildlife monitoring technologies used to intimidate and spy on women, study finds /research/news/wildlife-monitoring-technologies-used-to-intimidate-and-spy-on-women-study-finds <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/2-trishant-simlai-interviewing-a-local-woman-885x428px.jpg?itok=gI1QPw6t" alt="Researcher interviewing a local woman in India" title="Researcher interviewing a local woman in India, Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Remotely operated camera traps, sound recorders and drones are increasingly being used in conservation science to monitor wildlife and natural habitats, and to keep watch on protected natural areas.</p> <p>But Cambridge researchers studying a forest in northern India have found that the technologies are being deliberately misused by local government and male villagers to keep watch on women without their consent.</p> <p>Cambridge researcher Dr Trishant Simlai spent 14 months interviewing 270 locals living around the Corbett Tiger Reserve, a national park in northern India, including many women from nearby villages.</p> <p>His report, published today in the journal <a href="https://journals.sagepub.com/doi/10.1177/26349825241283837"><em>Environment and Planning F</em></a>, reveals how forest rangers in the national park deliberately fly drones over local women to frighten them out of the forest, and stop them collecting natural resources despite it being their legal right to do so.</p> <p> ֱ̽women, who previously found sanctuary in the forest away from their male-dominated villages, told Simlai they feel watched and inhibited by camera traps, so talk and sing much more quietly. This increases the chance of surprise encounters with potentially dangerous wildlife like elephants and tigers. One woman he interviewed has since been killed in a tiger attack.</p> <p> ֱ̽study reveals a worst-case scenario of deliberate human monitoring and intimidation. But the researchers say people are being unintentionally recorded by wildlife monitoring devices without their knowledge in many other places - even national parks in the UK. </p> <p>“Nobody could have realised that camera traps put in the Indian forest to monitor mammals actually have a profoundly negative impact on the mental health of local women who use these spaces,” said Dr Trishant Simlai, a researcher on the 'Smart Forests' project in the ֱ̽ of Cambridge’s Department of Sociology and lead author of the report.</p> <p>“These findings have caused quite a stir amongst the conservation community. 
It’s very common for projects to use these technologies to monitor wildlife, but this highlights that we really need to be sure they’re not causing unintended harm,” said Professor Chris Sandbrook, Director of the University of Cambridge’s Masters in Conservation Leadership programme, who was also involved in the report.</p> <p>He added: “Surveillance technologies that are supposed to be tracking animals can easily be used to watch people instead – invading their privacy and altering the way they behave.”</p> <p>Many areas of conservation importance overlap with areas of human use. The researchers call for conservationists to think carefully about the social implications of using remote monitoring technologies – and whether less invasive methods like surveys could provide the information they need instead.</p> <p><strong><em>Intimidation and deliberate humiliation</em></strong></p> <p>The women living near India’s Corbett Tiger Reserve use the forest daily in ways that are central to their lives: from gathering firewood and herbs to sharing life’s difficulties through traditional songs.</p> <p>Domestic violence and alcoholism are widespread problems in this rural region and many women spend long hours in forest spaces to escape difficult home situations.</p> <p>The women told Simlai that new technologies, deployed under the guise of wildlife monitoring projects, are being used to intimidate and exert power over them – by monitoring them too.</p> <p>“A photograph of a woman going to the toilet in the forest – captured on a camera trap supposedly for wildlife monitoring – was circulated on local Facebook and WhatsApp groups as a means of deliberate harassment,” said Simlai.</p> <p>He added: “I discovered that local women form strong bonds while working together in the forest, and they sing while collecting firewood to deter attacks by elephants and tigers. When they see camera traps they feel inhibited because they don’t know who’s watching or listening to them – and as a result they behave differently – often being much quieter, which puts them in danger.”</p> <p>In places like northern India, the identity of local women is closely linked to their daily activities and social roles within the forest. The researchers say that understanding the various ways local women use forests is vital for effective forest management strategies.</p> <p><em><strong>Reference: </strong>Simlai, T. et al: ‘<a href="https://journals.sagepub.com/doi/10.1177/26349825241283837">The Gendered Forest: Digital Surveillance Technologies for Conservation and Gender-Environment relationships</a>.’ November 2024.
DOI:10.17863/CAM.111664</em><br />  </p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Camera traps and drones deployed by government authorities to monitor a forest in India are infringing on the privacy and rights of local women.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Nobody could have realised that camera traps put in the Indian forest to monitor mammals actually have a profoundly negative impact on the mental health of local women who use these spaces.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Trishant Simlai</div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Researcher interviewing a local woman in India</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 25 Nov 2024 00:01:44 +0000 jg533 248568 at Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones /research/news/call-for-safeguards-to-prevent-unwanted-hauntings-by-ai-chatbots-of-dead-loved-ones <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/manana-web.jpg?itok=d_MW0MpN" alt="A visualisation of one of the design scenarios highlighted in the latest paper" title="A visualisation of one of the design scenarios highlighted in the latest paper, Credit: Tomasz Hollanek" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally 'haunting' those left behind without design safety standards, according to ֱ̽ of Cambridge researchers. 
</p> <p>‘Deadbots’ or ‘Griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies are already offering these services, providing an entirely new type of “postmortem presence”.</p> <p>AI ethicists from Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> outline three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry”, to show the potential consequences of careless design in an area of AI they describe as “high risk”.</p> <p> ֱ̽research, <a href="https://link.springer.com/article/10.1007/s13347-024-00744-w">published in the journal <em>Philosophy and Technology</em></a>, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still “with you”.</p> <p>When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.</p> <p>Even those who take initial comfort from a ‘deadbot’ may get drained by daily interactions that become an “overwhelming emotional weight”, argue researchers, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service. </p> <p>“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).</p> <p>“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.</p> <p>“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. ֱ̽rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”</p> <p>Platforms offering to recreate the dead with AI for a small fee already exist, such as ‘Project December’, which started out harnessing GPT models before developing its own systems, and apps including ‘HereAfter’. Similar services have also begun to emerge in China.</p> <p>One of the potential scenarios in the new paper is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother without consent of the “data donor” (the dead grandparent). </p> <p> ֱ̽hypothetical scenario sees an adult grandchild who is initially impressed and comforted by the technology start to receive advertisements once a “premium trial” finishes. 
For example, the chatbot might suggest ordering from food delivery services in the voice and style of the deceased.</p> <p>The relative feels they have disrespected the memory of their grandmother, and wishes to have the deadbot turned off, but in a meaningful way – something the service providers haven’t considered.</p> <p>“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also from Cambridge’s LCFI.</p> <p>“Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”</p> <p>“We recommend design protocols that prevent deadbots being utilised in disrespectful ways, such as for advertising or having an active presence on social media.”</p> <p>While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they pass, they argue that a ban on deadbots based on non-consenting donors would be unfeasible.</p> <p>They suggest that design processes should involve a series of prompts for those looking to “resurrect” their loved ones, such as ‘have you ever spoken with X about how they would like to be remembered?’, so the dignity of the departed is foregrounded in deadbot development.</p> <p>Another scenario featured in the paper, an imagined company called “Paren’t”, highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.</p> <p>While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.</p> <p>The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings on content that may cause seizures, for example.</p> <p>The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hopes it will comfort their adult children and allow their grandchildren to know them.</p> <p>After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.</p> <p>“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said Hollanek.</p> <p>“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost.
ֱ̽potential psychological effect, particularly at an already difficult time, could be devastating.”</p> <p> ֱ̽researchers call for design teams to prioritise opt-out protocols that allow potential users to terminate their relationships with deadbots in ways that provide emotional closure.</p> <p>Added Nowaczyk-Basińska: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”    </p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Cambridge researchers lay out the need for design safety protocols that prevent the emerging “digital afterlife industry” causing social and psychological harm. </p> </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Tomasz Hollanek</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">A visualisation of one of the design scenarios highlighted in the latest paper</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 09 May 2024 07:06:41 +0000 fpjl2 245891 at Making the digital world a safer place /stories/improving-computer-security <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>New technology developed by Cambridge researchers and Arm to make our computers more secure is being put through its paces by tech companies in the UK and around the world. </p> </p></div></div></div> Wed, 25 May 2022 09:49:36 +0000 skbf2 232371 at Mind Over Chatter: What is the future of wellbeing? 
/research/about-research/podcasts/mind-over-chatter-what-is-the-future-of-wellbeing <div class="field field-name-field-content-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-885x432/public/research/logo-for-uni-website_2.jpeg?itok=8HCx9ezW" width="885" height="432" alt="Mind Over Chatter podcast logo" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><h2>Season 2, episode 3</h2> <p>Our wellbeing is essential to our overall quality of life. But what is wellbeing? Why is it so hard to pin down? How is it different from mental health, and what can we do to understand, measure and improve it?</p> <p>In this episode of Mind Over Chatter, we talked with psychologist and neuroscientist Dr Amy Orben, who examines how digital technologies affect adolescent psychological well-being and mental health, psychiatrist Professor Tamsin Ford, who specialises in children's mental health, and welfare economist Dr Mark Fabian, whose research focuses on how policymakers and citizens understand well-being.</p> <p>In doing so, we learnt about the negative (and positive!) effects of the pandemic, how wellbeing differs for children and adults, and the influence of ever-evolving technology on our wellbeing.</p> <p><a class="cam-primary-cta" href="https://mind-over-chatter.captivate.fm/listen">Subscribe to Mind Over Chatter</a></p> <div style="width: 100%; height: 170px; margin-bottom: 20px; border-radius: 10px; overflow:hidden;"><iframe frameborder="no" scrolling="no" seamless="" src="https://player.captivate.fm/episode/17c509e9-6c56-4a6d-868c-c5a7217b7ccd" style="width: 100%; height: 170px;"></iframe></div> <div style="width: 100%; height: 170px; margin-bottom: 20px; border-radius: 10px; overflow:hidden;"> <h2>Key points</h2> <p>03:00 - What's the difference between wellbeing and mental health?</p> <p>06:30 - Wellbeing and economics. How do we think about wellbeing outside of psychology?</p> <p>15:01 - We’ve reached the recap point</p> <p>19:04 - Can wellbeing be factored into measures of societal progress, like productivity and GDP?</p> <p>35:35 - How do we react to technological change as a society? The debate around screen time.</p> <p>37:20 - Time for another recap!</p> <p>50:05 - How is this new thinking about wellbeing going to shape our lives in the future? For individuals and for governments and policymakers?
</p> </div> </div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Mind Over Chatter: The Cambridge University Podcast</div></div></div> Thu, 27 May 2021 13:08:22 +0000 ns480 224401 at Global evidence for how EdTech can support pupils with disabilities is ‘thinly spread’, report finds /research/news/global-evidence-for-how-edtech-can-support-pupils-with-disabilities-is-thinly-spread-report-finds <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/untitled-2_4.jpg?itok=O9xlZK2j" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Despite widespread optimism that educational technology, or ‘EdTech’, can help to level the playing field for young people with disabilities, <a href="https://docs.edtechhub.org/lib/XJ42VUQG">the study</a> found a significant shortage of evidence about which innovations are best-positioned to help which children, and why, specifically in low-income contexts.</p>&#13; &#13; <p>The review also found that many teachers lack training on how to use new technology, or are reluctant to do so.</p>&#13; &#13; <p>The study was carried out for the <a href="https://edtechhub.org/">EdTech Hub</a> partnership, by researchers from the Universities of Cambridge, Glasgow and York. They conducted a detailed search for publications reporting trials or evaluations about how EdTech is being used to help primary school-age children with disabilities in low- and middle-income countries. Despite screening 20,000 documents, they found just 51 relevant papers from the past 14 years – few of which assessed any impact on children’s learning outcomes.</p>&#13; &#13; <p>Their report describes the paucity of evidence as ‘astonishing’, given the importance of educational technologies to support the learning of children with disabilities. According to the <a href="https://www.worldbank.org/en/topic/socialsustainability/brief/inclusive-education-initiative-transforming-education-for-children-with-disabilities">Inclusive Education Initiative</a>, as many as half the estimated 65 million school-age children with disabilities worldwide were out of school even before the COVID-19 pandemic, and most face ongoing, significant barriers to attending or participating in education.</p>&#13; &#13; <p>EdTech is widely seen as having the potential to reverse this trend, and numerous devices have been developed to support the education of young people with disabilities. The study itself identifies a kaleidoscopic range of devices to support low vision, sign language programmes, mobile apps which teach braille, and computer screen readers.</p>&#13; &#13; <p>It also suggests, however, that there have been very few systematic attempts to test the effectiveness of these devices. Dr Paul Lynch, from the School of Education, University of Glasgow, said: “The evidence for EdTech’s potential to support learners with disabilities is worryingly thin.
Even though we commonly hear of interesting innovations taking place across the globe, these are not being rigorously evaluated or documented.”</p>&#13; &#13; <p>Professor Nidhi Singal, from the Faculty of Education, ֱ̽ of Cambridge, said: “There is an urgent need to know which technology works best for children with disabilities, where, and in response to which specific needs. ֱ̽lack of evidence is a serious problem if we want EdTech to fulfil its potential to improve children’s access to learning, and to increase their independence and agency as they progress through school.”</p>&#13; &#13; <p> ֱ̽report identifies numerous ‘glaring omissions’ in the evaluations that researchers did manage to uncover. Around half were for devices designed to support children with hearing or vision difficulties; hardly any addressed the learning needs of children with autism, dyslexia, or physical disabilities. Most were from trials in Asia or Africa, while South America was underrepresented.</p>&#13; &#13; <p>Much of the evidence also concerned EdTech projects which Dr Gill Francis, from the ֱ̽ of York and a co-author, described as ‘in their infancy’. Most focused on whether children liked the tools, or found them easy to use, rather than whether they actually improved curriculum delivery, learner participation and outcomes. Attention was also rarely given to whether the devices could be scaled up – for example, in remote and rural areas where resources such as electricity are often lacking. Few studies appeared to have taken into account the views or experiences of parents or carers, or of learners themselves.</p>&#13; &#13; <p> ֱ̽studies reviewed also suggest that many teachers lack experience with educational technology. For example, one study in Nigeria found that teachers lacked experience of assistive technologies for students with a range of disabilities. Another, undertaken at 10 schools for the blind in Delhi, found that the uptake of modern low-vision devices was extremely limited, because teachers were unaware of their benefits.</p>&#13; &#13; <p>Despite the shortage of information overall, the study did uncover some clear evidence about how technology – particularly portable devices – is transforming opportunities for children with disabilities. Deaf and hard-of-hearing pupils, for instance, are increasingly using SMS and social media to access information about lessons and communicate with peers; while visually-impaired pupils have been able to use tablet computers, in particular, to magnify and read learning materials.</p>&#13; &#13; <p>Based on this, the report recommends that efforts to support children with disabilities in low- and middle-income countries should focus on the provision of mobile and portable devices, and that strategies should be put in place to ensure that these are sustainable and affordable for parents and schools – as cost was another concern that emerged from the studies cited.</p>&#13; &#13; <p>Critically, however, the report states that more structured evidence-gathering is urgently needed to ensure EdTech meets the UN’s stated goal to ‘ensure inclusive and equitable quality education and promote lifelong learning for all’. ֱ̽authors suggest that there is a need to adopt more robust research designs, which should address a full range of disabilities, and involve pupils, carers and teachers in the process.</p>&#13; &#13; <p>“There is no one-size-fits-all solution when working with children with disabilities,” Singal added. 
“That is why the current lack of substantive evidence is such a concern. It needs to be addressed so that teachers, parents and learners are enabled to make informed judgements about which technological interventions work, and what might work best for them.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>An 'astonishing' deficit of data about how the global boom in educational technology could help pupils with disabilities in low and middle-income countries has been highlighted in a new report.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">There is an urgent need to know which technology works best for children with disabilities, where, and in response to which specific needs</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Nidhi Singal</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 26 Mar 2021 09:34:13 +0000 tdk25 223151 at