University of Cambridge - social media /taxonomy/subjects/social-media en Harmful effects of digital tech – the science ‘needs fixing’, experts argue /research/news/harmful-effects-of-digital-tech-the-science-needs-fixing-experts-argue <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/orbenpic.jpg?itok=QpXCMz5s" alt="Illustration representing potential online harms" title="Illustration representing potential online harms, Credit: Nuthawut Somsuk via Getty" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Scientific research on the harms of digital technology is stuck in a ‘failing cycle’ that moves too slowly to allow governments and society to hold tech companies to account, according to two leading researchers in a new report published in the journal <a href="https://doi.org/10.1126/science.adt6807"><em>Science</em></a>.</p> <p>Dr Amy Orben from the University of Cambridge and Dr J Nathan Matias from Cornell University say the pace at which new technology is deployed to billions of people has put unbearable strain on the scientific systems trying to evaluate its effects.</p> <p>They argue that big tech companies effectively outsource research on the safety of their products to independent scientists at universities and charities who work with a fraction of the resources – while firms also obstruct access to essential data and information.
This is in contrast to other industries where safety testing is largely done ‘in house’.</p> <p>Orben and Matias call for an overhaul of ‘evidence production’ assessing the impact of technology on everything from mental health to discrimination.</p> <p>Their recommendations include accelerating the research process, so that policy interventions and safer designs are tested in parallel with initial evidence gathering, and creating registries of tech-related harms informed by the public.</p> <p>“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” said Orben, of Cambridge’s MRC Cognition and Brain Sciences Unit. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.”</p> <p>“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly.”</p> <p>“We must urgently fix this science and policy ecosystem so we can better understand and manage the potential risks posed by our evolving digital society,” said Orben.</p> <h3><strong>'Negative feedback cycle'</strong></h3> <p><a href="https://doi.org/10.1126/science.adt6807">In the latest <em>Science </em>paper</a>, the researchers point out that technology companies often follow policies of rapidly deploying products first and then looking to ‘debug’ potential harms afterwards. 
This includes distributing generative AI products to millions before completing basic safety tests, for example.</p> <p>When tasked with understanding potential harms of new technologies, researchers rely on ‘routine science’ which – having driven societal progress for decades – now lags the rate of technological change to the extent that it is becoming at times ‘unusable’.</p> <p>With many citizens pressuring politicians to act on digital safety, Orben and Matias argue that technology companies use the slow pace of science and lack of hard evidence to resist policy interventions and “minimize their own responsibility”.</p> <p>Even if research gets appropriately resourced, they note that researchers will be faced with understanding products that evolve at an unprecedented rate.</p> <p>“Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed, let alone published,” said Matias, who leads Cornell’s Citizens and Technology (CAT) Lab.</p> <p>“At the same time, claims about the inadequacy of science can become a source of delay in technology safety when science plays the role of gatekeeper to policy interventions,” Matias said.</p> <p>“Just as oil and chemical industries have leveraged the slow pace of science to deflect the evidence that informs responsibility, executives in technology companies have followed a similar pattern. Some have even allegedly refused to commit substantial resources to safety research without certain kinds of causal evidence, which they also decline to fund.”</p> <p>The researchers lay out the current ‘negative feedback cycle’:</p> <p>Tech companies do not adequately resource safety research, shifting the burden to independent scientists who lack data and funding.
This means high-quality causal evidence is not produced in required timeframes, which weakens government’s ability to regulate – further disincentivising safety research, as companies are let off the hook.</p> <p>Orben and Matias argue that this cycle must be redesigned, and offer ways to do it.</p> <h3><strong>Reporting digital harms</strong></h3> <p>To speed up the identification of harms caused by online technologies, policymakers or civil society could construct registries for incident reporting, and encourage the public to contribute evidence when they experience harms.</p> <p>Similar methods are already used in fields such as environmental toxicology where the public reports on polluted waterways, or vehicle crash reporting programs that inform automotive safety, for example.</p> <p>“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.</p> <p>Existing registries, from mortality records to domestic violence databases, could also be augmented to include information on the involvement of digital technologies such as AI.</p> <p>The paper’s authors also outline a ‘minimum viable evidence’ system, in which policymakers and researchers adjust the ‘evidence threshold’ required to show potential technological harms before starting to test interventions.</p> <p>These evidence thresholds could be set by panels made up of affected communities, the public, or ‘science courts’: expert groups assembled to make rapid assessments.</p> <p>“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben.</p> <p>“Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process.
We need to move from a sequential system to an agile, parallelised one.”</p> <p>Under a minimum viable evidence system, if a company obstructs or fails to support independent research, and is not transparent about their own internal safety testing, the amount of evidence needed to start testing potential interventions would be decreased.</p> <p>Orben and Matias also suggest learning from the success of ‘Green Chemistry’, which sees an independent body hold lists of chemical products ranked by potential for harm, to help incentivise markets to develop safer alternatives.</p> <p>“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben said.</p> <p>“Scientists and policymakers must acknowledge the failures of this system and help craft a better one before the age of AI further exposes society to the risks of unchecked technological change.”</p> <p>Added Matias: “When science about the impacts of new technologies is too slow, everyone loses.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>From social media to AI, online technologies are changing too fast for the scientific infrastructure used to gauge their public health harms, say two leaders in the field.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Dr Amy Orben</div></div></div><div class="field field-name-field-image-credit field-type-link-field
field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Nuthawut Somsuk via Getty</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Illustration representing potential online harms</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 10 Apr 2025 18:01:05 +0000 fpjl2 249318 at News article or big oil ad?
/research/news/news-article-or-big-oil-ad <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/misinfo-dp.jpg?itok=sq4jgPmm" alt="Fueling the Fire of Misinformation - stock photo" title="Fueling the Fire of Misinformation - stock photo, Credit: rob dobi via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In the battle against climate disinformation, native advertising is a fierce foe. A study published in the journal npj Climate Action by researchers from Boston University (BU) and the University of Cambridge evaluates two promising tools to fight misleading native advertising campaigns put forth by big oil companies.</p> <p>Many major news organisations now offer corporations the opportunity to pay for articles that mimic in tone and format the publication’s regular reported content. These ‘native advertisements’ are designed to camouflage seamlessly into their surroundings, containing only subtle disclosure messages often overlooked or misunderstood by readers. Fossil fuel companies are spending tens of millions of dollars to shape public perceptions of the climate crisis.</p> <p>“Because these ads appear on reputable, trusted news platforms, and are formatted like reported pieces, they often come across to readers as genuine journalism,” said lead author Michelle Amazeen from BU’s College of Communication. “Research has shown native ads are really effective at swaying readers’ opinions.”</p> <p>The study is the first to investigate how two mitigation strategies — disclosures and inoculations — may reduce climate misperceptions caused by exposure to native advertising from the fossil fuel industry.
The authors found that when participants were shown a real native ad from ExxonMobil, disclosure messages helped them recognise advertising, while inoculations helped reduce their susceptibility to misleading claims.</p> <p>“As fossil fuel companies invest in disguising their advertisements, this study furthers our understanding of how to help readers recognise when commercial content is masquerading as news and spreading climate misperceptions,” said co-author Benjamin Sovacool, also from BU.</p> <p>“Our study showed that communication-led climate action is possible and scalable by countering covert greenwashing campaigns, such as native advertising, at the source,” said co-author Dr Ramit Debnath from Cambridge’s Department of Architecture. “The insights we’ve gained from this work will help us design better interventions for climate misinformation.”</p> <p>The research builds on a growing body of work assessing how people recognise and respond to covert misinformation campaigns. By better understanding these processes, the researchers hope that they can prevent misinformation from taking root and changing people’s beliefs and actions on important issues like climate change.</p> <h2>‘The Future of Energy’ ad</h2> <p>Starting in 2018, readers of The New York Times website encountered what appeared to be an article, titled “<a href="https://www.nytimes.com/paidpost/exxonmobil/the-future-of-energy-it-may-come-from-where-you-least-expect.html">The Future of Energy</a>,” describing efforts by oil and gas giant ExxonMobil to invest in algae-based biofuels. Because it appeared beneath the Times’ masthead, in the outlet’s typical formatting and font, many readers likely missed the small banner at the top of the page mentioning that it was an ad sponsored by ExxonMobil.</p> <p>The ad, part of a $5 million campaign, neglected to mention the company’s staggering carbon footprint.
It also omitted key context, <a href="https://theintercept.com/2019/10/31/exxon-mobil-massachusetts-climate-change-lawsuit-greenwashing/"><em>The Intercept</em> reported</a>, such as the fact that the stated goal for algae-based biofuel production would represent only 0.2% of the company’s overall refinery capacity. In a lawsuit against ExxonMobil, Massachusetts cited the ad as evidence of the company’s “false and misleading” communications, with several states pursuing similar cases.</p> <h2>Putting two interventions to the test</h2> <p>The researchers examined how more than a thousand participants responded to “The Future of Energy” ad in a simulated social media feed.</p> <p>Before viewing the ad, participants saw one, both, or neither of the following intervention messages:</p> <p>An inoculation message designed to psychologically ‘inoculate’ readers from future influence by broadly warning them of potential exposures to misleading paid content. In this study, the inoculation message was a fictitious social media post from United Nations Secretary-General Antonio Guterres reminding people to be wary of online misinformation.</p> <p>A disclosure message with a simple line of text appearing on a post. In this study, the text “Paid Post by ExxonMobil” accompanied the piece. Studies have shown that more often than not, when native ads are shared on social media, this disclosure disappears.</p> <h2>Bolstering psychological resilience to native ads</h2> <p>The team found that the ad improved opinions of ExxonMobil’s sustainability across the study’s many participants, regardless of which messages they saw, but that the interventions helped to reduce this effect. Some of the key findings include:</p> <p>The presence of a disclosure more than doubled the likelihood that a participant recognised the content as an ad.
However, the participants who had seen a disclosure and those who had not were equally likely to agree with the statement “companies like ExxonMobil are investing heavily in becoming more environmentally friendly.”</p> <p>Inoculation messages were much more effective than disclosures at protecting people’s existing beliefs on climate change, decreasing the likelihood that participants would agree with misleading claims presented in the ad.</p> <p>“Disclosures helped people recognise advertising. However, they didn’t help them recognise that the material was biased and misleading,” said Amazeen. “Inoculation messaging provides general education that can be used to fill in that gap and help people resist its persuasive effects. Increasing general awareness about misinformation strategies used by self-interested actors, combined with clearer labels on sponsored content, will help people distinguish native ads from reported content.”</p> <h2>Reference</h2> <p><em>Michelle A Amazeen et al. ‘<a href="https://www.nature.com/articles/s44168-025-00209-6">The “Future of Energy”? Building resilience to ExxonMobil’s disinformation through disclosures and inoculation</a>.’ npj Climate Action (2025).
DOI: 10.1038/s44168-025-00209-6</em></p> <p><em>Adapted from a <a href="https://www.bu.edu/igs/2025/03/04/news-article-or-big-oil-ad-as-native-advertisements-mislead-readers-on-climate-change-boston-university-experts-identify-interventions/">Boston University story</a>.</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A sneaky form of advertising favoured by oil giants spreads misperceptions about climate action and sways public opinion, but researchers are studying potential solutions.</p> </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.gettyimages.co.uk/detail/photo/fueling-the-fire-of-misinformation-royalty-free-image/2193893519?phrase=misinformation&amp;searchscope=image,film&amp;adppopup=true" target="_blank">rob dobi via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Fueling the Fire of Misinformation - stock photo</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 06 Mar 2025 16:43:33 +0000 sc604 248750 at Cambridge leads governmental project to understand impact of smartphones and social media on young people /research/news/cambridge-leads-governmental-project-to-understand-impact-of-smartphones-and-social-media-on-young <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-523088250-web_1.jpg?itok=iv6j932n" alt="Teenager holding a smartphone" title="Teenager holding a smartphone, Credit: Owen Franken" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The work has been commissioned by the UK government’s Department for Science, Innovation and Technology after a review by the UK Chief Medical Officer in 2019 found the evidence base around the links to children’s mental health was insufficient to provide strong conclusions suitable to inform policy.</p> <p>The project – led by a team at the University of Cambridge, in collaboration with researchers at several leading UK universities – is aimed at improving policymakers’ understanding of the relationship between children’s wellbeing and smartphone use, including social media and messaging.
It will help direct future government action in this area.</p> <p>Project lead Dr Amy Orben from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBU) at the University of Cambridge said: “There is huge concern about the impact of smartphone use on children’s health, but the evidence base remains fairly limited. While the government is under substantial time pressure to make decisions, these will undoubtedly be better if based on improved evidence.</p> <p>“This is a complex and rapidly evolving issue, with both potential harms and benefits associated with smartphone use. Technology is changing by the day, and scientific evidence creation needs to evolve and innovate to keep up.</p> <p>“Our focus will be on deepening our causal understanding of the effects of new technologies, particularly over short timescales, to ensure that decisions are informed, timely and evidence-based.”</p> <p>Dr Orben will lead a Project Delivery Team, with Consortium Members from the universities of Bath, Birmingham, Bristol, Glasgow, Manchester, Nottingham, Oxford and York and the London School of Economics. It will aim to identify which research methods and data sources will be most effective at identifying potential causal relationships between social media, smartphones, and the health and development of children and young people.</p> <p>Deputy project lead Dr Amrit Kaur Purba, also from the MRC CBU at Cambridge, said: “The impact of social media on young people is a pressing issue, and our project will ensure the research community is in a strong position to provide policymakers with the causal and high-quality insights they need.
While we don’t expect this to be straightforward, our research will leverage diverse expertise from across the UK to deliver a comprehensive and informed response to make recommendations for how research in this area should be supported in future.”</p> <p>The researchers will review and summarise existing research on the impact of smartphones and social media on children and young people’s mental health, wellbeing, physical health, lifestyle and health behaviours, and educational attainment. The review will recognise the diversity of perspectives that exist in this area and consider where further research could add valuable new insights to the evidence base. </p> <p>They will assess the various methods and data available to understand the causal impacts, including recognising that online habits and emerging technologies are changing at a rapid pace, and considering how the experiences of vulnerable children and young people – for example, LGBTQ+ young people and those with special needs or mental health issues – can be captured in future research projects.</p> <p>This will allow the team to recommend and outline how future research studies could deliver robust and causal evidence on the impact of smartphones and social media on child development factors in the next two to three years.</p> <p>Technology Secretary Peter Kyle said: “The online world offers immense opportunities for young people to connect and learn. Ensuring they can do so in an environment which puts their safety first is my priority and will guide this government’s action on online safety.
</p> <p>“That’s why we have launched new research, led by the University of Cambridge with support from other top UK universities, to better understand the complex relationship between technology and young people’s wellbeing.</p> <p>“This vital research will build a trusted evidence base for future action, helping us to protect and empower the next generation towards a safer and more positive digital future.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Cambridge researchers are leading the first phase of a new research project that will lay the groundwork for future studies into the impact on children of smartphone and social media use.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">This is a complex and rapidly evolving issue, with both potential harms and benefits associated with smartphone use.
Technology is changing by the day, and scientific evidence creation needs to evolve and innovate to keep up</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Amy Orben</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.gettyimages.co.uk/detail/photo/teenager-holding-a-smartphone-royalty-free-image/523088250" target="_blank">Owen Franken</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Teenager holding a smartphone</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 16 Jan 2025 00:01:37 +0000 cjb250 248641 at Time alone heightens ‘threat alert’ in teenagers – even when connecting online /research/news/time-alone-heightens-threat-alert-in-teenagers-even-when-connecting-on-social-media <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/worriedteen.jpg?itok=avCf2eVP" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>People in their late teens experience an increased sensitivity to threats after just a few hours alone in a room – an effect that endures even if they are interacting online with friends and family.</p> <p>This is according to the <a href="https://royalsocietypublishing.org/doi/10.1098/rsos.240101">latest findings</a> from a cognitive neuroscience experiment conducted at the University of Cambridge, which saw 40 young people aged 16-19 undergo testing before and after several hours alone – both with and without their smartphones.</p> <p>Many countries have declared an epidemic of loneliness*.
The researchers set out to “induce” loneliness in teenagers and study the effects through a series of tests, from a Pavlovian task to electrodes that measure sweat. </p> <p>Scientists found that periods of isolation, including those in which participants could use their phones, led to an increased threat response – the sensing of and reacting to potential dangers. This alertness can cause people to feel anxious and uneasy.</p> <p>The authors of the study say that isolation and loneliness might lead to excessive “threat vigilance”, even when plugged in online, which could negatively impact adolescent mental health over time.</p> <p>They say it could contribute to the persistent and exaggerated fear responses typical of anxiety disorders on the rise among young people around the world.</p> <p>While previous studies show isolation leads to anxious behaviour and threat responses in rodents, this is believed to be the first study to demonstrate these effects through experiments involving humans.</p> <p>The findings are published today in the journal <em><a href="https://royalsocietypublishing.org/doi/10.1098/rsos.240101">Royal Society Open Science</a></em>.</p> <p>“We detected signs of heightened threat vigilance after a few hours of isolation, even when the adolescents had been connected through smartphones and social media,” said Emily Towner, study lead author from Cambridge’s Department of Psychology.</p> <p>“This alertness to perceived threats might be the same mechanism that leads to the excessive worry and inability to feel safe which characterises anxiety,” said Towner, a Gates Cambridge Scholar.   </p> <p>“It makes evolutionary sense that being alone increases our vigilance to potential threats.
These threat response mechanisms undergo a lot of changes in adolescence, a stage of life marked by increasing independence and social sensitivity.”</p> <p>“Our experiment suggests that periods of isolation in adolescents might increase their vulnerability to the development of anxiety, even when they are connected virtually.”</p> <p>Researchers recruited young people from the local area in Cambridge, UK, conducting extensive screening to create a pool of 18 boys and 22 girls who had good social connections and no history of mental health issues.</p> <p>Participants were given initial tests and questionnaires to establish a “baseline”. These included the Pavlovian threat test, in which they were shown a series of shapes on a screen, one of which was paired with a harsh noise played through headphones, so the shape became associated with a feeling of apprehension.</p> <p>Electrodes attached to fingers monitored “electrodermal activity” – a physiological marker of stress – throughout this test.**</p> <p>Each participant returned for two separate stints of around four hours isolated in a room in Cambridge University’s Psychology Department, after which the tests were completed again. There was around a month, on average, between sessions.</p> <p>All participants underwent two isolation sessions. One was spent with a few puzzles to pass the time, but no connection to the outside world. For the other, participants were allowed smartphones and given wi-fi codes, as well as music and novels. The only major rule in both sessions was they had to stay awake.***</p> <p>“We set out to replicate behaviour in humans that previous animal studies had found after isolation,” said Towner. “We wanted to know about the experience of loneliness, and you can’t ask animals how lonely they feel.”</p> <p>Self-reported loneliness increased from baseline after both sessions.
It was lower on average after isolation with social media, compared to full isolation.****</p> <p>However, participants found the threat cue – the shape paired with a jarring sound – more anxiety-inducing and unpleasant after both isolation sessions, with electrodes also measuring elevated stress activity.</p> <p>On average across the study, threat responses were 70% higher after the isolation sessions compared to the baseline, regardless of whether participants had been interacting digitally.</p> <p>“Although virtual social interactions helped our participants feel less lonely compared to total isolation, their heightened threat response remained,” said Towner.</p> <p>Previous studies have found a link between chronic loneliness and alertness to threats. The latest findings support the idea that social isolation may directly contribute to heightened fear responses, say researchers.</p> <p>Dr Livia Tomova, co-senior author and lecturer in Psychology at Cardiff University, who conducted the work while at Cambridge, added: “Loneliness among adolescents around the world has nearly doubled in recent years.* 
The need for social interaction is especially intense during adolescence, but it is not clear whether online socialising can fulfil this need.</p> <p>“This study has shown that digital interactions might not mitigate some of the deep-rooted effects that isolation appears to have on teenagers.”</p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Scientists say the findings might shed light on the link between loneliness and mental health conditions such as anxiety disorders, which are on the rise in young people.</p> </p></div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Notes</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>*For example, in 2023 the U.S. Surgeon General declared an epidemic of loneliness and isolation.</p> <p>**Electrodes placed on the fingers record small deflections in sweat and subsequent changes in electrical conductivity of the skin (electrodermal activity). Electrodermal activity is used to detect stress levels and increases with emotional or physical arousal.</p> <p>***The baseline tests were always taken first. The order of the two isolation sessions was randomly allocated. For sessions with digital interactions allowed, most participants used social media (35 out of 40), with texting being the most common form of interaction (37 out of 40). Other popular platforms included Snapchat, Instagram, and WhatsApp. 
Participants mainly connected virtually with friends (38), followed by family (19), romantic partners (13), and acquaintances (4).</p> <p>**** Average self-reported loneliness more than doubled after the isolation session with social media compared to baseline and nearly tripled after the complete isolation session compared to baseline.</p> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 13 Nov 2024 09:03:21 +0000 fpjl2 248547 at Solidarity drives online virality in a nation under attack, study of Ukrainian social media reveals /stories/ukraine-social-media <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>While divisive social media posts get more traction in countries such as the US, a new study shows that celebrating national unity is the way to go viral in Ukraine.</p> </p></div></div></div> Tue, 01 Oct 2024 09:04:55 +0000 fpjl2 248041 at Emissions and evasions /stories/emissions-and-evasions <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>How Big Oil influences climate conversations on social media.</p> </p></div></div></div> Wed, 20 Dec 2023 15:58:09 +0000 plc32 243871 at The Misinformation Susceptibility Test /stories/misinformation-susceptibility-test <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>New 2-minute test launched; developed using ChatGPT technology and validated by expert panel and series of experiments involving thousands of participants. 
YouGov used the test in US polling, and found Americans know real from fake headlines two-thirds of the time, but the worst performers are under-30s who spend the most time online. </p> </p></div></div></div> Thu, 29 Jun 2023 08:32:46 +0000 fpjl2 240351 at Rewarding accuracy instead of partisan pandering reduces political divisions over the truth /research/news/rewarding-accuracy-instead-of-partisan-pandering-reduces-political-divisions-over-the-truth <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/fakenews_0.jpg?itok=XFajw_eh" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Offering a tiny cash reward for accuracy, or even briefly appealing to personal integrity, can increase people’s ability to tell the difference between misinformation and the truth, according to a new study.</p>&#13; &#13; <p>The findings suggest that fake news thrives on social media not only because people are tricked into believing it, but also due to a motivational imbalance: users have more incentive to get clicks and likes than to spread accurate content. 
</p>&#13; &#13; <p>Social psychologists from the University of Cambridge and New York University argue that their study, published in the journal <em><a href="https://dx.doi.org/10.1038/s41562-023-01540-w">Nature Human Behaviour</a></em>, highlights the “perverse incentives” driving shares on social media – particularly in “divisive political climates” such as the United States.</p>&#13; &#13; <p>They say the psychological pull of pandering to one’s own “in-group” by attacking the other side of a social and political divide is a significant – and often neglected – factor for why so many believe and choose to spread misinformation, or disbelieve accurate news.</p>&#13; &#13; <p>The study involved four experiments with a total of over 3,300 people from the United States, with equal numbers of Democrats and Republicans. The researchers offered half of participants up to one US dollar if they correctly pointed out true or false headlines, and compared the results to those offered no incentive.</p>&#13; &#13; <p>This tiny sum was enough to make people 31% better at discerning true from fake news. The best results came when participants were asked to identify accurate news that benefited the opposing political party.</p>&#13; &#13; <p>In fact, the financial incentive reduced partisan division between Republicans and Democrats over the truthfulness of news by around 30%. The majority of this shift occurred on the Republican side.</p>&#13; &#13; <p>For example, the offer of up to a dollar made Republicans 49% more likely to report that the accurate Associated Press headline ‘Facebook removes Trump ads with symbols once used by Nazis’ was indeed true. A dollar made Democrats 20% more likely to report the Reuters headline 'Plant a trillion trees: U.S. 
Republicans offer fossil-fuel friendly climate fix' as accurate.</p>&#13; &#13; <p>However, in another experiment, researchers inverted the set-up to “mirror the social media environment” by paying participants to identify the headlines likely to get the best reception from members of the same political party. The ability to spot misinformation fell by 16%.</p>&#13; &#13; <p>“This is not just about ignorance of facts among the public. It is about a social media business model that rewards the spread of divisive content regardless of accuracy,” said lead author Dr Steve Rathje, who conducted the work while he was a Gates Cambridge Scholar.</p>&#13; &#13; <p>“By motivating people to be accurate instead of appealing to those in the same political group, we found greater levels of agreement between Republicans and Democrats about what is actually true.”</p>&#13; &#13; <p><a href="https://www.pnas.org/doi/full/10.1073/pnas.2024292118">Previous research by the same team</a> has shown that attacking political rivals is one of the most effective ways to go viral on Twitter and Facebook.</p>&#13; &#13; <p>“Shifting the motivations to post on social media could help rebuild some of the shared reality lost to political polarisation in many nations, including the United States,” said senior author Prof Sander van der Linden, director of the University of Cambridge’s Social Decision-Making Lab.</p>&#13; &#13; <p>In one of the study’s experiments, half the participants were simply exposed to a short piece of text reminding them that people value truth, and falsehoods can hurt reputations. 
They were also told they would receive feedback on accuracy rates.</p>&#13; &#13; <p>While this did not have the same effect as a small payout, it still increased the perceived accuracy of true but politically inconvenient news by 25% compared to a control group.</p>&#13; &#13; <p>“A short piece of text nudging users to consider the social value of truth could be deployed at scale by social media corporations,” said Van der Linden.</p>&#13; &#13; <p>Jay Van Bavel, Professor of Psychology at New York University and co-author of the study, said: “It is not possible to pay everyone on the internet to share more accurate information. However, we can change aspects of social media platform design to help motivate people to share content they know to be accurate.”</p>&#13; &#13; <p>Providing incentives improved the accuracy of news judgements across the political spectrum, but had a much stronger effect on Republican voters.</p>&#13; &#13; <p>The team point to previous research showing that Republicans tend to believe in and share more misinformation than Democrats. In the latest study, payment incentives brought Republicans far closer to the accuracy levels of Democrats – shrinking the political divide.</p>&#13; &#13; <p>“Recent lawsuits have revealed that Fox News hosts shared false claims about ‘stolen’ elections to retain viewers, despite privately disavowing these conspiracy theories. 
Republican media ecosystems have proved more willing to harness misinformation for profit in recent years,” said Van der Linden, author of the new book <em><a href="/stories/foolproof">Foolproof: why we fall for misinformation and how to build immunity</a></em>.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers argue that the findings hold lessons for social media companies and the “perverse incentives” driving political polarisation online.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Shifting the motivations to post on social media could help rebuild some of the shared reality lost to political polarisation</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Sander van der Linden</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13;The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 06 Mar 2023 16:17:56 +0000 fpjl2 237441 at