University of Cambridge - neural network /taxonomy/subjects/neural-network What’s going on in our brains when we plan? /research/news/whats-going-on-in-our-brains-when-we-plan <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-1742440400-crop.jpg?itok=oZfkQ3oc" alt="Digitally generated image of a young man" title="Metaverse portrait, Credit: Andriy Onufriyenko via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In pausing to think before making an important decision, we may imagine the potential outcomes of different choices we could make. While this ‘mental simulation’ is central to how we plan and make decisions in everyday life, how the brain accomplishes it is not well understood.</p> <p>An international team of scientists has now uncovered neural mechanisms used in planning. Their <a href="https://www.nature.com/articles/s41593-024-01675-7">results</a>, published in the journal <em>Nature Neuroscience</em>, suggest that an interplay between the brain’s prefrontal cortex and hippocampus allows us to imagine future outcomes to guide our decisions.</p> <p>“The prefrontal cortex acts as a ‘simulator,’ mentally testing out possible actions using a cognitive map stored in the hippocampus,” said co-author Marcelo Mattar from New York University. “This research sheds light on the neural and cognitive mechanisms of planning—a core component of human and animal intelligence. A deeper understanding of these brain mechanisms could ultimately improve the treatment of disorders affecting decision-making abilities.”</p> <p>The roles of both the prefrontal cortex—used in planning and decision-making—and the hippocampus—used in memory formation and storage—have long been established. However, their specific duties in deliberative decision-making, the kind of decision that requires us to think before acting, are less clear.</p> <p>To illuminate the neural mechanisms of planning, Mattar and his colleagues—Kristopher Jensen from University College London and <a href="https://cbl.eng.cam.ac.uk/hennequin/">Professor Guillaume Hennequin</a> from Cambridge’s Department of Engineering—developed a computational model to predict brain activity during planning. They then analysed data from both humans and rats to confirm the validity of the model—a recurrent neural network (RNN), which learns patterns based on incoming information.</p> <p>The model took into account existing knowledge of planning and added new layers of complexity, including ‘imagined actions,’ thereby capturing how decision-making involves weighing the impact of potential choices—similar to how a chess player envisions sequences of moves before committing to one. These mental simulations of potential futures, modelled as interactions between the prefrontal cortex and hippocampus, enable us to rapidly adapt to new environments, such as taking a detour after finding a road is blocked.</p>
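<p><em>The simulate-before-acting loop at the heart of the model is easier to see in code. The sketch below is a minimal stand-in, not the authors’ published model (which is a trained recurrent neural network): an agent consults an internal map of an invented 3x3 maze, imagines where candidate action sequences would lead, and only then commits to a plan. The maze layout and all names are hypothetical.</em></p>
<pre><code># A minimal sketch of planning by mental simulation, with an invented
# 3x3 maze. The paper's model is a trained recurrent neural network;
# here the "prefrontal cortex" is the rollout loop, and the role of the
# "hippocampal cognitive map" is played by the step() world model.
from itertools import product

GOAL = (2, 2)
WALLS = {((0, 1), (1, 1))}                  # hypothetical blocked passage
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def step(state, action):
    """Internal model of the maze: predict where an action would lead."""
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), 2), min(max(state[1] + dy, 0), 2))
    # An imagined move through a wall leaves the agent where it was.
    if (state, nxt) in WALLS or (nxt, state) in WALLS:
        return state
    return nxt

def imagine(state, plan):
    """Mentally run a whole action sequence without actually moving."""
    for action in plan:
        state = step(state, action)
    return state

def choose_plan(state, depth=4):
    """Simulate candidate plans and commit to one that reaches the goal."""
    for plan in product(MOVES, repeat=depth):
        if imagine(state, plan) == GOAL:
            return plan
    return None

print(choose_plan((0, 0)))                  # first imagined route to the goal
</code></pre>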
<p>The scientists validated this computational model using both behavioural and neural data. To assess the model’s ability to predict behaviour, they conducted an experiment measuring how humans navigated an online maze on a computer screen and how long they had to think before each step.</p> <p>To validate the model’s predictions about the role of the hippocampus in planning, they analysed neural recordings from rodents navigating a physical maze configured in the same way as in the human experiment. By giving a similar task to humans and rats, the researchers could draw parallels between the behavioural and neural data—an innovative aspect of this research.</p> <p>“Allowing neural networks to decide for themselves when to 'pause and think' was a great idea, and it was surprising to see that in situations where humans spend time pondering what to do next, so do these neural networks,” said Hennequin.</p> <p>The experimental results were consistent with the computational model, showing an intricate interaction between the prefrontal cortex and hippocampus. In the human experiments, the time participants spent thinking before acting in the maze matched the model’s predictions. In the experiments with laboratory rats, the animals’ neural responses while moving through the maze resembled the model’s simulations.</p> <p>“Overall, this work provides foundational knowledge on how these brain circuits enable us to think before we act in order to make better decisions,” said Mattar. “In addition, a method in which both human and animal experimental participants and RNNs were all trained to perform the same task offers an innovative and foundational way to gain insights into behaviours.”</p> <p>“This new framework will enable systematic studies of thinking at the neural level,” said Hennequin. “This will require a concerted effort from neurophysiologists and theorists, and I'm excited about the discoveries that lie ahead.”</p> <p><em><strong>Reference:</strong><br /> Kristopher T. Jensen, Guillaume Hennequin &amp; Marcelo G. Mattar. ‘<a href="https://www.nature.com/articles/s41593-024-01675-7">A recurrent network model of planning explains hippocampal replay and human behavior</a>.’ Nature Neuroscience (2024).
DOI: 10.1038/s41593-024-01675-7</em></p> <p><em>Adapted from an <a href="https://www.nyu.edu/about/news-publications/news/2024/june/what-s-going-on-in-our-brains-when-we-plan-.html">NYU press release</a>.</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Study uncovers how the brain simulates possible future actions by drawing from our stored memories.</p> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.gettyimages.co.uk/detail/photo/metaverse-portrait-royalty-free-image/1742440400?phrase=lateral thinking&amp;adppopup=true" target="_blank">Andriy Onufriyenko via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Metaverse portrait</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 11 Jun 2024 09:46:21 +0000 Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality /research/news/machine-learning-gives-users-superhuman-ability-to-open-and-control-tools-in-virtual-reality <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/screenshot-2023-11-07-163538.jpg?itok=DJaBykvi" alt="Modelling a sailboat in virtual reality."
title="Modelling a sailboat in virtual reality , Credit: ֱ̽ of Cambridge" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽researchers, from the ֱ̽ of Cambridge, used machine learning to develop ‘HotGestures’ – analogous to the hot keys used in many desktop applications.</p>&#13; &#13; <p>HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.</p>&#13; &#13; <p> ֱ̽idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a ‘superhuman’ ability has been made possible. ֱ̽<a href="https://ieeexplore.ieee.org/document/10269004">results</a> are reported in the journal <em>IEEE Transactions on Visualization and Computer Graphics</em>.</p>&#13; &#13; <p>Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. “Users gain some qualities when using VR, but very few people want to use it for an extended period of time,” said <a href="https://pokristensson.com/">Professor Per Ola Kristensson</a> from Cambridge’s Department of Engineering, who led the research. “Beyond the visual fatigue and ergonomic issues, VR isn’t really offering anything you can’t get in the real world.”</p>&#13; &#13; <p>Most users of desktop software will be familiar with the concept of hot keys – command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts omit the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.</p>&#13; &#13; <p>“We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality – something that wouldn’t rely on the user having a shortcut in their head already,” said Kristensson, who is also co-Director of the <a href="https://www.chia.cam.ac.uk/">Centre for Human-Inspired Artificial Intelligence</a>.</p>&#13; &#13; <p>Instead of hot keys, Kristensson and his colleagues developed ‘HotGestures’, where users perform a gesture with their hand to open and control the tool they need in 3D virtual reality environments.</p>&#13; &#13; <p>For example, performing a cutting motion opens the scissor tool, and the spray motion opens the spray can tool. There is no need for the user to open a menu to find the tool they need, or to remember a specific shortcut. Users can seamlessly switch between different tools by performing different gestures during a task, without having to pause their work to browse a menu or to press a button on a controller or keyboard.</p>&#13; &#13; <p>“We all communicate using our hands in the real world, so it made sense to extend this form of communication to the virtual world,” said Kristensson.</p>&#13; &#13; <p>For the study, the researchers built a neural network gesture recognition system that can recognise gestures by performing predictions on an incoming hand joint data stream. ֱ̽system was built to recognise ten different gestures associated with building 3D models: pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate and delete.</p>&#13; &#13; <p> ֱ̽team carried out two small studies where participants used HotGestures, menu commands or a combination. 
<p>The team carried out two small studies where participants used HotGestures, menu commands or a combination of the two. The gesture-based technique provided fast and effective shortcuts for tool selection and usage. Participants found HotGestures to be distinctive, fast and easy to use, while also complementing conventional menu-based interaction. The researchers designed the system so that there were no false activations – it could correctly distinguish a command from normal hand movement. Overall, the gesture-based system was faster than a menu-based system.</p> <p>“There is no VR system currently available that can do this,” said Kristensson. “If using VR is just like using a keyboard and a mouse, then what’s the point of using it? It needs to give you almost superhuman powers that you can’t get elsewhere.”</p> <p>The researchers have made the source code and dataset publicly available so that designers of VR applications can incorporate it into their products.</p> <p>“We want this to be a standard way of interacting with VR,” said Kristensson. “We’ve had the tired old metaphor of the filing cabinet for decades. We need new ways of interacting with technology, and we think this is a step in that direction. When done right, VR can be like magic.”</p> <p>The research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).</p> <p><em><strong>Reference:</strong><br /> Zhaomou Song, John J Dudley and Per Ola Kristensson. ‘<a href="https://ieeexplore.ieee.org/document/10269004">HotGestures: Complementing Command Selection and Use with Delimiter-Free Gesture-Based Shortcuts in Virtual Reality</a>.’ IEEE Transactions on Visualization and Computer Graphics (2023). DOI: 10.1109/TVCG.2023.3320257</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user’s hand.
</p> </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"> We need new ways of interacting with technology, and we think this is a step in that direction</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Per Ola Kristensson</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-215161" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/hotgestures-give-users-superhuman-ability-to-open-and-control-tools-in-virtual-reality">HotGestures give users ‘superhuman’ ability to open and control tools in virtual reality</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/3kNFvhU5ntU?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">University of Cambridge</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Modelling a sailboat in virtual reality</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 08 Nov 2023 07:44:16 +0000 Fitness levels accurately predicted using wearable devices – no exercise required /research/news/fitness-levels-can-be-accurately-predicted-using-wearable-devices-no-exercise-required <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/fitness-monitor.jpg?itok=wvdgtpK6" alt="Woman checking her smart watch and mobile phone after run" title="Woman checking her smart watch and mobile phone after run, Credit: Oscar Wong via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Normally, tests to accurately measure VO2max – a key measurement of overall fitness and an important predictor of heart disease and mortality risk – require expensive laboratory equipment and are mostly limited to elite athletes. The new method uses machine learning to predict VO2max – the capacity of the body to carry out aerobic work – during everyday activity, without the need for contextual information such as GPS measurements.</p> <p>In what is by far the largest study of its kind, the researchers gathered activity data from more than 11,000 participants in the Fenland Study using wearable sensors, with a subset of participants tested again seven years later. The researchers used the data to develop a model to predict VO2max, which was then validated against a third group that carried out a standard lab-based exercise test. The model showed a high degree of accuracy compared to lab-based tests, and outperformed other approaches.</p> <p>Some smartwatches and fitness monitors currently on the market claim to provide an estimate of VO2max, but since the algorithms powering these predictions aren’t published and are subject to change at any time, it’s unclear whether the predictions are accurate, or whether an exercise regime is having any effect on an individual’s VO2max over time.</p> <p>The Cambridge-developed model is robust, transparent and provides accurate predictions based on heart rate and accelerometer data only. Since the model can also detect fitness changes over time, it could be useful in estimating fitness levels for entire populations and identifying the effects of lifestyle trends. <a href="https://www.nature.com/articles/s41746-022-00719-1">The results are reported in the journal <em>npj Digital Medicine</em></a>.</p> <p>A measurement of VO2max is considered the ‘gold standard’ of fitness tests. Professional athletes, for example, test their VO2max by measuring their oxygen consumption while they exercise to the point of exhaustion.
There are other ways of measuring fitness in the laboratory, such as heart rate response to exercise tests, but these require equipment like a treadmill or exercise bike. Additionally, strenuous exercise can be a risk to some individuals.</p> <p>“VO2max isn’t the only measurement of fitness, but it’s an important one for endurance, and is a strong predictor of diabetes, heart disease, and other mortality risks,” said co-author Dr Soren Brage from Cambridge’s Medical Research Council (MRC) Epidemiology Unit. “However, since most VO2max tests are done on people who are reasonably fit, it’s hard to get measurements from those who are not as fit and might be at risk of cardiovascular disease.”</p> <p>“We wanted to know whether it was possible to accurately predict VO2max using data from a wearable device, so that there would be no need for an exercise test,” said co-lead author Dr Dimitris Spathis from Cambridge’s Department of Computer Science and Technology. “Our central question was whether wearable devices can measure fitness in the wild. Most wearables provide metrics like heart rate, steps or sleeping time, which are proxies for health, but aren’t directly linked to health outcomes.”</p> <p>The study was a collaboration between the two departments: the team from the MRC Epidemiology Unit provided expertise in population health and cardiorespiratory fitness and data from the Fenland Study – a long-running public health study in the East of England – while the team from the Department of Computer Science and Technology provided expertise in machine learning and artificial intelligence for mobile and wearable data.</p> <p>Participants in the study wore wearable devices continuously for six days. The sensors gathered 60 values per second, resulting in an enormous amount of data before processing. “We had to design an algorithm pipeline and appropriate models that could compress this huge amount of data and use it to make an accurate prediction,” said Spathis. “The free-living nature of the data makes this prediction challenging because we’re trying to predict a high-level outcome (fitness) with noisy low-level data (wearable sensors).”</p> <p>The researchers used an AI model known as a deep neural network to process and extract meaningful information from the raw sensor data and make predictions of VO2max from it. Beyond predictions, the trained models can be used to identify sub-populations in particular need of intervention related to fitness.</p>
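<p><em>To make the shape of this prediction problem concrete, the sketch below compresses a long, free-living sensor recording into a few summary features per participant and fits a simple least-squares model on synthetic data. It is illustrative only, not the study’s open-sourced pipeline, which trains a deep neural network on the raw streams; the cohort, features and coefficients are all invented.</em></p>
<pre><code># Illustrative sketch of the prediction task, not the open-sourced
# pipeline: reduce each participant's week of heart-rate and movement
# data to summary features, then fit a model against "lab" VO2max. The
# study trains a deep neural network on the raw 60-values-per-second
# streams; the cohort and coefficients below are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def summarise(heart_rate, accel):
    """Compress a long free-living recording into a feature vector."""
    return np.array([heart_rate.mean(), heart_rate.std(),
                     np.abs(accel).mean(), np.abs(accel).std()])

# Synthetic cohort: 500 participants with long sensor recordings.
X = np.stack([summarise(rng.normal(70, 10, 50_000), rng.normal(0, 1, 50_000))
              for _ in range(500)])
true_w = np.array([-0.3, 0.5, 8.0, 2.0])        # invented relationship
y = X @ true_w + 20 + rng.normal(0, 1.5, 500)   # simulated lab VO2max

# A least-squares fit stands in for the trained deep network.
Xb = np.hstack([X, np.ones((500, 1))])          # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
print("agreement with lab VO2max:", round(float(np.corrcoef(pred, y)[0, 1]), 2))
</code></pre>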
<p>The baseline data from 11,059 participants in the Fenland Study was compared with follow-up data from seven years later, taken from a subset of 2,675 of the original participants. A third group of 181 participants from the UK Biobank Validation Study underwent lab-based VO2max testing to validate the accuracy of the algorithm. The machine learning model had strong agreement with the measured VO2max scores at both baseline (82% agreement) and follow-up testing (72% agreement).</p> <p>“This study is a perfect demonstration of how we can leverage expertise across epidemiology, public health, machine learning and signal processing,” said co-lead author Dr Ignacio Perez-Pozuelo.</p> <p>The researchers say that their results demonstrate how wearables can accurately measure fitness, but transparency needs to be improved if measurements from commercially available wearables are to be trusted.</p> <p>“It’s true in principle that many fitness monitors and smartwatches provide a measurement of VO2max, but it’s very difficult to assess the validity of those claims,” said Brage. “The models aren’t usually published, and the algorithms can change on a regular basis, making it difficult for people to determine if their fitness has actually improved or if it’s just being estimated by a different algorithm.”</p> <p>“Everything on your smartwatch related to health and fitness is an estimate,” said Spathis. “We’re transparent about our modelling and we did it at scale. We show that we can achieve better results with the combination of noisy data and traditional biomarkers. Also, all our algorithms and models are open-sourced and everyone can use them.”</p> <p>“We’ve shown that you don’t need an expensive test in a lab to get a real measurement of fitness – the wearables we use every day can be just as powerful, if they have the right algorithm behind them,” said senior author Professor Cecilia Mascolo from the Department of Computer Science and Technology. “Cardio-fitness is such an important health marker, but until now we did not have the means to measure it at scale. These findings could have significant implications for population health policies, so we can move beyond weaker health proxies such as the Body Mass Index (BMI).”</p> <p>The research was supported in part by Jesus College, Cambridge and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Cecilia Mascolo is a Fellow of Jesus College, Cambridge.</p> <p><em><strong>Reference:</strong><br /> Dimitris Spathis et al. ‘<a href="https://www.nature.com/articles/s41746-022-00719-1">Longitudinal cardio-respiratory fitness prediction through wearables in free-living environments</a>.’ npj Digital Medicine (2022).
DOI: 10.1038/s41746-022-00719-1</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Cambridge researchers have developed a method for measuring overall fitness accurately on wearable devices – and more robustly than current consumer smartwatches and fitness monitors – without the wearer needing to exercise.</p> </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">You don’t need an expensive test in a lab to get a real measurement of fitness – the wearables we use every day can be just as powerful, if they have the right algorithm behind them</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Cecilia Mascolo</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.gettyimages.co.uk/detail/photo/woman-checking-her-smart-watch-and-mobile-phone-royalty-free-image/1257794436?phrase=fitness monitor&amp;adppopup=true" target="_blank">Oscar Wong via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Woman checking her smart watch and mobile phone after run</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 01 Dec 2022 10:00:18 +0000 Mathematical paradox demonstrates the limits of AI /research/news/mathematical-paradox-demonstrates-the-limits-of-ai <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/datawave.jpg?itok=vOvnoWrF" alt="A glowing particle and binary wave pattern on dark background."
title="Binary data wave, Credit: Yuichiro Chino" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.</p> <p>Researchers from the ֱ̽ of Cambridge and the ֱ̽ of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. ֱ̽researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.</p> <p> ֱ̽researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their <a href="https://www.pnas.org/doi/10.1073/pnas.2107151119">results</a> are reported in the <em>Proceedings of the National Academy of Sciences</em>.</p> <p>Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and <a href="https://www.nature.com/articles/d41586-019-03013-5">easy to fool</a>.</p> <p>“Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”</p> <p> ֱ̽paradox identified by the researchers traces back to two 20th century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.</p> <p>Decades later, the mathematician Steve Smale proposed a list of 18 unsolved mathematical problems for the 21st century. ֱ̽18th problem concerned the limits of intelligence for both humans and machines.</p> <p>“ ֱ̽paradox first identified by Turing and Gödel has now been brought forward into the world of AI by Smale and others,” said co-author Dr Matthew Colbrook from the Department of Applied Mathematics and Theoretical Physics. “There are fundamental limits inherent in mathematics and, similarly, AI algorithms can’t exist for certain problems.”</p> <p> ֱ̽researchers say that, because of this paradox, there are cases where good neural networks can exist, yet an inherently trustworthy one cannot be built. 
<p>The researchers say that, because of this paradox, there are cases where good neural networks can exist, yet an inherently trustworthy one cannot be built. “No matter how accurate your data is, you can never get the perfect information to build the required neural network,” said co-author Dr Vegard Antun from the University of Oslo.</p> <p>The impossibility of computing the good neural network that exists also holds regardless of the amount of training data. No matter how much data an algorithm can access, it will not produce the desired network. “This is similar to Turing’s argument: there are computational problems that cannot be solved regardless of computing power and runtime,” said Hansen.</p> <p>The researchers say that not all AI is inherently flawed, but it’s only reliable in specific areas, using specific methods. “The issue is with areas where you need a guarantee, because many AI systems are a black box,” said Colbrook. “It’s completely fine in some situations for an AI to make mistakes, but it needs to be honest about it. And that’s not what we’re seeing for many systems – there’s no way of knowing when they’re more confident or less confident about a decision.”</p> <p>“Currently, AI systems can sometimes have a touch of guesswork to them,” said Hansen. “You try something, and if it doesn’t work, you add more stuff, hoping it works. At some point, you’ll get tired of not getting what you want, and you’ll try a different method. It’s important to understand the limitations of different approaches. We are at the stage where the practical successes of AI are far ahead of theory and understanding. A program on understanding the foundations of AI computing is needed to bridge this gap.”</p> <p>“When 20th-century mathematicians identified different paradoxes, they didn’t stop studying mathematics. They just had to find new paths, because they understood the limitations,” said Colbrook. “For AI, it may be a case of changing paths or developing new ones to build systems that can solve problems in a trustworthy and transparent way, while understanding their limitations.”</p> <p>The next stage for the researchers is to combine approximation theory, numerical analysis and foundations of computations to determine which neural networks can be computed by algorithms, and which can be made stable and trustworthy. Just as the paradoxes on the limitations of mathematics and computers identified by Gödel and Turing led to rich foundation theories — describing both the limitations and the possibilities of mathematics and computations — perhaps a similar foundational theory may blossom in AI.</p> <p>Matthew Colbrook is a Junior Research Fellow at Trinity College, Cambridge. Anders Hansen is a Fellow at Peterhouse, Cambridge. The research was supported in part by the Royal Society.</p> <p><em><strong>Reference:</strong><br /> Matthew J Colbrook, Vegard Antun, and Anders C Hansen. ‘<a href="https://www.pnas.org/doi/10.1073/pnas.2107151119">The difficulty of computing stable and accurate neural networks – On the barriers of deep learning and Smale’s 18th problem</a>.’ Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2107151119</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not.
According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.</p> </div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">There are fundamental limits inherent in mathematics and, similarly, AI algorithms can’t exist for certain problems</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Matthew Colbrook</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Yuichiro Chino</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Binary data wave</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 17 Mar 2022 16:05:06 +0000