
Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.


Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don't know when they're making mistakes. Sometimes it's even more difficult for an AI system to realise when it's making a mistake than to produce a correct result.

Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles' heel of modern AI and that a mathematical paradox shows AI's limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
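The instability the researchers describe can be illustrated with a toy sketch, which is not the paper's construction: a simple linear classifier whose decision flips under a perturbation far too small to be meaningful. The weights and inputs below are made up purely for illustration.

```python
import math

# Hypothetical weights of a tiny linear "network" (illustrative only).
w = [((-1) ** i) * (1.0 + i % 3) for i in range(100)]
norm_w = math.sqrt(sum(c * c for c in w))

# An input that sits just barely on the positive side of the decision boundary.
x = [c / norm_w * 0.011 for c in w]

def classify(v):
    """Sign of the network's output: +1 or -1."""
    score = sum(wi * vi for wi, vi in zip(w, v))
    return 1 if score > 0 else -1

# A tiny, carefully chosen perturbation pointing against the weights.
delta = [-c / norm_w * 0.012 for c in w]
x_adv = [a + b for a, b in zip(x, delta)]

print(classify(x))      # +1
print(classify(x_adv))  # -1: the decision flips
print(math.sqrt(sum(d * d for d in delta)))  # perturbation size: 0.012
```

A perturbation of size 0.012 flips the output because the input lies so close to the decision boundary, which is the kind of brittleness that makes a system unstable rather than merely inaccurate.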

The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their results are reported in the Proceedings of the National Academy of Sciences.

Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.

"Many AI systems are unstable, and it's becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles," said co-author Professor Anders Hansen from Cambridge's Department of Applied Mathematics and Theoretical Physics. "If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority."

The paradox identified by the researchers traces back to two 20th-century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.
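Turing's result about problems no algorithm can solve can be sketched with his diagonal argument. In the toy code below, `halts` stands for a hypothetical oracle claimed to decide whether a program finishes; the construction produces a program the oracle must misjudge, so no such oracle can exist.

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical
# oracle that claims to report whether calling f() ever finishes.

def paradox(halts):
    """Build a program that the claimed halting oracle must misjudge."""
    def g():
        if halts(g):        # if the oracle says g halts...
            while True:     # ...then g deliberately runs forever,
                pass
        # ...otherwise g stops immediately.
    return g

# Try the oracle that answers "never halts" for every program:
def always_no(f):
    return False

g = paradox(always_no)
g()  # g returns at once, so the oracle's verdict on g was wrong
```

Whatever answer a candidate oracle gives about its own diagonal program, the program does the opposite, which is the contradiction at the heart of the undecidability of the halting problem.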

Decades later, the mathematician Steve Smale proposed a list of 18 unsolved mathematical problems for the 21st century. The 18th problem concerned the limits of intelligence for both humans and machines.

"The paradox first identified by Turing and Gödel has now been brought forward into the world of AI by Smale and others," said co-author Dr Matthew Colbrook from the Department of Applied Mathematics and Theoretical Physics. "There are fundamental limits inherent in mathematics and, similarly, AI algorithms can't exist for certain problems."

The researchers say that, because of this paradox, there are cases where good neural networks can exist, yet an inherently trustworthy one cannot be built. "No matter how accurate your data is, you can never get the perfect information to build the required neural network," said co-author Dr Vegard Antun from the University of Oslo.

The impossibility of computing the good neural network, even though it exists, also holds regardless of the amount of training data. No matter how much data an algorithm can access, it will not produce the desired network. "This is similar to Turing's argument: there are computational problems that cannot be solved regardless of computing power and runtime," said Hansen.

The researchers say that not all AI is inherently flawed, but it's only reliable in specific areas, using specific methods. "The issue is with areas where you need a guarantee, because many AI systems are a black box," said Colbrook. "It's completely fine in some situations for an AI to make mistakes, but it needs to be honest about it. And that's not what we're seeing for many systems: there's no way of knowing when they're more confident or less confident about a decision."
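The confidence problem Colbrook describes can be made concrete with a small, purely illustrative sketch: a classifier's softmax confidence can be just as high on an input it misclassifies as on one it gets right. The logit values below are invented for illustration, not taken from any real model.

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into a probability vector."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a 3-class classifier: after a small input
# perturbation the prediction changes, but the reported confidence doesn't drop.
clean = softmax([4.0, 1.0, 0.5])       # predicts class 0 (say, correct)
perturbed = softmax([1.0, 4.2, 0.5])   # predicts class 1 (wrong)

print(max(clean), max(perturbed))  # both above 0.9: confidence tells us nothing
```

The softmax score is often read as "confidence", but as this toy case shows, it does not measure whether the prediction is trustworthy.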

"Currently, AI systems can sometimes have a touch of guesswork to them," said Hansen. "You try something, and if it doesn't work, you add more stuff, hoping it works. At some point, you'll get tired of not getting what you want, and you'll try a different method. It's important to understand the limitations of different approaches. We are at the stage where the practical successes of AI are far ahead of theory and understanding. A programme on understanding the foundations of AI computing is needed to bridge this gap."

"When 20th-century mathematicians identified different paradoxes, they didn't stop studying mathematics. They just had to find new paths, because they understood the limitations," said Colbrook. "For AI, it may be a case of changing paths or developing new ones to build systems that can solve problems in a trustworthy and transparent way, while understanding their limitations."

The next stage for the researchers is to combine approximation theory, numerical analysis and the foundations of computation to determine which neural networks can be computed by algorithms, and which can be made stable and trustworthy. Just as the paradoxes on the limitations of mathematics and computers identified by Gödel and Turing led to rich foundational theories, describing both the limitations and the possibilities of mathematics and computation, perhaps a similar foundational theory may blossom in AI.

Matthew Colbrook is a Junior Research Fellow at Trinity College, Cambridge. Anders Hansen is a Fellow at Peterhouse, Cambridge. 探花直播research was supported in part by the Royal Society.

Reference:
Matthew J. Colbrook, Vegard Antun, and Anders C. Hansen. "The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem." Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2107151119


