New datasets will train AI models to think like scientists

What can exploding stars teach us about how blood flows through an artery? Or swimming bacteria about how the ocean’s layers mix? A collaboration of researchers, including from the University of Cambridge, has reached a milestone toward training artificial intelligence models to find and use transferable knowledge between fields to drive scientific discovery.

[Image: A mosaic of simulations included in the Well collection of datasets. Credit: Alex Meng, Aaron Watters and the Well Collaboration]

The initiative, called Polymathic AI (https://polymathic-ai.org/), uses technology like that powering large language models such as OpenAI’s ChatGPT or Google’s Gemini. But instead of ingesting text, the project’s models learn using scientific datasets from across astrophysics, biology, acoustics, chemistry, fluid dynamics and more, essentially giving the models cross-disciplinary scientific knowledge.

“These datasets are by far the most diverse large-scale collections of high-quality data for machine learning training ever assembled for these fields,” said team member Michael McCabe from the Flatiron Institute in New York City. “Curating these datasets is a critical step in creating multidisciplinary AI models that will enable new discoveries about our universe.”

On 2 December, the Polymathic AI team released two of its open-source training dataset collections to the public, a colossal 115 terabytes drawn from dozens of sources, for the scientific community to use to train AI models and enable new scientific discoveries. For comparison, GPT-3 was trained on 45 terabytes of uncompressed, unformatted text, which ended up being around 0.5 terabytes after filtering.

The full datasets are available to download for free on HuggingFace (https://huggingface.co/), a platform hosting AI models and datasets. The Polymathic AI team provides further information about the datasets in two papers (https://nips.cc/virtual/2024/poster/97882 and https://nips.cc/virtual/2024/poster/97791) accepted for presentation at the NeurIPS machine learning conference (https://neurips.cc/), to be held later this month in Vancouver, Canada.
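For readers who want to explore the releases, the sketch below shows one way a dataset hosted on HuggingFace can be opened with the widely used `datasets` Python library. The repository ID is hypothetical (the actual names are listed on the Polymathic AI pages on HuggingFace), and streaming is used because materialising terabyte-scale collections on local disk is rarely practical.

```python
# A minimal sketch, assuming the standard HuggingFace `datasets` library.
from datasets import load_dataset

# Stream rather than download: the released collections total ~115 TB.
ds = load_dataset(
    "polymathic-ai/example-simulation",  # hypothetical repository ID
    split="train",
    streaming=True,
)

# Inspect a handful of records to see what fields a simulation exposes.
for record in ds.take(5):
    print(sorted(record.keys()))
```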
“Just as LLMs such as ChatGPT learn to use common grammatical structure across languages, these new scientific foundation models might reveal deep connections across disciplines that we’ve never noticed before,” said Cambridge team lead Dr Miles Cranmer (https://astroautomata.com/) from Cambridge’s Institute of Astronomy. “We might uncover patterns that no human can see, simply because no one has ever had both this breadth of scientific knowledge and the ability to compress it into a single framework.”

AI tools such as machine learning are increasingly common in scientific research, and were recognised in two of this year’s Nobel Prizes. Still, such tools are typically purpose-built for a specific application and trained using data from that field. The Polymathic AI project instead aims to develop models that are truly polymathic, like people whose expert knowledge spans multiple areas. The project’s team reflects that breadth, bringing together physicists, astrophysicists, mathematicians, computer scientists and neuroscientists.

The first of the two new training dataset collections focuses on astrophysics. Dubbed the Multimodal Universe, the dataset contains hundreds of millions of astronomical observations and measurements, such as portraits of galaxies taken by NASA’s James Webb Space Telescope and measurements of our galaxy’s stars made by the European Space Agency’s Gaia spacecraft.

The other collection, called the Well, comprises over 15 terabytes of data from 16 diverse datasets. These contain numerical simulations of biological systems, fluid dynamics, acoustic scattering, supernova explosions and other complicated processes. Cambridge researchers played a major role in developing both dataset collections, working alongside the rest of the Polymathic AI team and other international collaborators.

While these diverse datasets may seem disconnected at first, they all require the modelling of mathematical equations called partial differential equations. Such equations pop up in problems related to everything from quantum mechanics to embryo development and can be incredibly difficult to solve, even for supercomputers: classical solvers must advance a solution through many small numerical steps, as the sketch below illustrates. One of the goals of the Well is to enable AI models to churn out approximate solutions to these equations quickly and accurately.
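To make that concrete, here is a generic, self-contained illustration (not code from the Well itself) of what a numerical PDE simulation does: stepping the one-dimensional heat equation du/dt = α d²u/dx² forward in time with finite differences. A model trained on many such trajectories would aim to approximate the outcome of thousands of these steps at a fraction of the cost.

```python
# A generic finite-difference solver for the 1D heat equation,
#   du/dt = alpha * d^2u/dx^2,
# included only to illustrate the kind of computation the Well captures.
import numpy as np

alpha = 0.01                         # thermal diffusivity
nx, nt = 101, 500                    # spatial grid points, time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha             # below the stability limit dx^2/(2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-100.0 * (x - 0.5) ** 2)  # initial condition: a localised hot spot

for _ in range(nt):
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # discrete Laplacian
    u[1:-1] += dt * alpha * lap                     # explicit Euler update
    # endpoints u[0] and u[-1] stay fixed (Dirichlet boundary conditions)

print(f"Peak temperature after {nt} steps: {u.max():.4f}")
```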
“By uniting these rich datasets, we can drive advancements in artificial intelligence not only for scientific discovery, but also for addressing similar problems in everyday life,” said Ben Boyd, PhD student in the Institute of Astronomy.

Gathering the data for those datasets posed a challenge, said team member Ruben Ohana from the Flatiron Institute. The team collaborated with scientists to gather and create data for the project. “The creators of numerical simulations are sometimes sceptical of machine learning because of all the hype, but they’re curious about it and how it can benefit their research and accelerate scientific discovery,” he said.

The Polymathic AI team is now using the datasets to train AI models. In the coming months, they will deploy these models on various tasks to see how successful these well-rounded, well-trained AIs are at tackling complex scientific problems.

“It will be exciting to see if the complexity of these datasets can push AI models to go beyond merely recognising patterns, encouraging them to reason and generalise across scientific domains,” said Dr Payel Mukhopadhyay from the Institute of Astronomy. “Such generalisation is essential if we ever want to build AI models that can truly assist in conducting meaningful science.”

“Until now, we haven’t had a curated, scientific-quality dataset covering such a wide variety of fields,” said Cranmer, who is also a member of Cambridge’s Department of Applied Mathematics and Theoretical Physics. “These datasets are opening the door to true generalist scientific foundation models for the first time. What new scientific principles might we discover? We’re about to find out, and that’s incredibly exciting.”

The Polymathic AI project is run by researchers from the Simons Foundation and its Flatiron Institute, New York University, the University of Cambridge, Princeton University, the French Centre National de la Recherche Scientifique and the Lawrence Berkeley National Laboratory.

Members of the Polymathic AI team from the University of Cambridge include PhD students, postdoctoral researchers and faculty across four departments: the Department of Applied Mathematics and Theoretical Physics, the Department of Pure Mathematics and Mathematical Statistics, the Institute of Astronomy and the Kavli Institute for Cosmology.

Published 2 December 2024.