ֱ̽ of Cambridge - algorithm /taxonomy/subjects/algorithm en Using machine learning to monitor driver ‘workload’ could help improve road safety /research/news/using-machine-learning-to-monitor-driver-workload-could-help-improve-road-safety <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-166065769-dp.jpg?itok=Kiajf2DW" alt="Head up display of traffic information and weather as seen by the driver" title="Head up display of traffic information and weather as seen by the driver, Credit: Coneyl Jay via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽researchers, from the ֱ̽ of Cambridge, working in partnership with Jaguar Land Rover (JLR) used a combination of on-road experiments and machine learning as well as Bayesian filtering techniques to reliably and continuously measure driver ‘workload’. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.</p>&#13; &#13; <p> ֱ̽resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.</p>&#13; &#13; <p>This information could then be incorporated into in-vehicle systems such as infotainment and navigation, displays, advanced driver assistance systems (ADAS) and others. Any driver-vehicle interaction can be then customised to prioritise safety and enhance the user experience, delivering adaptive human-machine interactions. For example, drivers are only alerted at times of low workload, so that the driver can keep their full concentration on the road in more stressful driving scenarios. ֱ̽<a href="https://ieeexplore.ieee.org/document/10244092">results</a> are reported in the journal <em>IEEE Transactions on Intelligent Vehicles</em>.</p>&#13; &#13; <p>“More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”</p>&#13; &#13; <p>A driver’s status – or workload – can change frequently. Driving in a new area, in heavy traffic or poor road conditions, for example, is usually more demanding than a daily commute.</p>&#13; &#13; <p>“If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “ ֱ̽issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”</p>&#13; &#13; <p>There are algorithms for measuring the levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information that’s available in any car, specifically driving performance signals such as steering, acceleration and braking data. 
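</p>&#13; &#13; <p><em>To make the idea concrete, the sketch below is a purely illustrative recursive Bayesian update over a two-state (low/high) workload variable driven by driving-performance features. The feature names, weights and probabilities are invented for illustration and are not taken from the study or its published model.</em></p>&#13; &#13; <pre><code>import numpy as np

# Toy recursive Bayesian filter over two hidden workload states: 0 = low, 1 = high.
# The transition matrix and likelihood model are illustrative guesses, not
# parameters from the Cambridge/JLR study.
TRANSITION = np.array([[0.95, 0.05],   # P(next state | current state)
                       [0.10, 0.90]])

def likelihood(steering_var, brake_rate):
    """Return P(observation | state) for the low- and high-workload states.
    Higher steering variability and braking activity are treated as weak
    evidence of higher workload (purely illustrative)."""
    evidence = 2.0 * steering_var + 1.5 * brake_rate
    p_high = 1.0 / (1.0 + np.exp(1.0 - evidence))   # logistic squash
    return np.array([1.0 - p_high, p_high])

def update(belief, steering_var, brake_rate):
    """One predict-update step: propagate the belief, then weight by the likelihood."""
    predicted = TRANSITION.T @ belief
    posterior = predicted * likelihood(steering_var, brake_rate)
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])   # start undecided
for steering_var, brake_rate in [(0.1, 0.0), (0.8, 0.6), (0.9, 0.7)]:
    belief = update(belief, steering_var, brake_rate)
    print(f"P(high workload) = {belief[1]:.2f}")
</code></pre>&#13; &#13; <p>Any real estimator has further requirements beyond this toy picture.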
It should also be able to consume and fuse different unsynchronised data streams that have different update rates, including from biometric sensors if available.</p>&#13; &#13; <p>To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that would blink at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED light lit up in red and the driver perceived they were in a low workload scenario.</p>&#13; &#13; <p>Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high workload situations, such as busy junctions or a vehicle in front or behind the driver behaving unusually.</p>&#13; &#13; <p> ֱ̽on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real-time, the driver’s instantaneous workload, using several driving performance signals including steering and braking. ֱ̽framework combines macro and micro measures of workload where the former is the driver’s average workload profile and the latter is the instantaneous one.</p>&#13; &#13; <p>“For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”</p>&#13; &#13; <p> ֱ̽research was conducted in collaboration with JLR who did the experimental design and the data collection. It was part of a project sponsored by JLR under the CAPE agreement with the ֱ̽ of Cambridge.</p>&#13; &#13; <p>“This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”</p>&#13; &#13; <p> ֱ̽research at Cambridge was carried out by a team of researchers from the Signal Processing and Communications Laboratory (SigProC), Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (PhD student at the time) and Dr Jiaming Liang, who all worked on the project while based at Cambridge’s Department of Engineering.</p>&#13; &#13; <p> </p>&#13; &#13; <p><em><strong>Reference:</strong><br />&#13; Nermin Caber et al. ‘<a href="https://ieeexplore.ieee.org/document/10244092">Driver Profiling and Bayesian Workload Estimation Using Naturalistic Peripheral Detection Study Data</a>.’ IEEE Transactions on Intelligent Vehicles (2023). 
DOI: 10.1109/TIV.2023.3313419</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Bashar Ahmad</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Coneyl Jay via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Head up display of traffic information and weather as seen by the driver</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 07 Dec 2023 07:48:29 +0000 sc604 243581 at Machine learning models can produce reliable results even with limited training data /research/news/machine-learning-models-can-produce-reliable-results-even-with-limited-training-data <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-1421511938-dp.jpg?itok=q03E5_XB" alt="Digital generated image of multi coloured glowing data over landscape." 
title="Digital generated image of multi coloured glowing data over landscape., Credit: Andriy Onufriyenko via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽researchers, from the ֱ̽ of Cambridge and Cornell ֱ̽, found that for partial differential equations – a class of physics equations that describe how things in the natural world evolve in space and time – machine learning models can produce reliable results even when they are provided with limited data.</p>&#13; &#13; <p>Their <a href="https://www.pnas.org/doi/10.1073/pnas.2303904120">results</a>, reported in the <em>Proceedings of the National Academy of Sciences</em>, could be useful for constructing more time- and cost-efficient machine learning models for applications such as engineering and climate modelling.</p>&#13; &#13; <p>Most machine learning models require large amounts of training data before they can begin returning accurate results. Traditionally, a human will annotate a large volume of data – such as a set of images, for example – to train the model.</p>&#13; &#13; <p>“Using humans to train machine learning models is effective, but it’s also time-consuming and expensive,” said first author Dr Nicolas Boullé, from the Isaac Newton Institute for Mathematical Sciences. “We’re interested to know exactly how little data we actually need to train these models and still get reliable results.”</p>&#13; &#13; <p>Other researchers have been able to train machine learning models with a small amount of data and get excellent results, but how this was achieved has not been well-explained. For their study, Boullé and his co-authors, Diana Halikias and Alex Townsend from Cornell ֱ̽, focused on partial differential equations (PDEs).</p>&#13; &#13; <p>“PDEs are like the building blocks of physics: they can help explain the physical laws of nature, such as how the steady state is held in a melting block of ice,” said Boullé, who is an INI-Simons Foundation Postdoctoral Fellow. “Since they are relatively simple models, we might be able to use them to make some generalisations about why these AI techniques have been so successful in physics.”</p>&#13; &#13; <p> ֱ̽researchers found that PDEs that model diffusion have a structure that is useful for designing AI models. “Using a simple model, you might be able to enforce some of the physics that you already know into the training data set to get better accuracy and performance,” said Boullé.</p>&#13; &#13; <p> ֱ̽researchers constructed an efficient algorithm for predicting the solutions of PDEs under different conditions by exploiting the short and long-range interactions happening. This allowed them to build some mathematical guarantees into the model and determine exactly how much training data was required to end up with a robust model.</p>&#13; &#13; <p>“It depends on the field, but for physics, we found that you can actually do a lot with a very limited amount of data,” said Boullé. “It’s surprising how little data you need to end up with a reliable model. 
<p>“Thanks to the mathematics of these equations, we can exploit their structure to make the models more efficient,” Boullé added.</p>&#13; &#13; <p>The researchers say that their techniques will allow data scientists to open the ‘black box’ of many machine learning models and design new ones that can be interpreted by humans, although future research is still needed.</p>&#13; &#13; <p>“We need to make sure that models are learning the right things, but machine learning for physics is an exciting field – there are lots of interesting maths and physics questions that AI can help us answer,” said Boullé.</p>&#13; &#13; <p> </p>&#13; &#13; <h2>Reference</h2>&#13; &#13; <p><em>Nicolas Boullé, Diana Halikias, and Alex Townsend. ‘<a href="https://www.pnas.org/doi/10.1073/pnas.2303904120">Elliptic PDE learning is provably data-efficient</a>.’ PNAS (2023). DOI: 10.1073/pnas.2303904120</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have determined how to build reliable machine learning models that can understand complex equations in real-world situations while using far less training data than is normally expected.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">It’s surprising how little data you need to end up with a reliable model</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Nicolas Boullé</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Andriy Onufriyenko via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Digital generated image of multi coloured glowing data over landscape.</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />&#13; The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 19 Sep 2023 10:03:16 +0000 sc604 241771 at Phone-based measurements provide fast, accurate information about the health of forests /research/news/phone-based-measurements-provide-fast-accurate-information-about-the-health-of-forests <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/gettyimages-1329369484-crop.jpg?itok=82uzxanr" alt="Treetops seen from a low angle" title="Treetops seen from a low angle, Credit: Baac3nes via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽researchers, from the ֱ̽ of Cambridge, developed the algorithm, which gives an accurate measurement of tree diameter, an important measurement used by scientists to monitor forest health and levels of carbon sequestration.</p>&#13; &#13; <p> ֱ̽algorithm uses low-cost, low-resolution LiDAR sensors that are incorporated into many mobile phones, and provides results that are just as accurate, but much faster, than manual measurement techniques. ֱ̽<a href="https://www.mdpi.com/2072-4292/15/3/772">results</a> are reported in the journal <em>Remote Sensing</em>.</p>&#13; &#13; <p> ֱ̽primary manual measurement used in forest ecology is tree diameter at chest height. These measurements are used to make determinations about the health of trees and the wider forest ecosystem, as well as how much carbon is being sequestered.</p>&#13; &#13; <p>While this method is reliable, since the measurements are taken from the ground, tree by tree, the method is time-consuming. In addition, human error can lead to variations in measurements.</p>&#13; &#13; <p>“When you’re trying to figure out how much carbon a forest is sequestering, these ground-based measurements are hugely valuable, but also time-consuming,” said first author Amelia Holcomb from Cambridge’s <a href="https://www.cst.cam.ac.uk/">Department of Computer Science and Technology</a>. “We wanted to know whether we could automate this process.”</p>&#13; &#13; <p>Some aspects of forest measurement can be carried out using expensive special-purpose LiDAR sensors, but Holcomb and her colleagues wanted to determine whether these measurements could be taken using cheaper, lower-resolution sensors, of the type that are used in some mobile phones for augmented reality applications.</p>&#13; &#13; <p>Other researchers have carried out some forest measurement studies using this type of sensor, however, this has been focused on highly-managed forests where trees are straight, evenly spaced and undergrowth is regularly cleared. 
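</p>&#13; &#13; <p><em>As a rough illustration of the kind of geometry involved (not the published algorithm), the sketch below estimates a trunk diameter from a single row of a phone depth map: find the pixels that are much closer than the background, then convert their angular width and range into a physical width. The field of view and resolution are assumed values.</em></p>&#13; &#13; <pre><code>import numpy as np

# Illustrative only: estimate trunk diameter from one horizontal row of a
# depth image taken at roughly chest height. Not the algorithm from the paper.
HORIZONTAL_FOV = np.radians(60.0)   # assumed camera field of view
IMAGE_WIDTH = 240                   # assumed depth-map resolution

def trunk_diameter_metres(depth_row):
    """depth_row: distance in metres for each pixel of one row of the depth map."""
    background = np.median(depth_row)
    near = np.less(depth_row, 0.7 * background)    # pixels much closer than the scene
    columns = np.flatnonzero(near)
    if columns.size == 0:
        return None                                # no trunk candidate in this row
    width_pixels = columns.max() - columns.min() + 1
    distance = float(np.median(depth_row[near]))   # range to the trunk surface
    angular_width = width_pixels / IMAGE_WIDTH * HORIZONTAL_FOV
    return 2.0 * distance * np.tan(angular_width / 2.0)

# Synthetic example: a roughly 0.4 m wide trunk about 2 m away, 10 m background.
row = np.full(IMAGE_WIDTH, 10.0)
row[100:146] = 2.0
print(f"estimated diameter: {trunk_diameter_metres(row):.2f} m")
</code></pre>&#13; &#13; <p>Real forests are far messier than this idealised picture, with occlusion, leaning stems and cluttered backgrounds.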
Holcomb and her colleagues wanted to test whether these sensors could return accurate results for non-managed forests quickly, automatically, and in a single image.</p>&#13; &#13; <p>“We wanted to develop an algorithm that could be used in more natural forests, and that could deal with things like low-hanging branches, or trees with natural irregularities,” said Holcomb.</p>&#13; &#13; <p> ֱ̽researchers designed an algorithm that uses a smartphone LiDAR sensor to estimate trunk diameter automatically from a single image in realistic field conditions. ֱ̽algorithm was incorporated into a custom-built app for an Android smartphone and is able to return results in near real time.</p>&#13; &#13; <p>To develop the algorithm, the researchers first collected their own dataset by measuring trees manually and taking pictures. Using image processing and computer vision techniques, they were able to train the algorithm to differentiate trunks from large branches, determine which direction trees were leaning in, and other information that could help it refine the information about forests.</p>&#13; &#13; <p> ֱ̽researchers tested the app in three different forests – one each in the UK, US and Canada – in spring, summer and autumn. ֱ̽app was able to detect 100% of tree trunks and had a mean error rate of 8%, which is comparable to the error rate when measuring by hand. However, the app sped up the process significantly and was about four and a half times faster than measuring trees manually.</p>&#13; &#13; <p>“I was surprised the app works as well as it does,” said Holcomb. “Sometimes I like to challenge it with a particularly crowded bit of forest, or a particularly oddly-shaped tree, and I think there’s no way it will get it right, but it does.”</p>&#13; &#13; <p>Since their measurement tool requires no specialised training and uses sensors that are already incorporated into an increasing number of phones, the researchers say that it could be an accurate, low-cost tool for forest measurement, even in complex forest conditions.</p>&#13; &#13; <p> ֱ̽researchers plan to make their app publicly available for Android phones later this spring.</p>&#13; &#13; <p> ֱ̽research was supported in part by the David Cheriton Graduate Scholarship, the Canadian National Research Council, and the Harding Distinguished Postgraduate Scholarship.</p>&#13; &#13; <p><em><strong>Reference:</strong><br />&#13; Amelia Holcomb, Linzhe Tong, and Srinivasan Keshav. ‘<a href="https://www.mdpi.com/2072-4292/15/3/772">Robust Single-Image Tree Diameter Estimation with Mobile Phones</a>.’ Remote Sensing (2023). DOI: 10.3390/rs15030772</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed an algorithm that uses computer vision techniques to accurately measure trees almost five times faster than traditional, manual methods.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Ground-based measurements are hugely valuable, but also time-consuming. 
We wanted to know whether we could automate this process.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Amelia Holcomb</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Baac3nes via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Treetops seen from a low angle</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/social-media/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 07 Mar 2023 01:21:40 +0000 sc604 237431 at Machine learning algorithm predicts how to get the most out of electric vehicle batteries /research/news/machine-learning-algorithm-predicts-how-to-get-the-most-out-of-electric-vehicle-batteries <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/car-charging.jpg?itok=BFjKv9sq" alt="People charging their electric cars at charging station" title="People charging their electric cars at charging station in York, Credit: Monty Rakusen via Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽researchers, from the ֱ̽ of Cambridge, say their algorithm could help drivers, manufacturers and businesses get the most out of the batteries that power electric vehicles by suggesting routes and driving patterns that minimise battery degradation and charging times.</p> <p> ֱ̽team developed a non-invasive way to probe batteries and get a holistic view of battery health. These results were then fed into a machine learning algorithm that can predict how different driving patterns will affect the future health of the battery.</p> <p>If developed commercially, the algorithm could be used to recommend routes that get drivers from point to point in the shortest time without degrading the battery, for example, or recommend the fastest way to charge the battery without causing it to degrade. 
ֱ̽<a href="https://www.nature.com/articles/s41467-022-32422-w">results</a> are reported in the journal <em>Nature Communications</em>.</p> <p> ֱ̽health of a battery, whether it’s in a smartphone or a car, is far more complex than a single number on a screen. “Battery health, like human health, is a multi-dimensional thing, and it can degrade in lots of different ways,” said first author Penelope Jones, from Cambridge’s Cavendish Laboratory. “Most methods of monitoring battery health assume that a battery is always used in the same way. But that’s not how we use batteries in real life. If I’m streaming a TV show on my phone, it’s going to run down the battery a whole lot faster than if I’m using it for messaging. It’s the same with electric cars – how you drive will affect how the battery degrades.”</p> <p>“Most of us will replace our phones well before the battery degrades to the point that it’s unusable, but for cars, the batteries need to last for five, ten years or more,” said <a href="https://www.alpha-lee.com/">Dr Alpha Lee</a>, who led the research. “Battery capacity can change drastically over that time, so we wanted to come up with a better way of checking battery health.”</p> <p> ֱ̽researchers developed a non-invasive probe that sends high-dimensional electrical pulses into a battery and measures the response, providing a series of ‘biomarkers’ of battery health. This method is gentle on the battery and doesn’t cause it to degrade any further.</p> <p> ֱ̽electrical signals from the battery were converted into a description of the battery’s state, which was fed into a machine learning algorithm. ֱ̽algorithm was able to predict how the battery would respond in the next charge-discharge cycle, depending on how quickly the battery was charged and how fast the car would be going the next time it was on the road. Tests with 88 commercial batteries showed that the algorithm did not require any information about previous usage of the battery to make an accurate prediction.</p> <p> ֱ̽experiment focused on lithium cobalt oxide (LCO) cells, which are widely used in rechargeable batteries, but the method is generalisable across the different types of battery chemistries used in electric vehicles today.</p> <p>“This method could unlock value in so many parts of the supply chain, whether you’re a manufacturer, an end user, or a recycler, because it allows us to capture the health of the battery beyond a single number, and because it’s predictive,” said Lee. “It could reduce the time it takes to develop new types of batteries, because we’ll be able to predict how they will degrade under different operating conditions.”</p> <p> ֱ̽researchers say that in addition to manufacturers and drivers, their method could be useful for businesses that operate large fleets of electric vehicles, such as logistics companies. “ ֱ̽framework we’ve developed could help companies optimise how they use their vehicles to improve the overall battery life of the fleet,” said Lee. “There’s so much potential with a framework like this.”</p> <p>“It’s been such an exciting framework to build because it could solve so many of the challenges in the battery field today,” said Jones. “It’s a great time to be involved in the field of battery research, which is so important in helping address climate change by transitioning away from fossil fuels.”</p> <p> ֱ̽researchers are now working with battery manufacturers to accelerate the development of safer, longer-lasting next-generation batteries. 
They are also exploring how their framework could be used to develop optimal fast charging protocols to reduce electric vehicle charging times without causing degradation.</p> <p>The research was supported by the Winton Programme for the Physics of Sustainability, the Ernest Oppenheimer Fund, The Alan Turing Institute and the Royal Society.</p> <p><br /> <em><strong>Reference:</strong><br /> Penelope K Jones, Ulrich Stimming &amp; Alpha A Lee. ‘<a href="https://www.nature.com/articles/s41467-022-32422-w">Impedance-based forecasting of lithium-ion battery performance amid uneven usage</a>.’ Nature Communications (2022). DOI: 10.1038/s41467-022-32422-w</em></p> <p><em><strong>For more information on energy-related research in Cambridge, please visit <a href="https://www.energy.cam.ac.uk/">Energy IRC</a>, which brings together Cambridge’s research knowledge and expertise, in collaboration with global partners, to create solutions for a sustainable and resilient energy landscape for generations to come. </strong></em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed a machine learning algorithm that could help reduce charging times and prolong battery life in electric vehicles by predicting how different driving patterns affect battery performance, improving safety and reliability.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">This method could unlock value in so many parts of the supply chain, whether you’re a manufacturer, an end user, or a recycler, because it allows us to capture the health of the battery beyond a single number</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Alpha Lee</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.gettyimages.co.uk/detail/photo/york-people-charging-their-electric-cars-at-royalty-free-image/1351964126?adppopup=true" target="_blank">Monty Rakusen via Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">People charging their electric cars at charging station in York</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved.
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 23 Aug 2022 09:01:34 +0000 sc604 233851 at Algorithm learns to correct 3D printing errors for different parts, materials and systems /research/news/algorithm-learns-to-correct-3d-printing-errors-for-different-parts-materials-and-systems <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/3d-printer.jpg?itok=IYAmAWWV" alt="Example image of the 3D printer nozzle used by the machine learning algorithm to detect and correct errors in real time. " title="Example image of the 3D printer nozzle used by the machine learning algorithm to detect and correct errors in real time. , Credit: Douglas Brion" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽engineers, from the ֱ̽ of Cambridge, developed a machine learning algorithm that can detect and correct a wide variety of different errors in real time, and can be easily added to new or existing machines to enhance their capabilities. 3D printers using the algorithm could also learn how to print new materials by themselves. <a href="https://www.nature.com/articles/s41467-022-31985-y">Details</a> of their low-cost approach are reported in the journal <em>Nature Communications</em>.</p>&#13; &#13; <p>3D printing has the potential to revolutionise the production of complex and customised parts, such as aircraft components, personalised medical implants, or even intricate sweets, and could also transform manufacturing supply chains. However, it is also vulnerable to production errors, from small-scale inaccuracies and mechanical weaknesses through to total build failures.</p>&#13; &#13; <p>Currently, the way to prevent or correct these errors is for a skilled worker to observe the process. ֱ̽worker must recognise an error (a challenge even for the trained eye), stop the print, remove the part, and adjust settings for a new part. If a new material or printer is used, the process takes more time as the worker learns the new setup. Even then, errors may be missed as workers cannot continuously observe multiple printers at the same time, especially for long prints.</p>&#13; &#13; <p>“3D printing is challenging because there's a lot that can go wrong, and so quite often 3D prints will fail,” said <a href="https://www.sebastianpattinson.com/">Dr Sebastian Pattinson</a> from Cambridge’s Department of Engineering, the paper’s senior author. 
“When that happens, all of the material and time and energy that you used is lost.”</p>&#13; &#13; <p>Engineers have been developing automated 3D printing monitoring, but existing systems can only detect a limited range of errors in one part, one material and one printing system.</p>&#13; &#13; <p>“What’s really needed is a ‘driverless car’ system for 3D printing,” said first author <a href="http://douglasbrion.com/">Douglas Brion</a>, also from the Department of Engineering. “A driverless car would be useless if it only worked on one road or in one town – it needs to learn to generalise across different environments, cities, and even countries. Similarly, a ‘driverless’ printer must work for multiple parts, materials, and printing conditions.”</p>&#13; &#13; <p>Brion and Pattinson say the algorithm they’ve developed could be the ‘driverless car’ engineers have been looking for.</p>&#13; &#13; <p>“What this means is that you could have an algorithm that can look at all of the different printers that you're operating, constantly monitoring and making changes as needed – basically doing what a human can't do,” said Pattinson.</p>&#13; &#13; <p> ֱ̽researchers trained a deep learning computer vision model by showing it around 950,000 images captured automatically during the production of 192 printed objects. Each of the images was labelled with the printer’s settings, such as the speed and temperature of the printing nozzle and flow rate of the printing material. ֱ̽model also received information about how far those settings were from good values, allowing the algorithm to learn how errors arise.</p>&#13; &#13; <p>“Once trained, the algorithm can figure out just by looking at an image which setting is correct and which is wrong – is a particular setting too high or too low, for example, and then apply the appropriate correction,” said Pattinson. “And the cool thing is that printers that use this approach could be continuously gathering data, so the algorithm could be continually improving as well.”</p>&#13; &#13; <p>Using this approach, Brion and Pattinson were able to make an algorithm that is generalisable – in other words, it can be applied to identify and correct errors in unfamiliar objects or materials, or even in new printing systems.</p>&#13; &#13; <p>“When you’re printing with a nozzle, then no matter the material you’re using – polymers, concrete, ketchup, or whatever – you can get similar errors,” said Brion. “For example, if the nozzle is moving too fast, you often end up with blobs of material, or if you’re pushing out too much material, then the printed lines will overlap forming creases.</p>&#13; &#13; <p>“Errors that arise from similar settings will have similar features, no matter what part is being printed or what material is being used. Because our algorithm learned general features shared across different materials, it could say ‘Oh, the printed lines are forming creases, therefore we are likely pushing out too much material’.”</p>&#13; &#13; <p>As a result, the algorithm that was trained using only one kind of material and printing system was able to detect and correct errors in different materials, from engineering polymers to even ketchup and mayonnaise, on a different kind of printing system.</p>&#13; &#13; <p>In future, the trained algorithm could be more efficient and reliable than a human operator at spotting errors. 
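</p>&#13; &#13; <p><em>For orientation only: the published model is described as a multi-head neural network, and the toy sketch below shows what that shape of model can look like, with a shared image backbone and one small head per print setting predicting “too low / good / too high”. The layer sizes and the three example settings are placeholder choices, not the published architecture.</em></p>&#13; &#13; <pre><code>import torch
from torch import nn

# Illustrative multi-head monitor: one shared backbone, one classification head
# per print setting, each predicting 3 classes (0 too low, 1 good, 2 too high).
class MultiHeadPrintMonitor(nn.Module):
    def __init__(self, settings=("flow_rate", "lateral_speed", "hotend_temperature")):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({name: nn.Linear(32, 3) for name in settings})

    def forward(self, images):
        features = self.backbone(images)
        return {name: head(features) for name, head in self.heads.items()}

model = MultiHeadPrintMonitor()
fake_batch = torch.randn(4, 3, 64, 64)        # stand-in for cropped nozzle images
for setting, logits in model(fake_batch).items():
    # A controller could nudge each setting in the direction opposite to the
    # predicted error class, closing the loop while the part is still printing.
    print(setting, logits.argmax(dim=1).tolist())
</code></pre>&#13; &#13; <p>Because corrections of this kind are applied while a part is still being built, problems can be caught long before a print finishes.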
This could be important for quality control in applications where component failure could have serious consequences.</p>&#13; &#13; <p>With the support of Cambridge Enterprise, the ֱ̽’s commercialisation arm, Brion has formed <a href="https://www.matta.ai/">Matta</a>, a spin-out company that will develop the technology for commercial applications.</p>&#13; &#13; <p>“We’re turning our attention to how this might work in high-value industries such as the aerospace, energy, and automotive sectors, where 3D printing technologies are used to manufacture high-performance and expensive parts,” said Brion. “It might take days or weeks to complete a single component at a cost of thousands of pounds. An error that occurs at the start might not be detected until the part is completed and inspected. Our approach would spot the error in real time, significantly improving manufacturing productivity.”</p>&#13; &#13; <p> ֱ̽research was supported by the Engineering and Physical Sciences Research Council, Royal Society, Academy of Medical Sciences, and the Isaac Newton Trust.</p>&#13; &#13; <p> ֱ̽<a href="https://www.repository.cam.ac.uk/handle/1810/339869">full dataset</a> used to train the AI is freely available online. </p>&#13; &#13; <p><em><strong>Reference:</strong><br />&#13; Douglas A J Brion &amp; Sebastian W Pattinson. ‘<a href="https://www.nature.com/articles/s41467-022-31985-y">Generalisable 3D printing error detection and correction via multi-head neural networks</a>.’ Nature Communications (2022). DOI: 10.1038/s41467-022-31985-y</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Engineers have created intelligent 3D printers that can quickly detect and correct errors, even in previously unseen designs, or unfamiliar materials like ketchup and mayonnaise, by learning from the experiences of other machines.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Once trained, the algorithm can figure out just by looking at an image which setting is correct and which is wrong</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Sebastian Pattinson</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Douglas Brion</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Example image of the 3D printer nozzle used by the machine learning algorithm to detect and correct errors in real time. </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. 
Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 16 Aug 2022 15:11:40 +0000 sc604 233791 at Study shows how our brains remain active during familiar, repetitive tasks /research/news/study-shows-how-our-brains-remain-active-during-familiar-repetitive-tasks <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/crop_184.jpg?itok=Cak4vVAh" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Our brains are often likened to computers, with learned skills and memories stored in the activity patterns of billions of nerve cells. However, new research shows that memories of specific events and experiences may never settle down. Instead, the activity patterns that store information can continually change, even when we are not learning anything new.</p>&#13; &#13; <p>Why does this not cause the brain to forget what it has learned? ֱ̽study, from the ֱ̽ of Cambridge, Harvard Medical School and Stanford ֱ̽, reveals how the brain can reliably access stored information despite drastic changes in the brain signals that represent it.</p>&#13; &#13; <p> ֱ̽research, led by <a href="https://www.eng.cam.ac.uk/profiles/tso24">Dr Timothy O’Leary</a> from Cambridge’s Department of Engineering, shows that different parts of our brain may need to relearn and keep track of information in other parts of the brain as it moves around. Their <a href="https://doi.org/10.7554/eLife.51121">study</a>, published in the open-access journal <em>eLife</em>, provides some of the first evidence that constant changes in neural activity are compatible with long term memories of learned skills.</p>&#13; &#13; <p> ֱ̽researchers came to this conclusion through modelling and analysis of data taken from an experiment in which mice were trained to associate a visual cue at the start of a 4.5-metre-long virtual reality maze with turning left or right at a T-junction, before navigating to a reward. 
ֱ̽results of the <a href="https://www.sciencedirect.com/science/article/pii/S0092867417308280">2017 study</a> showed that single nerve cells in the brain continually changed the information they encoded about this learned task, even though the behaviour of the mice remained stable over time.</p>&#13; &#13; <p> ֱ̽experimental data consisted of activity patterns from hundreds of nerve cells recorded simultaneously in a part of the brain that controls and plans movement, recorded at a resolution that is not yet possible in humans.</p>&#13; &#13; <p>“Finding coherent patterns in this large assembly of cells is challenging, much like trying to determine the behaviour of a swarm of insects by watching a random sample of individuals,” said O’Leary. “However, in some respects the brain itself needs to solve a similar task, because other brain areas need to extract and process information from this same population.”</p>&#13; &#13; <p>Nerve cells connect to hundreds or even thousands of their neighbours and extract information by weighting and pooling it. This has a direct analogy with the methods used by pollsters in the run-up to an election: survey results from multiple sources are collected and ‘weighted’ according to their consistency. In this way, a steady pattern can emerge even when individual measurements vary wildly.</p>&#13; &#13; <p> ֱ̽Cambridge group used this principle to construct a decoding algorithm that extracted consistent, hidden patterns within the complex activity of hundreds of cells. They found two things. First, that there was indeed a consistent hidden pattern that could accurately predict the animal’s behaviour. Second, this consistent pattern itself gradually changes over time, but not so drastically that the decoding algorithm couldn’t keep up. This suggests that the brain continually modifies the internal code that relays information between different internal circuits.</p>&#13; &#13; <p>Science fiction explores the possibility of transferring our memories and experiences into hardware devices directly from our brains. If future technology eventually allows us to upload and download our thoughts and memories, we may find that our brain cannot interpret its own activity patterns if they are replayed many years later. ֱ̽concept of an apple - its colour, flavour, taste and the memories associated with it - may remain consistent, but the patterns of activity it evokes in the brain may change completely over time.</p>&#13; &#13; <p>Such conundrums will likely remain speculative for the immediate future, but experimental technology that achieves a limited version of such mind reading is already a reality, as this study shows. Brain-machine interfaces are a rapidly maturing technology, and human neural interfaces that can control prosthetics and external hardware have been in clinical use for over a decade. ֱ̽work from the Cambridge group highlights a major open challenge in extracting reliable information from the brain.</p>&#13; &#13; <p>“Even though we can now monitor brain activity and relate it directly to memories and experiences, the activity patterns themselves continually change over a period of several days,” said <a href="https://www.eng.cam.ac.uk/profiles/tso24">O’Leary</a>, who is a Lecturer in Information Engineering and Medical Neuroscience. 
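</p>&#13; &#13; <p><em>The toy simulation below is only a cartoon of that idea, not the study’s analysis: a population’s encoding of a stable signal drifts a little at every step, while a downstream “reader” keeps re-weighting and pooling the cells so the signal can still be read out. All numbers are arbitrary.</em></p>&#13; &#13; <pre><code>import numpy as np

# Cartoon of a drifting neural code with an adaptive linear readout.
rng = np.random.default_rng(2)
n_neurons, n_steps = 200, 5000

encoding = rng.normal(size=n_neurons)      # how each cell reflects the signal
encoding /= np.linalg.norm(encoding)
decoder = np.zeros(n_neurons)              # downstream readout weights
learning_rate, drift_scale = 0.01, 0.02

errors = []
for t in range(n_steps):
    signal = np.sin(0.01 * t)              # stable quantity to be read out
    activity = encoding * signal + 0.1 * rng.normal(size=n_neurons)

    estimate = decoder @ activity          # weighted pooling across the population
    decoder += learning_rate * (signal - estimate) * activity   # slow re-learning
    errors.append(abs(signal - estimate))

    # The neural code itself drifts a little at every step.
    encoding += drift_scale * rng.normal(size=n_neurons) / np.sqrt(n_neurons)
    encoding /= np.linalg.norm(encoding)

print(f"mean decoding error, first 500 steps: {np.mean(errors[:500]):.3f}")
print(f"mean decoding error, last 500 steps:  {np.mean(errors[-500:]):.3f}")
</code></pre>&#13; &#13; <p>O'Leary says the same picture holds for the real recordings.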
“Our study shows that in spite of this change, we can construct and maintain a relatively stable ‘dictionary’ to read out what an animal is thinking as it navigates a familiar environment.</p>&#13; &#13; <p>“ ֱ̽work suggests that our brains are never at rest, even when we are not learning anything about the external world. This has major implications for our understanding of the brain and for brain-machine interfaces and neural prosthetics.”</p>&#13; &#13; <p><strong><em>References:</em></strong><br />&#13; <em>Michael E. Rule et al. ‘</em><a href="https://doi.org/10.7554/eLife.51121"><em>Stable task information from an unstable neural population</em></a><em>’. eLife (2020). DOI: 10.7554/eLife.51121</em></p>&#13; &#13; <p> </p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>New research, based on earlier results in mice, suggests that our brains are never at rest, even when we are not learning anything about the world around us.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Finding coherent patterns in this large assembly of cells is challenging, much like trying to determine the behaviour of a swarm of insects by watching a random sample of individuals</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Timothy O&#039;Leary</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 14 Jul 2020 07:00:00 +0000 sc604 216232 at Online hate speech could be contained like a computer virus, say researchers /research/news/online-hate-speech-could-be-contained-like-a-computer-virus-say-researchers <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/fig6web.jpg?itok=eYI7rif7" alt="Screenshot of system" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽spread of hate speech via social media could be tackled using the same 'quarantine' approach deployed to combat malicious software, according to ֱ̽ of Cambridge researchers.</p>&#13; &#13; <p>Definitions of hate speech vary depending on nation, law and platform, and just blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.</p>&#13; &#13; <p>As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended "psychological harm" is inflicted, with armies of moderators required to judge every case.</p>&#13; &#13; <p>This is the new front line of an ancient debate: freedom of speech versus poisonous language.</p>&#13; &#13; <p>Now, an engineer and a linguist have published a proposal in the journal <em><a href="https://link.springer.com/article/10.1007/s10676-019-09516-z">Ethics and Information Technology</a></em> that harnesses cyber security techniques to give control to those targeted, without resorting to censorship.</p>&#13; &#13; <p>Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can provide a score for the likelihood of an online message containing forms of hate speech.</p>&#13; &#13; <p>As these algorithms get refined, potential hate speech could be identified and "quarantined". Users would receive a warning alert with a "Hate O'Meter" – the hate speech severity score – the sender's name, and an option to view the content or delete unseen.</p>&#13; &#13; <p>This approach is akin to spam and malware filters, and researchers from the 'Giving Voice to Digital Democracies' project believe it could dramatically reduce the amount of hate speech people are forced to experience. They are aiming to have a prototype ready in early 2020.</p>&#13; &#13; <p>"Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining," said co-author and linguist Dr Stefanie Ullman. 
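</p>&#13; &#13; <p><em>The sketch below is only meant to make the proposed flow tangible: score a message, compare the score against a user-chosen sensitivity, and quarantine it with a Hate O'Meter read-out rather than deleting it outright. The word-list scorer is a crude placeholder; the project's actual classifier is trained on labelled examples and analyses sentence structure and context.</em></p>&#13; &#13; <pre><code>from dataclasses import dataclass

# Illustrative quarantine flow only; the scoring model is a placeholder.
@dataclass
class QuarantineDecision:
    severity: float        # the "Hate O'Meter" reading, between 0 and 1
    sender: str
    quarantined: bool

def hate_score(message):
    """Placeholder severity model. A real system would use a trained classifier
    over sentence structure and context, not a word list."""
    flagged = {"exterminate", "vermin"}
    words = (w.strip(".,!?").lower() for w in message.split())
    hits = sum(w in flagged for w in words)
    return min(1.0, hits / 2.0)

def screen(message, sender, sensitivity=0.4):
    """Quarantine a message when its severity reaches the user's chosen threshold.
    The user, not the platform, decides whether to view it or delete it unseen."""
    severity = hate_score(message)
    return QuarantineDecision(severity, sender, quarantined=severity >= sensitivity)

decision = screen("we will exterminate you vermin", sender="anonymous_account")
if decision.quarantined:
    print(f"Quarantined. Hate O'Meter: {decision.severity:.2f}, from {decision.sender}")
    print("Options: [view anyway] [delete unseen]")
</code></pre>&#13; &#13; <p>Ullman notes that much of the problem is itself automated.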
"In fact, a lot of hate speech is actually generated by software such as Twitter bots."</p>&#13; &#13; <p>"Companies like Facebook, Twitter and Google generally respond reactively to hate speech," said co-author and engineer Dr Marcus Tomalin. "This may be okay for those who don't encounter it often. For others it's too little, too late."</p>&#13; &#13; <p>"Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation," he said.</p>&#13; &#13; <p>Former US Secretary of State Hillary Clinton <a href="https://www.youtube.com/watch?v=Sz7eDCDpw-Y&amp;feature=youtu.be">recently told a UK audience</a> that hate speech posed a "threat to democracies", in the wake of many women MPs <a href="https://www.theguardian.com/politics/2019/oct/31/alarm-over-number-female-mps-stepping-down-after-abuse">citing online abuse</a> as part of the reason they will no longer stand for election.</p>&#13; &#13; <p>While in a <a href="https://about.fb.com/news/2019/10/mark-zuckerberg-stands-for-voice-and-free-expression/">Georgetown ֱ̽ address</a>, Facebook CEO Mark Zuckerberg spoke of "broad disagreements over what qualifies as hate" and argued: "we should err on the side of greater expression".</p>&#13; &#13; <p> ֱ̽researchers say their proposal is not a magic bullet, but it does sit between the "extreme libertarian and authoritarian approaches" of either entirely permitting or prohibiting certain language online.</p>&#13; &#13; <p>Importantly, the user becomes the arbiter. "Many people don't like the idea of an unelected corporation or micromanaging government deciding what we can and can't say to each other," said Tomalin.</p>&#13; &#13; <p>"Our system will flag when you should be careful, but it's always your call. It doesn't stop people posting or viewing what they like, but it gives much needed control to those being inundated with hate."</p>&#13; &#13; <p>In the paper, the researchers refer to detection algorithms achieving 60% accuracy – not much better than chance. Tomalin's machine learning lab has now got this up to 80%, and he anticipates continued improvement of the mathematical modeling.</p>&#13; &#13; <p>Meanwhile, Ullman gathers more 'training data': verified hate speech from which the algorithms can learn. This helps refine the 'confidence scores' that determine a quarantine and subsequent Hate O'Meter read-out, which could be set like a sensitivity dial depending on user preference.</p>&#13; &#13; <p>A basic example might involve a word like 'bitch': a misogynistic slur, but also a legitimate term in contexts such as dog breeding. It's the algorithmic analysis of where such a word sits syntactically - the types of surrounding words and semantic relations between them - that informs the hate speech score.</p>&#13; &#13; <p>"Identifying individual keywords isn't enough, we are looking at entire sentence structures and far beyond. 
<p>"Identifying individual keywords isn't enough; we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process," said Ullman.</p>&#13; &#13; <p>Added Tomalin: "Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses."</p>&#13; &#13; <p>However, the researchers, who work in Cambridge's <a href="https://www.crassh.cam.ac.uk/">Centre for Research in the Arts, Social Sciences and Humanities (CRASSH)</a>, say that – as with computer viruses – there will always be an arms race between hate speech and systems for limiting it.</p>&#13; &#13; <p>The project has also begun to look at "counter-speech": the ways people respond to hate speech. The researchers intend to feed into debates around how virtual assistants such as 'Siri' should respond to threats and intimidation.</p>&#13; &#13; <p>The work has been funded by the <a href="https://hscif.org/">International Foundation for the Humanities and Social Change</a>.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Artificial intelligence is being developed that will allow advisory 'quarantining' of hate speech in a manner akin to malware filters – offering users a way to control exposure to 'hateful content' without resorting to censorship.</p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">We can empower those at the receiving end of the hate speech poisoning our online discourses</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Marcus Tomalin</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 17 Dec 2019 17:37:13 +0000 fpjl2 210032 at Driverless cars working together can speed up traffic by 35 percent /research/news/driverless-cars-working-together-can-speed-up-traffic-by-35-percent <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/crop_116.jpg?itok=eUXemmDy" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The researchers, from the University of Cambridge, programmed a small fleet of miniature robotic cars to drive on a multi-lane track and observed how the traffic flow changed when one of the cars stopped.</p> <p>When the cars were not driving cooperatively, any cars behind the stopped car had to stop or slow down and wait for a gap in the traffic, as would typically happen on a real road. A queue quickly formed behind the stopped car and overall traffic flow was slowed.</p> <p>However, when the cars were communicating with each other and driving cooperatively, as soon as one car stopped in the inner lane, it sent a signal to all the other cars. Cars in the outer lane that were in immediate proximity of the stopped car slowed down slightly so that cars in the inner lane were able to quickly pass the stopped car without having to stop or slow down significantly.</p> 
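<p>In broad terms, each cooperating car applies a simple rule when it receives the ‘stopped vehicle’ broadcast: if it is blocked in the same lane, it plans to merge past; if it is alongside the stoppage in the other lane, it eases off slightly to make room. The Python sketch below illustrates that rule only; the lane labels, positions and distance threshold are assumed values for illustration, not the code used in the experiments.</p> <pre><code>
INNER, OUTER = 0, 1
MAKE_SPACE_RANGE = 1.5   # metres on the scaled track; assumed value

def react_to_stop_signal(my_lane, my_position, stopped_lane, stopped_position):
    """Decide how a cooperating car reacts to a broadcast saying that
    another car has stopped. Positions are distances along the track."""
    gap = stopped_position - my_position   # positive if the stoppage is ahead
    if my_lane == stopped_lane and gap > 0:
        return "merge_past"      # blocked: move into the free lane
    if my_lane != stopped_lane and abs(gap) <= MAKE_SPACE_RANGE:
        return "slow_slightly"   # alongside: make room for merging cars
    return "carry_on"

# An outer-lane car just behind the stopped inner-lane car eases off.
print(react_to_stop_signal(OUTER, 9.2, INNER, 10.0))   # prints: slow_slightly
</code></pre> 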
<p>Additionally, when a human-controlled car was put on the ‘road’ with the autonomous cars and moved around the track in an aggressive manner, the other cars were able to give way to avoid the aggressive driver, improving safety.</p> <p>The <a href="https://arxiv.org/abs/1902.06133">results</a>, to be presented today at the International Conference on Robotics and Automation (ICRA) in Montréal, will be useful for studying how autonomous cars can communicate with each other, and with cars controlled by human drivers, on real roads in the future.</p> <p>“Autonomous cars could fix a lot of different problems associated with driving in cities, but there needs to be a way for them to work together,” said co-author Michael He, an undergraduate student at St John’s College, who designed the <a href="https://github.com/proroklab/minicar">algorithms</a> for the experiment.</p> <p>“If different automotive manufacturers are all developing their own autonomous cars with their own software, those cars all need to communicate with each other effectively,” said co-author Nicholas Hyldmar, an undergraduate student at Downing College, who designed much of the hardware for the experiment.</p> <p>The two students completed the work as part of an undergraduate research project in summer 2018, in the lab of Dr Amanda Prorok from Cambridge’s Department of Computer Science and Technology.</p> <p>Many existing tests of multiple autonomous cars are carried out digitally, or with scale models that are either too large or too expensive for indoor experiments with fleets of cars.</p> <p>Starting with inexpensive scale models of commercially available vehicles with realistic steering systems, the Cambridge researchers fitted the cars with motion capture sensors and a Raspberry Pi, so that the cars could communicate via wifi.</p> <p>They then adapted a lane-changing algorithm for autonomous cars to work with a fleet of cars. The original algorithm decides when a car should change lanes, based on whether it is safe to do so and whether changing lanes would help the car move through traffic more quickly. The adapted algorithm allows for cars to be packed more closely when changing lanes and adds a safety constraint to prevent crashes when speeds are low. A second algorithm allowed the cars to detect a projected car in front of them and make space for it.</p> <p>They then tested the fleet in ‘egocentric’ and ‘cooperative’ driving modes, using both normal and aggressive driving behaviours, and observed how the fleet reacted to a stopped car. In the normal mode, cooperative driving improved traffic flow by 35% over egocentric driving, while for aggressive driving, the improvement was 45%. The researchers then tested how the fleet reacted to a single car controlled by a human via a joystick.</p> 
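<p>The lane-changing rule described above reduces to two checks – the manoeuvre must be safe, and it must offer a clear speed benefit – with a minimum clearance enforced so that the tighter packing cannot cause collisions at low speed. The Python sketch below is a loose illustration of that logic; the gap and speed thresholds are made-up values rather than those of the published algorithm.</p> <pre><code>
def should_change_lane(gap_ahead, gap_behind, my_speed, target_lane_speed):
    """Return True if moving into the neighbouring lane is both safe and
    worthwhile for this car."""
    # Required clearance shrinks with speed to allow tighter packing, but a
    # hard minimum prevents collisions when the cars are barely moving.
    required_gap = max(0.3, 0.8 * my_speed)          # metres; assumed values
    safe = gap_ahead > required_gap and gap_behind > required_gap
    worthwhile = target_lane_speed > 1.1 * my_speed  # needs ~10% speed gain
    return safe and worthwhile

# A car crawling behind a stoppage checks the faster outer lane.
print(should_change_lane(gap_ahead=0.9, gap_behind=0.7,
                         my_speed=0.1, target_lane_speed=0.5))   # prints: True
</code></pre> 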
<p>“Our design allows for a wide range of practical, low-cost experiments to be carried out on autonomous cars,” said Prorok. “For autonomous cars to be safely used on real roads, we need to know how they will interact with each other to improve safety and traffic flow.”</p> <p>In future work, the researchers plan to use the fleet to test multi-car systems in more complex scenarios including roads with more lanes, intersections and a wider range of vehicle types.</p> <p><em><strong>Reference:</strong></em><br /> <em>Nicholas Hyldmar, Yijun He, Amanda Prorok. ‘A Fleet of Miniature Cars for Experiments in Cooperative Driving.’ Paper presented at the <a href="https://ras.papercept.net/conferences/conferences/ICRA19/program/ICRA19_ContentListWeb_1.html#moc1-23_02">International Conference on Robotics and Automation</a> (ICRA 2019). Montréal, Canada. </em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A fleet of driverless cars working together to keep traffic moving smoothly can improve overall traffic flow by at least 35 percent, researchers have shown.</p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For autonomous cars to be safely used on real roads, we need to know how they will interact with each other</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Amanda Prorok</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-148222" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/148222">Can cars talk to each other?</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/e0LIU1Sf6p0?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Sun, 19 May 2019 23:00:45 +0000 sc604 205432 at