University of Cambridge – Simon Godsill

Using machine learning to monitor driver ‘workload’ could help improve road safety

Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.

[Image: Head-up display of traffic information and weather as seen by the driver. Credit: Coneyl Jay via Getty Images]

The researchers, from the University of Cambridge, working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments, machine learning and Bayesian filtering techniques to reliably and continuously measure driver ‘workload’. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.

The resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.

This information could then be incorporated into in-vehicle systems such as infotainment and navigation, displays, advanced driver assistance systems (ADAS) and others. Any driver-vehicle interaction can then be customised to prioritise safety and enhance the user experience, delivering adaptive human-machine interactions. For example, drivers would only be alerted at times of low workload, so that they can keep their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles (https://ieeexplore.ieee.org/document/10244092).

“More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”

A driver’s status – or workload – can change frequently. Driving in a new area, in heavy traffic or in poor road conditions, for example, is usually more demanding than a daily commute.

“If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”

There are algorithms for measuring the levels of driver demand using eye-gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information that’s available in any car, specifically driving performance signals such as steering, acceleration and braking data.
It should also be able to consume and fuse different unsynchronised data streams with different update rates, including from biometric sensors if available.

To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that blinked at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED lit up in red and they perceived they were in a low-workload scenario.

Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high-workload situations, such as busy junctions or a vehicle in front or behind the driver behaving unusually.

The on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real time, the driver’s instantaneous workload, using several driving performance signals including steering and braking. The framework combines macro and micro measures of workload, where the former is the driver’s average workload profile and the latter is the instantaneous one.

“For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”
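The article doesn’t spell out the paper’s exact model, but the idea of combining a per-driver average (macro) profile with a sequentially updated instantaneous (micro) estimate can be illustrated with a simple one-dimensional Bayesian filter. The sketch below is a minimal illustration under assumed dynamics: the `WorkloadFilter` class, its parameter values and the mapping from steering/braking signals to a scalar observation are all hypothetical, not the authors’ published formulation.

```python
# Minimal sketch: a 1-D latent "workload" state tracked with a Kalman-style
# filter, mean-reverting towards a macro (per-driver average) profile and
# updated from a scalar feature derived from driving signals. All parameters
# and the feature mapping are illustrative assumptions.

class WorkloadFilter:
    def __init__(self, macro_profile: float, q: float = 0.05, r: float = 0.4,
                 alpha: float = 0.9):
        self.macro = macro_profile   # driver's average workload (macro measure)
        self.alpha = alpha           # mean-reversion towards the macro profile
        self.q = q                   # process noise variance
        self.r = r                   # observation noise variance
        self.m = macro_profile       # posterior mean (instantaneous workload)
        self.p = 1.0                 # posterior variance

    def step(self, steering_rate: float, brake_pressure: float) -> float:
        # Predict: workload drifts back towards the macro profile
        m_pred = self.macro + self.alpha * (self.m - self.macro)
        p_pred = (self.alpha ** 2) * self.p + self.q
        # Hypothetical observation: busier steering/braking -> higher workload
        z = 0.6 * abs(steering_rate) + 0.4 * brake_pressure
        # Standard Kalman update with an identity observation model
        k = p_pred / (p_pred + self.r)
        self.m = m_pred + k * (z - m_pred)
        self.p = (1.0 - k) * p_pred
        return self.m

# Usage: feed per-frame driving signals; read off the instantaneous estimate.
f = WorkloadFilter(macro_profile=0.3)
for steer, brake in [(0.1, 0.0), (0.8, 0.5), (1.2, 0.9), (0.2, 0.1)]:
    print(round(f.step(steer, brake), 3))
```

Because the filter's prior is anchored to the macro profile, the same code adapts to a new driver simply by swapping in a different `macro_profile`, which is loosely the "adapt on the go" behaviour Ahmad describes.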
The research was conducted in collaboration with JLR, who carried out the experimental design and the data collection. It was part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.

“This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface, Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”

The research at Cambridge was carried out by a team from the Signal Processing and Communications Laboratory (SigProC), Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (a PhD student at the time) and Dr Jiaming Liang, who all worked on the project while based at Cambridge’s Department of Engineering.

Reference:
Nermin Caber et al. ‘Driver Profiling and Bayesian Workload Estimation Using Naturalistic Peripheral Detection Study Data.’ IEEE Transactions on Intelligent Vehicles (2023). DOI: 10.1109/TIV.2023.3313419 (https://ieeexplore.ieee.org/document/10244092)
Real-time drone intent monitoring could enable safer use of drones and prevent a repeat of 2018 Gatwick incident

Researchers have developed a real-time approach that can help prevent incidents like the large-scale disruption at London’s Gatwick Airport in 2018, where possible drone sightings at the perimeter of the airport caused the cancellation of hundreds of flights.

[Image: Drone and city skyline. Credit: Goh Rhy Yan via Unsplash]

The researchers, from the University of Cambridge, used a combination of statistical techniques and radar data to predict the flight path of a drone, and whether it intends to enter a restricted airspace, for instance around a civilian airport.

Their solution could help prevent a repeat of the Gatwick incident, as it can spot drones before they enter restricted airspace and determine, early, whether their future actions are likely to pose a threat to other aircraft.

This new predictive capability can enable automated decision-making and significantly reduce the workload on drone surveillance system operators by offering actionable information on potential threats, facilitating timely and proportionate responses.

Real radar data from live drone trials at several locations was used to validate the new approach. Some of the results will be reported today (15 September) at the Sensor Signal Processing for Defence Conference in Edinburgh (https://sspd.eng.ed.ac.uk/programme).

Drones have become ubiquitous over the past several years, with widespread applications in agriculture, surveying and e-commerce, among other fields. However, they can also be a nuisance or present a potential safety risk, especially with the wide availability of cheap and increasingly capable platforms.

A few days before Christmas 2018, reported drone sightings near the perimeter of Gatwick Airport caused hundreds of flights to be disrupted due to the possible risk of collision. No culprit was found.

“While we don’t fully know what happened at Gatwick, the incident highlighted the potential risk drones can pose to the public if they are misused, whether that’s done maliciously or completely innocently,” said paper co-author Dr Bashar Ahmad, who carried out the research while based at Cambridge’s Department of Engineering.
“It’s crucial for future drone surveillance systems to have predictive capabilities for revealing, as early as possible, a drone with malicious intent or anomalous behaviour.”

To aid with air traffic control and prevent possible collisions, commercial airplanes report their location every few minutes. However, there is no such requirement for drones.

“There needs to be some sort of automated equivalent to air traffic control for drones,” said Professor Simon Godsill from Cambridge’s Department of Engineering, who led the project. “But unlike large and fast-moving targets, like a passenger jet, drones are small, agile and slow-moving, which makes them difficult to track. They can also easily be mistaken for birds, and vice versa.”

“We need to spot threats as early as possible, but we also need to be careful not to overreact, since closing civilian airspace is a drastic and highly disruptive measure that we want to avoid, especially if it ends up being a false alarm,” said first author Dr Jiaming Liang, also from the Department of Engineering, who developed the underlying algorithms with Godsill.

There are several potential ways to monitor the space around a civilian airport. A typical drone surveillance solution can use a combination of several sensors, such as radar, radio frequency detectors and cameras, but it is often expensive and labour-intensive to operate.

Using Bayesian statistical techniques, the Cambridge researchers built a solution that would flag only those drones which pose a threat, and offer a way to prioritise them. A threat is defined as a drone that intends to enter restricted airspace or displays an unusual flying pattern.

“We need to know this before it happens, not after it happens,” said Godsill. “This way, if a drone is getting too close, it could be possible to warn the drone operator. For obvious safety reasons, it’s prohibited to disable a drone in civilian airspace, so the only option is to close the airspace. Our goal is to make sure airport authorities don’t have to do this unless the threat is a real one.”

The software-based solution uses a stochastic, or random, model to determine the underlying intent of the drone, which can change dynamically over time. Most drones navigate using waypoints, meaning they travel from one point to the next, with a single journey made up of multiple points.

In tests using real radar data, the Cambridge-developed solution was able to identify drones before they reached their next waypoint. Based on a drone’s velocity, trajectory and other data, it was able to predict the probability of any given drone reaching the next waypoint in real time.

“In tests, our system was able to spot potential threats in seconds, but in a real scenario those seconds or minutes can make the difference between an incident happening or not,” said Liang. “It could give time to warn incoming flights about the threat so that no one gets hurt.”
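The published model isn’t detailed in this article, but the flavour of sequential Bayesian intent inference it describes can be sketched simply: maintain a posterior over candidate destinations and update it from how consistently the drone’s observed velocity points towards each one. Everything below — the waypoint map, the Gaussian-style heading likelihood and the 0.8 alert threshold — is an illustrative assumption, not the published algorithm.

```python
# Minimal sketch of Bayesian intent inference over candidate destinations:
# each candidate waypoint carries a posterior probability, updated at every
# radar fix from the agreement between the drone's heading and the bearing
# to that waypoint. Map, likelihood and threshold are hypothetical.
import math

WAYPOINTS = {"park": (0.0, 900.0), "runway": (800.0, 850.0)}  # hypothetical map
RESTRICTED = {"runway"}
posterior = {name: 1.0 / len(WAYPOINTS) for name in WAYPOINTS}

def update(pos, vel, kappa=4.0):
    """One Bayesian update from a (position, velocity) radar observation."""
    global posterior
    for name, (wx, wy) in WAYPOINTS.items():
        # Bearing from the current position to this candidate waypoint
        bearing = math.atan2(wy - pos[1], wx - pos[0])
        heading = math.atan2(vel[1], vel[0])
        # Wrapped angular error between heading and bearing
        err = math.atan2(math.sin(heading - bearing), math.cos(heading - bearing))
        # Likelihood is highest when the drone flies towards this waypoint
        posterior[name] *= math.exp(-kappa * err ** 2)
    total = sum(posterior.values())
    posterior = {k: v / total for k, v in posterior.items()}

# Usage: stream radar fixes; raise an alert early if restricted intent dominates.
track = [((100, 100), (5, 6)), ((105, 106), (6, 6)), ((111, 112), (7, 6))]
for pos, vel in track:
    update(pos, vel)
    threat = sum(posterior[n] for n in RESTRICTED)
    if threat > 0.8:
        print(f"ALERT: P(restricted destination) = {threat:.2f}")
print(posterior)
```

Because the posterior sharpens with every fix, a drone tracking steadily towards restricted airspace is flagged well before it arrives — the “before it happens, not after” behaviour Godsill describes.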
‘<a href="https://sspd.eng.ed.ac.uk/programme">Detection of Malicious Intent in Non-cooperative Drone Surveillance</a>.’ Paper presented at the Sensor Signal Processing for Defence conference. Edinburgh, UK. 14-15 September 2021. <a href="https://sspd.eng.ed.ac.uk/">https://sspd.eng.ed.ac.uk/</a></em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed a real-time approach that can help prevent incidents like the large-scale disruption at London’s Gatwick Airport in 2018, where possible drone sightings at the perimeter of the airport caused the cancellation of hundreds of flights.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">While we don’t fully know what happened at Gatwick, the incident highlighted the potential risk drones can pose to the public if they are misused</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Bashar Ahmad</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://unsplash.com/photos/silhouette-of-quadcopter-drone-hovering-near-the-city-p_5BnqHfz3Y" target="_blank">Goh Rhy Yan via Unsplash</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Drone and city skyline</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
AI-based ‘no-touch touchscreen’ could reduce risk of pathogen spread from surfaces

A ‘no-touch touchscreen’ developed for use in cars could also have widespread applications in a post-COVID-19 world, by reducing the risk of transmission of pathogens on surfaces.

[Image: Predictive touch. Credit: Jaguar Land Rover]

The patented technology, known as ‘predictive touch’, was developed by engineers at the University of Cambridge as part of a research collaboration with Jaguar Land Rover. It uses a combination of artificial intelligence and sensor technology to predict a user’s intended target on touchscreens and other interactive displays or control panels, selecting the correct item before the user’s hand reaches the display.

More and more passenger cars have touchscreen technology to control entertainment, navigation or temperature control systems. However, users can often miss the correct item – for example due to acceleration or vibrations from road conditions – and have to reselect, meaning their attention is taken off the road and the risk of an accident increases.

In lab-based tests, driving simulators and road-based trials, the predictive touch technology was able to reduce interaction effort and time by up to 50%, thanks to its ability to predict the user’s intended target with high accuracy early in the pointing task.

As lockdown restrictions around the world continue to ease, the researchers say the technology could also be useful in a post-COVID-19 world. Many everyday consumer transactions are conducted using touchscreens: ticketing at rail stations or cinemas, ATMs, check-in kiosks at airports and self-service checkouts in supermarkets, as well as many industrial and manufacturing applications. Eliminating the need to touch a screen or other interactive display could reduce the risk of spreading pathogens – such as the common cold, influenza or even coronavirus – from surfaces.

In addition, the technology could be incorporated into smartphones, and could be useful while walking or jogging, allowing users to easily and accurately select items without the need for any physical contact.
It even works in situations such as a moving car on a bumpy road, or if the user has a motor disability which causes a tremor or sudden hand jerks, such as Parkinson’s disease or cerebral palsy.

“Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that’s driving a car or changing the music on your phone while you’re running,” said Professor Simon Godsill from Cambridge’s Department of Engineering, who led the project. “We also know that certain pathogens can be transmitted via surfaces, so this technology could help reduce the risk for that type of transmission.”

The technology uses machine intelligence to determine the item the user intends to select on the screen early in the pointing task, speeding up the interaction. It uses a gesture tracker, including vision-based or RF-based sensors, which are increasingly common in consumer electronics; contextual information such as the user profile, interface design and environmental conditions; and data available from other sensors, such as an eye-gaze tracker, to infer the user’s intent in real time.

“This technology also offers us the chance to make vehicles safer by reducing the cognitive load on drivers and increasing the amount of time they can spend focused on the road ahead. This is a key part of our Destination Zero journey,” said Lee Skrypchuk, Human Machine Interface Technical Specialist at Jaguar Land Rover.

It could also be used for displays that do not have a physical surface, such as 2D or 3D projections or holograms. It also promotes inclusive design practices and offers additional design flexibility, since the interface functionality can be seamlessly personalised for given users, and the display size or location is no longer constrained by the user’s ability to reach and touch it.

“Our technology has numerous advantages over more basic mid-air interaction techniques or conventional gesture recognition, because it supports intuitive interactions with legacy interface designs and doesn’t require any learning on the part of the user,” said Dr Bashar Ahmad, who led the development of the technology and the underlying algorithms with Professor Godsill. “It fundamentally relies on the system to predict what the user intends, and can be incorporated into both new and existing touchscreens and other interactive display technologies.”

This software-based solution for contactless interactions has reached high technology readiness levels and can be seamlessly integrated into existing touchscreens and interactive displays, so long as the correct sensory data is available to support the machine learning algorithm.

The technology was developed between 2012 and 2018 by the Centre for Advanced Photonics and Electronics (CAPE) as part of the CAPE Motion Adaptive Touchscreen System for Automotive – MATSA (1 and 2) project.
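As a rough illustration of this kind of intent inference — not the patented predictive-touch algorithm — the sketch below scores each on-screen item by how consistently a fingertip trajectory from a gesture tracker points towards it, and pre-selects an item once its weight dominates. The item layout, the scoring function and the selection threshold are invented for the example.

```python
# Minimal sketch of early intent prediction for a "predictive touch" style
# interface: accumulate, over recent fingertip samples, how well each motion
# step points at each on-screen item, then turn the scores into weights.
import math

ITEMS = {"radio": (0.2, 0.5), "map": (0.5, 0.5), "climate": (0.8, 0.5)}  # hypothetical GUI

def predict_intent(trajectory, beta=30.0):
    """trajectory: recent fingertip samples as (x, y) in screen coordinates."""
    scores = {name: 0.0 for name in ITEMS}
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        if step == 0:
            continue
        for name, (ix, iy) in ITEMS.items():
            dist = math.hypot(ix - x0, iy - y0)
            if dist == 0:
                continue
            # Cosine between the motion step and the direction to the item
            cos = ((x1 - x0) * (ix - x0) + (y1 - y0) * (iy - y0)) / (step * dist)
            scores[name] += beta * (cos - 1.0)  # log-likelihood-style penalty
    # Softmax over accumulated scores -> posterior-like weights per item
    mx = max(scores.values())
    exp = {n: math.exp(s - mx) for n, s in scores.items()}
    z = sum(exp.values())
    return {n: e / z for n, e in exp.items()}

# Usage: select early, before contact, once one item's weight is dominant.
samples = [(0.45, 0.1), (0.46, 0.2), (0.48, 0.3), (0.49, 0.38)]
weights = predict_intent(samples)
best = max(weights, key=weights.get)
if weights[best] > 0.9:
    print(f"pre-select '{best}' ({weights[best]:.2f})")
```

In a real system the weights would be fused with the contextual cues mentioned above (user profile, interface design, eye gaze) before committing to a selection; here the trajectory alone decides.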
[Video: Can Touchless screens prevent future epidemics and car accidents?]