University of Cambridge - Driverless cars /taxonomy/subjects/driverless-cars en 3D holographic head-up display could improve road safety /stories/holographicdisplay <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have developed the first LiDAR-based augmented reality head-up display for use in vehicles. Tests on a prototype version of the technology suggest that it could improve road safety by ‘seeing through’ objects to alert drivers to potential hazards without distracting them.</p> </p></div></div></div> Mon, 26 Apr 2021 11:45:57 +0000 sc604 223651 at Driverless cars working together can speed up traffic by 35 percent /research/news/driverless-cars-working-together-can-speed-up-traffic-by-35-percent <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/crop_116.jpg?itok=eUXemmDy" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The researchers, from the University of Cambridge, programmed a small fleet of miniature robotic cars to drive on a multi-lane track and observed how the traffic flow changed when one of the cars stopped.</p> <p>When the cars were not driving cooperatively, any cars behind the stopped car had to stop or slow down and wait for a gap in the traffic, as would typically happen on a real road. A queue quickly formed behind the stopped car and overall traffic flow was slowed.</p> <p>However, when the cars were communicating with each other and driving cooperatively, as soon as one car stopped in the inner lane, it sent a signal to all the other cars.
Cars in the outer lane that were in immediate proximity of the stopped car slowed down slightly, so that cars in the inner lane were able to quickly pass the stopped car without having to stop or slow down significantly.</p> <p>Additionally, when a human-controlled driver was put on the ‘road’ with the autonomous cars and moved around the track in an aggressive manner, the other cars were able to give way to avoid the aggressive driver, improving safety.</p> <p>The <a href="https://arxiv.org/abs/1902.06133">results</a>, to be presented today at the International Conference on Robotics and Automation (ICRA) in Montréal, will be useful for studying how autonomous cars can communicate with each other, and with cars controlled by human drivers, on real roads in the future.</p> <p>“Autonomous cars could fix a lot of different problems associated with driving in cities, but there needs to be a way for them to work together,” said co-author Michael He, an undergraduate student at St John’s College, who designed the <a href="https://github.com/proroklab/minicar">algorithms</a> for the experiment.</p> <p>“If different automotive manufacturers are all developing their own autonomous cars with their own software, those cars all need to communicate with each other effectively,” said co-author Nicholas Hyldmar, an undergraduate student at Downing College, who designed much of the hardware for the experiment.</p> <p>The two students completed the work as part of an undergraduate research project in summer 2018, in the lab of Dr Amanda Prorok from Cambridge’s Department of Computer Science and Technology.</p> <p>Many existing tests of multiple autonomous cars are carried out digitally, or with scale models that are too large or too expensive for indoor experiments with entire fleets.</p> <p>Starting with inexpensive scale models of commercially available vehicles with realistic steering systems, the Cambridge researchers fitted the cars with motion capture sensors and a Raspberry Pi, so that the cars could communicate via wifi.</p> <p>They then adapted a lane-changing algorithm for autonomous cars to work with a fleet of cars. The original algorithm decides when a car should change lanes, based on whether it is safe to do so and whether changing lanes would help the car move through traffic more quickly. The adapted algorithm allows cars to be packed more closely when changing lanes, and adds a safety constraint to prevent crashes when speeds are low. A second algorithm allowed each car to detect a projected car in front of it and make space for it.</p>
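<p>In outline, the cooperative reaction to a blocked lane can be captured in a few lines of Python. The sketch below is illustrative only – it is not the researchers’ published code (their platform is linked above), and the message handling, names and thresholds are invented for the example:</p> <pre><code>
# Hypothetical sketch of the cooperative behaviour described above.
# All names and thresholds are invented; the real controllers also handle
# steering, sensing and the low-speed safety constraint mentioned in the text.

from dataclasses import dataclass

@dataclass
class CarState:
    car_id: int
    lane: int        # 0 = inner lane, 1 = outer lane
    position: float  # distance along the track, in metres
    speed: float     # metres per second

def on_stopped_car_broadcast(me, stopped, safe_gap=0.5):
    """React to a 'car stopped' message received over the fleet's shared wifi."""
    gap = abs(me.position - stopped.position)
    if gap > safe_gap:
        return  # far from the blockage: carry on as normal
    if me.lane == stopped.lane:
        request_lane_change(me)              # merge out rather than queue
    else:
        me.speed = max(0.0, me.speed * 0.7)  # ease off so inner-lane cars can pass

def request_lane_change(me):
    # Placeholder: a real planner would first check the safety constraint
    # before steering into the adjacent lane.
    me.lane = 1 - me.lane

# Example: an outer-lane car near a blockage eases off.
car_a = CarState(car_id=1, lane=1, position=2.0, speed=1.5)
blocked = CarState(car_id=2, lane=0, position=2.2, speed=0.0)
on_stopped_car_broadcast(car_a, blocked)
print(car_a.speed)  # 1.05 - slowed to let inner-lane traffic merge past
</code></pre>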
<p>They then tested the fleet in ‘egocentric’ and ‘cooperative’ driving modes, using both normal and aggressive driving behaviours, and observed how the fleet reacted to a stopped car. In the normal mode, cooperative driving improved traffic flow by 35% over egocentric driving, while for aggressive driving the improvement was 45%. The researchers then tested how the fleet reacted to a single car controlled by a human via a joystick.</p> <p>“Our design allows for a wide range of practical, low-cost experiments to be carried out on autonomous cars,” said Prorok. “For autonomous cars to be safely used on real roads, we need to know how they will interact with each other to improve safety and traffic flow.”</p> <p>In future work, the researchers plan to use the fleet to test multi-car systems in more complex scenarios, including roads with more lanes, intersections and a wider range of vehicle types.</p> <p><em><strong>Reference:</strong></em><br /> <em>Nicholas Hyldmar, Yijun He, Amanda Prorok. ‘A Fleet of Miniature Cars for Experiments in Cooperative Driving.’ Paper presented at the <a href="https://ras.papercept.net/conferences/conferences/ICRA19/program/ICRA19_ContentListWeb_1.html#moc1-23_02">International Conference on Robotics and Automation</a> (ICRA 2019). Montréal, Canada.</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A fleet of driverless cars working together to keep traffic moving smoothly can improve overall traffic flow by at least 35 percent, researchers have shown.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For autonomous cars to be safely used on real roads, we need to know how they will interact with each other</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Amanda Prorok</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-148222" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/148222">Can cars talk to each other?</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/e0LIU1Sf6p0?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Sun, 19 May 2019 23:00:45 +0000 sc604 205432 at Artificial intelligence: computer says YES (but is it right?)
/research/features/artificial-intelligence-computer-says-yes-but-is-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/1610202019-by-experienssthierry-ehrmann.jpg?itok=Qk9V5cgv" alt="2019 by ExperiensS" title="2019 by ExperiensS, Credit: Thierry Ehrmann" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.</p>&#13; &#13; <p>Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.</p>&#13; &#13; <p>Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?</p>&#13; &#13; <p>We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.</p>&#13; &#13; <p>Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge’s Department of Engineering.</p>&#13; &#13; <p>Faced with a life-or-death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust its assessment of the volatility of the stock markets is?</p>&#13; &#13; <p>“Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data,” says Ghahramani. “But what is going on inside the ‘black box’? If the processes by which decisions were being made were more transparent, then trust would be less of an issue.”</p>&#13; &#13; <p>His team builds the algorithms that lie at the heart of these technologies (the “invisible bit” as he refers to it). Trust and transparency are important themes in their work: “We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty.
Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty, and then as you have more and more data you have more and more certainty.</p>&#13; &#13; <p>“When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us.”</p>&#13; &#13; <p>One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.</p>&#13; &#13;
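<p>A minimal sketch of what such a self-evaluation stage could look like is shown below. It is purely illustrative – not the group’s actual method – and the entropy threshold and function names are invented for the example: the classifier reports a label when its predictive distribution is sharp, and explicitly flags its uncertainty when it is not.</p> <pre><code>
# Hypothetical self-evaluation stage: report a prediction only when the
# model's own uncertainty (predictive entropy) is low enough.

import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def predict_with_confidence(logits, max_entropy=0.5):
    """Return (label, confidence), or (None, confidence) when too unsure."""
    p = softmax(np.asarray(logits, dtype=float))
    entropy = -np.sum(p * np.log(p + 1e-12))  # 0 = certain, log(K) = clueless
    if entropy > max_entropy:
        return None, float(p.max())           # 'I am unsure' - defer to a human
    return int(np.argmax(p)), float(p.max())

print(predict_with_confidence([4.0, 0.1, 0.2]))  # confident: (0, ~0.96)
print(predict_with_confidence([1.0, 0.9, 1.1]))  # unsure:    (None, ~0.37)
</code></pre>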
<p>Two years ago, Ghahramani’s group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.</p>&#13; &#13; <p>“The difficulty with machine learning systems is you don’t really know what’s going on inside – and the answers they provide are not contextualised, like a human would do. The Automatic Statistician explains what it’s doing, in a human-understandable form.”</p>&#13; &#13; <p>Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of how a decision is made is necessary to trust it.</p>&#13; &#13; <p>Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: “A particular issue with new artificial intelligence (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand.” His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.</p>&#13; &#13; <p>“We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode.” A driverless car, for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.</p>&#13; &#13; <p>Weller’s theme of trust and transparency forms just one of the projects at the newly launched £10 million <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (CFI). Ghahramani, who is Deputy Director of the Centre, explains: “It’s important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society.”</p>&#13; &#13; <p>CFI brings together four of the world’s leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. Together, an interdisciplinary community of researchers will work closely with policy-makers and industry, investigating topics such as the regulation of autonomous weaponry and the implications of AI for democracy.</p>&#13; &#13; <p>Ghahramani describes the excitement felt across the machine learning field: “It’s exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.</p>&#13; &#13; <p>“We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us.”</p>&#13; &#13; <p><em>Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a <a href="https://www.youtube.com/watch?v=_5XvDCjrdXs">speech</a> delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Computers that learn for themselves are with us now. As they become more common in ‘high-stakes’ applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">As we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Zoubin Ghahramani</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/home_of_chaos/4166229638/in/photolist-7ma1Vu-9jXRQ7-3FjPcz-bx8BcX-cs65bN-dPTAqE-48Dezu-nurxVW-mC75rT-dXxh8b-jR9gc-3KwLDC-5akwi9-75MGSi-fEbbTT-f1ab86-6avjFJ-p7gc1-ofut47-rpxmKL-jbSp7-bmUQLy-q131sg-2QnpAH-bxmfEd-PweVq-qbFyNT-4L32qY-pZVBB9-2uinMh-6L3BZn-re23rM-jfvWFG-dXrAKP-9jXM4U-9jXQoh-qa8G7T-rvMSwj-qdMd23-HXVdh-2Q1fQU-8f9zmW-iAqVac-oy72re-9mi7oc-cs5QkS-oMRA8h-C4Lzp4-paUvZM-6i89ys" target="_blank">Thierry Ehrmann</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">2019 by ExperiensS</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>.
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Thu, 20 Oct 2016 14:17:17 +0000 lw355 180122 at Teaching machines to see: new smartphone-based system could accelerate development of driverless cars /research/news/teaching-machines-to-see-new-smartphone-based-system-could-accelerate-development-of-driverless-cars <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/segnet-crop.png?itok=4I4BnufE" alt="SegNet demonstration" title="SegNet demonstration, Credit: Alex Kendall" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Two newly developed systems for driverless cars can identify a user’s location and orientation in places where GPS does not function, and identify the various components of a road scene in real time on a regular camera or smartphone, performing the same job as sensors costing tens of thousands of pounds.</p>&#13; &#13; <p>The separate but complementary systems have been designed by researchers from the University of Cambridge, and demonstrations are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine ‘see’ and accurately identify where it is and what it’s looking at is a vital part of developing autonomous vehicles and robotics.</p>&#13; &#13; <p>The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.</p>&#13; &#13; <p>Users can visit the SegNet <a href="https://arxiv.org/abs/1511.00561/">website</a> and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways.</p>&#13; &#13;
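<p>For readers who want a feel for what SegNet does computationally, the sketch below shows the per-pixel labelling workflow in Python. It is illustrative only: SegNet itself is not bundled with common libraries, so an off-the-shelf torchvision segmentation model (with 21 Pascal VOC classes rather than SegNet’s 12 road-scene categories) stands in for it, and the image filename is a placeholder:</p> <pre><code>
# Illustrative per-pixel labelling, in the spirit of SegNet.
# Uses a stock torchvision FCN model as a stand-in (assumes torchvision 0.13+);
# this is not the SegNet architecture or its trained weights.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("street_scene.jpg").convert("RGB")  # placeholder road photo
batch = preprocess(img).unsqueeze(0)                 # shape (1, 3, H, W)

with torch.no_grad():
    out = model(batch)["out"]     # logits, shape (1, num_classes, H, W)

labels = out.argmax(dim=1)[0]     # per-pixel class index, shape (H, W)
print(labels.unique())            # which classes appear in the scene
</code></pre>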
<p>For the driverless cars currently in development, radar- and LIDAR-based sensors are expensive – in fact, they often cost more than the car itself. In contrast with expensive sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example – it was ‘trained’ by an industrious group of Cambridge undergraduate students, who manually labelled every pixel in each of 5,000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers took two days to ‘train’ the system before it was put into action.</p>&#13; &#13; <p>“It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the Department of Engineering. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”</p>&#13; &#13; <p>SegNet was primarily trained in highway and urban environments, so it still has some learning to do for rural, snowy or desert conditions – although it has performed well in initial tests for these environments.</p>&#13; &#13; <p>The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.</p>&#13; &#13; <p>“Vision is our most powerful sense and driverless cars will also need to see,” said Professor Roberto Cipolla, who led the research. “But teaching a machine to see is far more difficult than it sounds.”</p>&#13; &#13; <p>As children, we learn to recognise objects through example – if we’re shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. But with a machine, it’s not as simple as showing it a single car and then having it be able to recognise all different types of cars. Machines today learn under supervision: sometimes through thousands of labelled examples.</p>&#13; &#13; <p>There are three key technological questions that must be answered to design autonomous vehicles: where am I, what’s around me, and what do I do next? SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.</p>&#13; &#13; <p>The localisation system designed by Kendall and Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.</p>&#13; &#13; <p>It has been tested along a kilometre-long stretch of King’s Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees, which is far more accurate than GPS – a vital consideration for driverless cars. Users can try out the system for themselves <a href="https://www.repository.cam.ac.uk/handle/1810/251342/">here</a>.</p>&#13; &#13; <p>The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.</p>&#13; &#13;
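<p>The published model behind this localisation system is PoseNet, which regresses a camera’s position and orientation directly from a single image (the ICCV paper is linked below). The following is a minimal, hypothetical sketch of that idea – a CNN backbone with a pose-regression head – and not the authors’ released code or trained weights:</p> <pre><code>
# Minimal PoseNet-style localiser sketch: a CNN feature extractor with a
# regression head producing a 3-D position and a unit quaternion orientation.
# Illustrative reconstruction only; untrained, with invented names.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)   # any CNN backbone would do
        backbone.fc = nn.Identity()         # drop the classification head
        self.backbone = backbone
        self.fc_xyz = nn.Linear(512, 3)     # position (x, y, z) in metres
        self.fc_quat = nn.Linear(512, 4)    # orientation quaternion

    def forward(self, img):
        feat = self.backbone(img)
        xyz = self.fc_xyz(feat)
        quat = self.fc_quat(feat)
        quat = quat / quat.norm(dim=1, keepdim=True)  # normalise to unit length
        return xyz, quat

model = PoseRegressor().eval()
frame = torch.randn(1, 3, 224, 224)         # stand-in for a camera frame
with torch.no_grad():
    position, orientation = model(frame)
print(position.shape, orientation.shape)    # (1, 3) and (1, 4)
</code></pre>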
<p>“Work in the field of artificial intelligence and robotics has really taken off in the past few years,” said Kendall. “But what’s cool about our group is that we’ve developed technology that uses deep learning to determine where you are and what’s around you – this is the first time this has been done using deep learning.”</p>&#13; &#13; <p>“In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance,” said Cipolla. “It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”</p>&#13; &#13; <p>The researchers are presenting <a href="https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Kendall_PoseNet_A_Convolutional_ICCV_2015_paper.pdf">details</a> of the two technologies at the International Conference on Computer Vision in Santiago, Chile.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Two technologies which use deep learning techniques to help machines see and recognise their location and surroundings could be used in the development of driverless cars and autonomous robotics – and can run on a regular camera or smartphone.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Vision is our most powerful sense and driverless cars will also need to see, but teaching a machine to see is far more difficult than it sounds.</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Roberto Cipolla</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-96282" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/96282">Teaching machines to see</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-2 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/MxximR-1ln4?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Alex Kendall</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">SegNet demonstration</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>.
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 21 Dec 2015 06:34:09 +0000 sc604 164412 at