University of Cambridge - machine vision
/taxonomy/subjects/machine-vision

Robot 'chef' learns to recreate recipes from watching food videos
/research/news/robot-chef-learns-to-recreate-recipes-from-watching-food-videos

[Image: Robot arm reaching for a piece of broccoli]

The researchers, from the University of Cambridge, programmed their robotic chef with a 'cookbook' of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it.

In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. The results (https://ieeexplore.ieee.org/document/10124218), reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.

Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill.

Human cooks can learn new recipes through observation, whether that's watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.

"We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can - by identifying the ingredients and how they go together in the dish," said Grzegorz Sochacki from Cambridge's Department of Engineering, the paper's first author.

Sochacki, a PhD candidate in Professor Fumiya Iida's Bio-Inspired Robotics Laboratory (https://birlab.org/), and his colleagues devised eight simple salad recipes and filmed themselves making them. They then used a publicly available neural network to train their robot chef. The neural network had already been programmed to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana and orange).

Using computer vision techniques, the robot analysed each frame of video and was able to identify the different objects and features, such as a knife and the ingredients, as well as the human demonstrator's arms, hands and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and each recipe.

By correctly identifying the ingredients and the actions of the human chef, the robot could determine which of the recipes was being prepared.
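The article doesn't spell out the vector operations, but the general idea - count the detected ingredients and actions into vectors, then pick the cookbook entry whose vector is most similar - can be sketched as follows. This is a minimal illustration under assumed representations, not the paper's implementation; the vocabulary, threshold and function names are all hypothetical:

```python
import numpy as np

# Hypothetical shared vocabulary of detectable ingredients and actions.
VOCAB = ["broccoli", "carrot", "apple", "banana", "orange", "chop", "mix"]

def to_vector(observations: list[str]) -> np.ndarray:
    """Convert a list of detected ingredients/actions into a count vector."""
    return np.array([observations.count(term) for term in VOCAB], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recognise(demo: list[str], cookbook: dict[str, list[str]],
              new_recipe_threshold: float = 0.9) -> str:
    """Return the best-matching recipe, or flag a new one if nothing is close."""
    demo_vec = to_vector(demo)
    scores = {name: cosine_similarity(demo_vec, to_vector(steps))
              for name, steps in cookbook.items()}
    best = max(scores, key=scores.get)
    # A double portion scales the count vector but not its direction, so a
    # cosine-style measure treats it as the same recipe, not a new one.
    return best if scores[best] >= new_recipe_threshold else "new recipe"
```

A scale-invariant measure such as cosine similarity would match the behaviour described below: a double portion changes the counts but not their proportions, so it registers as a variation rather than a new recipe.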
The robot could infer that if the human demonstrator was holding a knife in one hand and a carrot in the other, the carrot would then get chopped up.

Of the 16 videos it watched, the robot recognised the correct recipe 93% of the time, even though it only detected 83% of the human chef's actions. The robot was also able to detect that slight variations in a recipe, such as making a double portion or normal human error, were variations and not a new recipe. The robot also correctly recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.

"It's amazing how much nuance the robot was able to detect," said Sochacki. "These recipes aren't complex - they're essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots."

The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects, and quickly move back and forth between the person preparing the food and the dish they're preparing. For example, the robot would struggle to identify a carrot if the human demonstrator had their hand wrapped around it - for the robot to identify the carrot, the human demonstrator had to hold up the carrot so that the robot could see the whole vegetable.

"Our robot isn't interested in the sorts of food videos that go viral on social media - they're simply too hard to follow," said Sochacki. "But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes."

The research was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Reference:
Grzegorz Sochacki et al. 'Recognition of Human Chef's Intentions for Incremental Learning of Cookbook by Robotic Salad Chef' (https://ieeexplore.ieee.org/document/10124218). IEEE Access (2023).
DOI: 10.1109/ACCESS.2023.3276234

Video: Robot 'chef' learns to recreate recipes from watching food videos (https://www.youtube-nocookie.com/embed/nx3k4XA3x4Q)
Published 5 June 2023.

Phone-based measurements provide fast, accurate information about the health of forests
/research/news/phone-based-measurements-provide-fast-accurate-information-about-the-health-of-forests

[Image: Treetops seen from a low angle. Credit: Baac3nes via Getty Images]

The researchers, from the University of Cambridge, developed the algorithm, which gives an accurate measurement of tree diameter, an important measurement used by scientists to monitor forest health and levels of carbon sequestration.

The algorithm uses low-cost, low-resolution LiDAR sensors that are incorporated into many mobile phones, and provides results that are just as accurate, but much faster, than manual measurement techniques. The results (https://www.mdpi.com/2072-4292/15/3/772) are reported in the journal Remote Sensing.

The primary manual measurement used in forest ecology is tree diameter at chest height. These measurements are used to make determinations about the health of trees and the wider forest ecosystem, as well as how much carbon is being sequestered (a generic sketch of that conversion appears below).

While this method is reliable, since the measurements are taken from the ground, tree by tree, it is time-consuming. In addition, human error can lead to variations in measurements.

"When you're trying to figure out how much carbon a forest is sequestering, these ground-based measurements are hugely valuable, but also time-consuming," said first author Amelia Holcomb from Cambridge's Department of Computer Science and Technology (https://www.cst.cam.ac.uk/). "We wanted to know whether we could automate this process."

Some aspects of forest measurement can be carried out using expensive special-purpose LiDAR sensors, but Holcomb and her colleagues wanted to determine whether these measurements could be taken using cheaper, lower-resolution sensors, of the type that are used in some mobile phones for augmented reality applications.

Other researchers have carried out forest measurement studies using this type of sensor; however, that work has focused on highly managed forests where trees are straight, evenly spaced and undergrowth is regularly cleared.
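The article doesn't give the diameter-to-carbon conversion flagged above, but the standard route is an allometric power law: above-ground biomass scales roughly as a power of trunk diameter, and about half of dry biomass is carbon. A minimal sketch with placeholder coefficients (real values are fitted per species and region):

```python
def biomass_from_dbh(dbh_cm: float, a: float = 0.12, b: float = 2.5) -> float:
    """Generic allometric form: above-ground biomass (kg) ~ a * DBH^b.

    The coefficients here are placeholders; in practice they come from
    species- and region-specific allometric tables.
    """
    return a * dbh_cm ** b

def carbon_from_biomass(biomass_kg: float, carbon_fraction: float = 0.5) -> float:
    """Roughly half of dry biomass is carbon (a widely used approximation)."""
    return biomass_kg * carbon_fraction

# Example: a trunk measuring 30 cm in diameter at chest height.
print(round(carbon_from_biomass(biomass_from_dbh(30.0))))  # ~296 kg with these placeholder values
```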
Holcomb and her colleagues wanted to test whether these sensors could return accurate results for non-managed forests quickly, automatically, and in a single image.

"We wanted to develop an algorithm that could be used in more natural forests, and that could deal with things like low-hanging branches, or trees with natural irregularities," said Holcomb.

The researchers designed an algorithm that uses a smartphone LiDAR sensor to estimate trunk diameter automatically from a single image in realistic field conditions (a sketch of the underlying geometry appears at the end of this article). The algorithm was incorporated into a custom-built app for an Android smartphone and is able to return results in near real time.

To develop the algorithm, the researchers first collected their own dataset by measuring trees manually and taking pictures. Using image processing and computer vision techniques, they were able to train the algorithm to differentiate trunks from large branches, determine which direction trees were leaning in, and gather other information that could help it refine its measurements.

The researchers tested the app in three different forests - one each in the UK, US and Canada - in spring, summer and autumn. The app was able to detect 100% of tree trunks and had a mean error rate of 8%, which is comparable to the error rate when measuring by hand. However, the app sped up the process significantly: it was about four and a half times faster than measuring trees manually.

"I was surprised the app works as well as it does," said Holcomb. "Sometimes I like to challenge it with a particularly crowded bit of forest, or a particularly oddly-shaped tree, and I think there's no way it will get it right, but it does."

Since their measurement tool requires no specialised training and uses sensors that are already incorporated into an increasing number of phones, the researchers say that it could be an accurate, low-cost tool for forest measurement, even in complex forest conditions.

The researchers plan to make their app publicly available for Android phones later this spring.
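The paper's pipeline handles trunk segmentation, branch rejection and lean correction; the core geometric step, though, is turning a pixel width plus a depth reading into a physical width under a pinhole camera model. A minimal sketch of that step, with hypothetical inputs and the curvature and lean corrections omitted:

```python
import numpy as np

def trunk_diameter_m(depth_row: np.ndarray, mask_row: np.ndarray,
                     fx: float) -> float:
    """Estimate trunk diameter from one row of a depth image.

    depth_row: per-pixel depth in metres along the row at chest height
    mask_row:  boolean array, True where the pixel belongs to the trunk
    fx:        camera focal length in pixels (from the intrinsics)

    Under the pinhole model, an object spanning n pixels at depth z has
    width ~ n * z / fx. A real pipeline would also correct for the
    trunk's curvature and lean, which this sketch ignores.
    """
    cols = np.flatnonzero(mask_row)
    if cols.size < 2:
        raise ValueError("trunk not visible in this row")
    n_pixels = cols[-1] - cols[0] + 1
    z = float(np.median(depth_row[mask_row]))  # robust depth of the trunk
    return n_pixels * z / fx
```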
The research was supported in part by the David Cheriton Graduate Scholarship, the Canadian National Research Council, and the Harding Distinguished Postgraduate Scholarship.

Reference:
Amelia Holcomb, Linzhe Tong, and Srinivasan Keshav. 'Robust Single-Image Tree Diameter Estimation with Mobile Phones' (https://www.mdpi.com/2072-4292/15/3/772). Remote Sensing (2023). DOI: 10.3390/rs15030772

Published 7 March 2023.

Researchers design AI system to assess pain levels in sheep
/research/news/researchers-design-ai-system-to-assess-pain-levels-in-sheep

[Image: Sheep. Credit: Marwa Mahmoud]

The researchers have developed an AI system which uses five different facial expressions to recognise whether a sheep is in pain, and to estimate the severity of that pain. The results could be used to improve sheep welfare, and the approach could be applied to other types of animals, such as rodents used in animal research, rabbits or horses.

Building on earlier work which teaches computers to recognise emotions and expressions in human faces, the system is able to detect the distinct parts of a sheep's face and compare them with a standardised measurement tool developed by veterinarians for diagnosing pain.
The results (http://www.cl.cam.ac.uk/~pr10/publications/fg17.pdf) will be presented today (1 June) at the 12th IEEE International Conference on Automatic Face and Gesture Recognition in Washington, DC.

Severe pain in sheep is associated with conditions such as foot rot, an extremely painful and contagious condition which causes the foot to rot away, or mastitis, an inflammation of the udder in ewes caused by injury or bacterial infection. Both conditions are common in large flocks, and early detection leads to faster treatment and pain relief; reliable, efficient pain assessment would also help with early diagnosis.

As is common with most animals, facial expressions in sheep can be used to assess pain. In 2016, Dr Krista McLennan, a former postdoctoral researcher at the University of Cambridge who is now a lecturer in animal behaviour at the University of Chester, developed the Sheep Pain Facial Expression Scale (SPFES). The SPFES is a tool to measure pain levels based on the facial expressions of sheep, and has been shown to recognise pain with high accuracy. However, training people to use the tool can be time-consuming, and individual bias can lead to inconsistent scores.

In order to make the process of pain detection more accurate, the Cambridge researchers behind the current study used the SPFES as the basis of an AI system which uses machine learning techniques to estimate pain levels in sheep. Professor Peter Robinson, who led the research, normally focuses on teaching computers to recognise emotions in human faces, but a meeting with Dr McLennan got him interested in exploring whether a similar system could be developed for animals.

"There's been much more study over the years with people," said Robinson, of Cambridge's Computer Laboratory. "But a lot of the earlier work on the faces of animals was actually done by Darwin, who argued that all humans and many animals show emotion through remarkably similar behaviours, so we thought there would likely be crossover between animals and our work in human faces."

According to the SPFES, when a sheep is in pain, five main things happen to its face: the eyes narrow, the cheeks tighten, the ears fold forwards, the lips pull down and back, and the nostrils change from a U shape to a V shape. The SPFES then ranks these characteristics on a scale of one to 10 to measure the severity of the pain.

"The interesting part is that you can see a clear analogy between these actions in the sheep's faces and similar facial actions in humans when they are in pain - there is a similarity in terms of the muscles in their faces and in our faces," said co-author Dr Marwa Mahmoud, a postdoctoral researcher in Robinson's group. "However, it is difficult to 'normalise' a sheep's face in a machine learning model. A sheep's face is totally different in profile than looking straight on, and you can't really tell a sheep how to pose."

[Image: Left: localised facial landmarks; right: normalised sheep face marked with feature bounding boxes.]
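The article doesn't describe the normalisation step itself; one common approach, sketched here, is to warp each detected face so that a few landmarks land on fixed template positions, after which features such as ear posture or nostril shape can be measured from predictable regions. The template coordinates and landmark choice below are hypothetical:

```python
import numpy as np
import cv2

# Canonical positions (pixels) for three stable landmarks in a 128x128
# template: left eye, right eye, nose tip. Values are illustrative only.
TEMPLATE = np.float32([[40, 48], [88, 48], [64, 96]])

def normalise_face(image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Warp a detected sheep face so its landmarks match the template.

    landmarks: 3x2 array of (x, y) positions detected in `image`, in the
    same order as TEMPLATE. After warping, facial regions (eyes, ears,
    nostrils) appear in predictable places, so features for the pain
    classifier can be extracted from fixed windows.
    """
    transform = cv2.getAffineTransform(np.float32(landmarks), TEMPLATE)
    return cv2.warpAffine(image, transform, (128, 128))
```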
To train the model, the Cambridge researchers used a small dataset consisting of approximately 500 photographs of sheep, which had been gathered by veterinarians in the course of providing treatment. Yiting Lu, a Cambridge undergraduate in Engineering and co-author on the paper, trained the model by labelling the different parts of the sheep's faces on each photograph and ranking their pain levels according to the SPFES.

Early tests of the model showed that it was able to estimate pain levels with about 80% accuracy, which means that the system is learning. While the results with still photographs have been promising, making the system more robust will require much larger datasets.

The next plans for the system are to train it to detect and recognise sheep faces from moving images, and to train it to work when a sheep is in profile or not looking directly at the camera. Robinson says that if they are able to train the system well enough, a camera could be positioned at a water trough or another place where sheep congregate, and the system would recognise any sheep that were in pain. The farmer would then be able to retrieve the affected sheep from the field and get it the necessary medical attention.

"I do a lot of walking in the countryside, and after working on this project, I now often find myself stopping to talk to the sheep and make sure they're happy," said Robinson.

Reference:
Yiting Lu, Marwa Mahmoud and Peter Robinson. 'Estimating sheep pain level using facial action unit detection' (http://www.cl.cam.ac.uk/~pr10/publications/fg17.pdf). Paper presented at the IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, 30 May - 3 June 2017. http://www.fg2017.org/
rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 31 May 2017 23:02:29 +0000 sc604 189242 at Teaching machines to see: new smartphone-based system could accelerate development of driverless cars /research/news/teaching-machines-to-see-new-smartphone-based-system-could-accelerate-development-of-driverless-cars <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/segnet-crop.png?itok=4I4BnufE" alt="SegNet demonstration" title="SegNet demonstration, Credit: Alex Kendall" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Two newly-developed systems for driverless cars can identify a user鈥檚 location and orientation in places where GPS does not function, and identify the various components of a road scene in real time on a regular camera or smartphone, performing the same job as sensors costing tens of thousands of pounds.</p>&#13; &#13; <p> 探花直播separate but complementary systems have been designed by researchers from the 探花直播 of Cambridge and demonstrations are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine 鈥榮ee鈥� and accurately identify where it is and what it鈥檚 looking at is a vital part of developing autonomous vehicles and robotics.</p>&#13; &#13; <p> 探花直播first system, called SegNet, can take an image of a street scene it hasn鈥檛 seen before and classify it, sorting objects into 12 different categories 鈥� such as roads, street signs, pedestrians, buildings and cyclists 鈥� in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser or radar based sensors have not been able to reach this level of accuracy while operating in real time.</p>&#13; &#13; <p>Users can visit the SegNet <a href="https://arxiv.org/abs/1511.00561/">website</a> and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. 探花直播system has been successfully tested on both city roads and motorways.</p>&#13; &#13; <p>For the driverless cars currently in development, radar and base sensors are expensive 鈥� in fact, they often cost more than the car itself. In contrast with expensive sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example 鈥� it was 鈥榯rained鈥� by an industrious group of Cambridge undergraduate students, who manually labelled every pixel in each of 5000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers then took two days to 鈥榯rain鈥� the system before it was put into action.</p>&#13; &#13; <p>鈥淚t鈥檚 remarkably good at recognising things in an image, because it鈥檚 had so much practice,鈥� said Alex Kendall, a PhD student in the Department of Engineering. 
"It's remarkably good at recognising things in an image, because it's had so much practice," said Alex Kendall, a PhD student in the Department of Engineering. "However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better."

SegNet was primarily trained in highway and urban environments, so it still has some learning to do for rural, snowy or desert environments - although it has performed well in initial tests for these environments.

The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.

"Vision is our most powerful sense and driverless cars will also need to see," said Professor Roberto Cipolla, who led the research. "But teaching a machine to see is far more difficult than it sounds."

As children, we learn to recognise objects through example - if we're shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. But with a machine, it's not as simple as showing it a single car and then having it recognise all different types of cars. Machines today learn under supervision: sometimes through thousands of labelled examples.

There are three key technological questions that must be answered to design autonomous vehicles: where am I, what's around me, and what do I do next. SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.

The localisation system designed by Kendall and Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.

It has been tested along a kilometre-long stretch of King's Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees, which is far more accurate than GPS - a vital consideration for driverless cars. Users can try out the system for themselves here: https://www.repository.cam.ac.uk/handle/1810/251342/

The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.

"Work in the field of artificial intelligence and robotics has really taken off in the past few years," said Kendall. "But what's cool about our group is that we've developed technology that uses deep learning to determine where you are and what's around you - this is the first time this has been done using deep learning."
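The article doesn't give the pose representation, but systems of this kind typically regress a 3D position plus a unit quaternion for orientation from the single image. A sketch of how the "few metres and a few degrees" accuracy quoted above could be measured against ground truth (the 7-number encoding is an assumption, not confirmed by the article):

```python
import numpy as np

def pose_error(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compare two camera poses, each encoded as a 3-vector position
    (metres) followed by a unit quaternion orientation.

    pred, truth: length-7 arrays [x, y, z, qw, qx, qy, qz].
    Returns (position error in metres, orientation error in degrees).
    """
    pos_err = float(np.linalg.norm(pred[:3] - truth[:3]))
    q1 = pred[3:] / np.linalg.norm(pred[3:])
    q2 = truth[3:] / np.linalg.norm(truth[3:])
    # Angle between two unit quaternions: 2 * acos(|<q1, q2>|)
    ang_err = float(np.degrees(2 * np.arccos(np.clip(abs(q1 @ q2), 0.0, 1.0))))
    return pos_err, ang_err
```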
"In the short term, we're more likely to see this sort of system on a domestic robot - such as a robotic vacuum cleaner, for instance," said Cipolla. "It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics."

The researchers are presenting details of the two technologies (https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Kendall_PoseNet_A_Convolutional_ICCV_2015_paper.pdf) at the International Conference on Computer Vision in Santiago, Chile.

Video: Teaching machines to see (https://www.youtube-nocookie.com/embed/MxximR-1ln4)

Published 21 December 2015.