University of Cambridge - visual processing (/taxonomy/subjects/visual-processing)

Robot trained to read braille at twice the speed of humans
/research/news/robot-trained-to-read-braille-at-twice-the-speed-of-humans

[Image: Robot braille reader. Credit: Parth Potdar]

The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy.

Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results (https://ieeexplore.ieee.org/document/10410896) are reported in the journal IEEE Robotics and Automation Letters.

Human fingertips are remarkably sensitive and help us gather information about the world around us. Our fingertips can detect tiny changes in the texture of a material or help us know how much force to use when grasping an object: for example, picking up an egg without breaking it or a bowling ball without dropping it.

Reproducing that level of sensitivity in a robotic hand, in an energy-efficient way, is a big engineering challenge. In Professor Fumiya Iida's lab (https://birlab.org/) in Cambridge's Department of Engineering, researchers are developing solutions to this and other skills that humans find easy, but robots find difficult.

"The softness of human fingertips is one of the reasons we're able to grip things with the right amount of pressure," said Parth Potdar from Cambridge's Department of Engineering and an undergraduate at Pembroke College, the paper's first author. "For robotics, softness is a useful characteristic, but you also need lots of sensor information, and it's tricky to have both at once, especially when dealing with flexible or deformable surfaces."

Braille is an ideal test for a robot 'fingertip' as reading it requires high sensitivity, since the dots in each representative letter pattern are so close together. The researchers used an off-the-shelf sensor to develop a robotic braille reader that more accurately replicates human reading behaviour.

"There are existing robotic braille readers, but they only read one letter at a time, which is not how humans read," said co-author David Hardman, also from the Department of Engineering. "Existing robotic braille readers work in a static way: they touch one letter pattern, read it, pull up from the surface, move over, lower onto the next letter pattern, and so on. We want something that's more realistic and far more efficient."

The robotic sensor the researchers used has a camera in its 'fingertip', and reads by combining information from the camera with information from its other sensors.
"This is a hard problem for roboticists as there's a lot of image processing that needs to be done to remove motion blur, which is time- and energy-consuming," said Potdar.

The team developed machine learning algorithms so the robotic reader would be able to 'deblur' the images before the sensor attempted to recognise the letters. They trained the algorithm on a set of sharp images of braille with fake blur applied. After the algorithm had learned to deblur the letters, they used a computer vision model to detect and classify each character.

Once the algorithms were incorporated, the researchers tested their reader by sliding it quickly along rows of braille characters. The robotic braille reader could read at 315 words per minute at 87% accuracy, which is twice as fast as, and about as accurate as, a human braille reader.

"Considering that we used fake blur to train the algorithm, it was surprising how accurate it was at reading braille," said Hardman. "We found a nice trade-off between speed and accuracy, which is also the case with human readers."

"Braille reading speed is a great way to measure the dynamic performance of tactile sensing systems, so our findings could be applicable beyond braille, for applications like detecting surface textures or slippage in robotic manipulation," said Potdar.

In future, the researchers are hoping to scale the technology to the size of a humanoid hand or skin. The research was supported in part by the Samsung Global Research Outreach Program.

Reference:
Parth Potdar et al. 'High-Speed Tactile Braille Reading via Biomimetic Sliding Interactions.' IEEE Robotics and Automation Letters (2024). DOI: 10.1109/LRA.2024.3356978 (https://ieeexplore.ieee.org/document/10410896)

Summary: Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers.

Video: Can robots read braille? https://www.youtube-nocookie.com/embed/xqtA2Z668Ic
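The article describes the approach only in outline, but the core idea of training a model to deblur synthetically blurred images of sharp braille, before a separate model classifies the characters, can be illustrated with a short sketch. The following is a minimal, hypothetical example in PyTorch: the uniform horizontal blur kernel, the 32x32 crop size, the tiny network and the random stand-in data are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): train a small CNN to undo
# synthetic horizontal motion blur applied to sharp braille crops.
import torch
import torch.nn as nn
import torch.nn.functional as F


def synthetic_motion_blur(img: torch.Tensor, kernel_size: int = 7) -> torch.Tensor:
    """Apply a uniform horizontal blur to a batch of (N, 1, H, W) images."""
    kernel = torch.full((1, 1, 1, kernel_size), 1.0 / kernel_size)
    return F.conv2d(img, kernel, padding=(0, kernel_size // 2))


class DeblurNet(nn.Module):
    """Tiny convolutional model that maps a blurred crop back to a sharp one."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimiser, sharp_batch):
    """One step: blur the sharp images, then regress the model output back to them."""
    blurred = synthetic_motion_blur(sharp_batch)
    loss = F.mse_loss(model(blurred), sharp_batch)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    model = DeblurNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sharp = torch.rand(8, 1, 32, 32)  # stand-in for real sharp braille crops
    print(train_step(model, opt, sharp))
```

In the pipeline described above, a separate detection and classification model would then run on the deblurred frames; only the deblurring stage is sketched here.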
src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br /> 探花直播text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright 漏 探花直播 of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways 鈥 on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 29 Jan 2024 06:04:52 +0000 sc604 244161 at Just made coffee while chatting to a friend? Time to thank your 鈥榲isuomotor binding鈥 mechanism鈥 /research/news/just-made-coffee-while-chatting-to-a-friend-time-to-thank-your-visuomotor-binding-mechanism <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/news/140314-dunkingacookieintoacupofcoffee.jpg?itok=7c10UJkd" alt="Dunking a cookie into a cup of coffee" title="Dunking a cookie into a cup of coffee, Credit: Jenny Downing" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>We talk about being 鈥榦n automatic鈥 when we鈥檙e describing carrying out a familiar series of actions without being aware of what we鈥檙e doing.</p>&#13; <p>Now researchers have for the first time found evidence that a dedicated information highway or 鈥榲isuomotor binding鈥 mechanism connects what we see with what we do. This mechanism helps us to coordinate our movements in order to carry out all kinds of tasks from dunking a biscuit in your coffee, while maintaining eye contact with someone else, to playing basketball on a crowded court.</p>&#13; <p> 探花直播UCL-led research (published聽yesterday in the journal <a href="https://www.cell.com/current-biology/fulltext/S0960-9822(14)00198-5"><em>Current Biology</em></a>) was a collaboration between Dr Alexandra Reichenbach, of the UCL Institute of Cognitive Neuroscience, and Dr David Franklin, of the Computational and Biological Learning Lab at Cambridge鈥檚 Department of Engineering.</p>&#13; <p>Their research suggests that a specialised mechanism for spatial self-awareness links visual cues with body motion. 探花直播finding could help us understand the feeling of disconnection reported by schizophrenia patients and could also explain why people with even the most advanced prosthetic limbs find it hard to coordinate their movements.</p>&#13; <p>Standard visual processing relies on us being able to ignore distractions and pay attention to objects of interest while filtering out others. 鈥 探花直播study shows that our brains also have separate hard-wired systems to track our own bodies visually even when we are not paying attention to them,鈥 explained Franklin. 
"This allows visual attention to focus on objects in the world around us rather than on our own movements."

The newly discovered mechanism was identified in three experiments carried out on 52 healthy adults. In all three experiments, participants used robotic interfaces to control cursors on two-dimensional displays, where cursor motion was directly linked to hand movement. They were asked to keep their eyes fixed on the centre of the screen, a requirement checked by eye tracking. "The robotic virtual reality system allowed us to instantaneously manipulate visual feedback independently of the physical movement of the body," said Franklin.

In the first experiment, participants controlled two separate cursors, equally close to the centre of the screen, with their right and left hands. Their goal was to guide each cursor to a corresponding target at the top of the screen. Occasionally the cursor or target on each side would jump left or right, requiring participants to take corrective action. Each jump was 'cued' by a flash on one side, but the cue was random and did not always correspond to the side about to change.

Not surprisingly, people reacted faster to cursor jumps when their attention was drawn to the 'correct' side by the cue. However, reactions to cursor jumps were fast regardless of cuing, suggesting that a separate mechanism, independent of attention, is responsible for tracking our movements.

"The first experiment showed us that we react very quickly to changes relating to objects directly under our own control, even when we are not paying attention to them," explained Reichenbach. "This provides strong evidence for a dedicated neural pathway linking motor control to visual information, independently of the standard visual systems that are dependent on attention."

The second experiment was similar to the first but introduced changes in brightness to demonstrate the effect of attention on the visual perception system. In the third experiment, participants were asked to guide one cursor to its target in the presence of up to four dummy targets or cursors, which acted as 'distractors' alongside the real ones. In this experiment, responses to cursor jumps were less affected by distractors than responses to target jumps. Reactions to cursor jumps remained strong with one or two distractors but decreased significantly in the presence of four.

"These results provide further evidence of a dedicated visuomotor binding mechanism that is less prone to distractions than standard visual processing," said Reichenbach. "It looks like the specialised system has a higher tolerance for distractions, but in the end it is affected by them. Exactly why we evolved a separate mechanism remains to be seen, but the need to react rapidly to different visual cues about ourselves and the environment may have been enough to necessitate a separate pathway."
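To make the trial structure described above concrete, here is a small, hypothetical sketch of how such trials could be generated: two sides, an occasional cursor or target jump, and a flash cue that is drawn independently and therefore only sometimes matches the side that changes. The naming, probabilities and data layout are illustrative assumptions, not details from the study.

```python
# Illustrative sketch (not the study's code): generate cued cursor/target-jump trials.
import random
from dataclasses import dataclass


@dataclass
class Trial:
    cued_side: str       # side where the flash appears ('left' or 'right')
    jump_side: str       # side on which the jump actually occurs
    jump_object: str     # what jumps: 'cursor' or 'target'
    jump_direction: str  # direction of the displacement
    cue_valid: bool      # True when the flash happened to match the jumping side


def make_trial(rng: random.Random) -> Trial:
    cued_side = rng.choice(["left", "right"])
    jump_side = rng.choice(["left", "right"])  # drawn independently of the cue
    return Trial(
        cued_side=cued_side,
        jump_side=jump_side,
        jump_object=rng.choice(["cursor", "target"]),
        jump_direction=rng.choice(["left", "right"]),
        cue_valid=(cued_side == jump_side),
    )


if __name__ == "__main__":
    rng = random.Random(0)
    for trial in (make_trial(rng) for _ in range(5)):
        print(trial)
```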
For more information about this story contact Alexandra Buxton, Office of Communications, University of Cambridge, amb206@admin.cam.ac.uk, 01223 761673.

Summary: Experiments have identified a dedicated information highway that combines visual cues with body motion. This mechanism triggers responses to cues before the conscious brain has become aware of them.

Published: 14 March 2014