University of Cambridge – Per Ola Kristensson

Machine learning gives users 'superhuman' ability to open and control tools in virtual reality (/research/news/machine-learning-gives-users-superhuman-ability-to-open-and-control-tools-in-virtual-reality)

Image: Modelling a sailboat in virtual reality. Credit: University of Cambridge

The researchers, from the University of Cambridge, used machine learning to develop 'HotGestures' – analogous to the hot keys used in many desktop applications.

HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.

The idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a 'superhuman' ability has been made possible. The results are reported in the journal IEEE Transactions on Visualization and Computer Graphics.

Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. "Users gain some qualities when using VR, but very few people want to use it for an extended period of time," said Professor Per Ola Kristensson from Cambridge's Department of Engineering, who led the research. "Beyond the visual fatigue and ergonomic issues, VR isn't really offering anything you can't get in the real world."

Most users of desktop software will be familiar with the concept of hot keys – command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts remove the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.

"We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality – something that wouldn't rely on the user having a shortcut in their head already," said Kristensson, who is also co-Director of the Centre for Human-Inspired Artificial Intelligence (https://www.chia.cam.ac.uk/).

Instead of hot keys, Kristensson and his colleagues developed 'HotGestures', where users perform a gesture with their hand to open and control the tool they need in 3D virtual reality environments.

For example, performing a cutting motion opens the scissor tool, and a spraying motion opens the spray can tool. There is no need for the user to open a menu to find the tool they need, or to remember a specific shortcut. Users can seamlessly switch between different tools by performing different gestures during a task, without having to pause their work to browse a menu or to press a button on a controller or keyboard.
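Conceptually, the gesture-to-tool mapping behaves like a small dispatch table: each recognised gesture label triggers the corresponding tool. The Python sketch below is only an illustration of that idea; the handler functions and labels are hypothetical and are not taken from the published HotGestures code.

```python
# Minimal sketch: dispatching a recognised gesture label to a tool action.
# The handler names below are hypothetical placeholders, not the authors' API.

def open_scissor_tool():
    print("Scissor tool activated")

def open_spray_tool():
    print("Spray can tool activated")

# One entry per supported gesture; motion that maps to no entry is ignored,
# so ordinary hand movement does not trigger a command.
GESTURE_TO_TOOL = {
    "cut": open_scissor_tool,
    "spray": open_spray_tool,
}

def on_gesture(label: str) -> None:
    action = GESTURE_TO_TOOL.get(label)
    if action is not None:
        action()

on_gesture("cut")    # opens the scissor tool
on_gesture("wave")   # not a command gesture; nothing happens
```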
"We all communicate using our hands in the real world, so it made sense to extend this form of communication to the virtual world," said Kristensson.

For the study, the researchers built a neural network gesture recognition system that can recognise gestures by performing predictions on an incoming hand-joint data stream. The system was built to recognise ten different gestures associated with building 3D models: pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate and delete.
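A recogniser of this kind can be pictured as a classifier run over a sliding window of recent hand-joint positions, with a 'no gesture' class and a confidence threshold to avoid false activations. The sketch below, in plain Python with NumPy, is a generic illustration under assumed parameters (window length, joint count, a single hidden layer, untrained random weights); it is not the authors' published model.

```python
import numpy as np

GESTURES = ["pen", "cube", "cylinder", "sphere", "palette",
            "spray", "cut", "scale", "duplicate", "delete", "none"]

WINDOW = 30        # assumed number of recent frames fed to the classifier
N_JOINTS = 26      # assumed number of tracked hand joints
FEATURES = WINDOW * N_JOINTS * 3

rng = np.random.default_rng(0)
# Untrained placeholder weights for a single hidden layer; a real system
# would learn these from recorded gesture examples.
W1 = rng.normal(scale=0.01, size=(FEATURES, 64))
W2 = rng.normal(scale=0.01, size=(64, len(GESTURES)))

def predict(window_frames: np.ndarray, threshold: float = 0.8) -> str:
    """Classify a (WINDOW, N_JOINTS, 3) block of hand-joint positions.

    Returns a gesture label, or "none" when the network is not confident,
    which is one way false activations on ordinary movement can be avoided.
    """
    x = window_frames.reshape(-1)
    h = np.tanh(x @ W1)
    logits = h @ W2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    return GESTURES[best] if probs[best] >= threshold else "none"

# Feed the classifier as new frames arrive from the hand tracker.
frame_buffer = np.zeros((WINDOW, N_JOINTS, 3))
new_frame = rng.normal(size=(N_JOINTS, 3))   # stand-in for tracker data
frame_buffer = np.roll(frame_buffer, -1, axis=0)
frame_buffer[-1] = new_frame
print(predict(frame_buffer))
```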
The team carried out two small studies where participants used HotGestures, menu commands or a combination. The gesture-based technique provided fast and effective shortcuts for tool selection and usage. Participants found HotGestures to be distinctive, fast and easy to use, while also complementing conventional menu-based interaction. The researchers designed the system so that there were no false activations – the gesture-based system was able to correctly recognise what was a command and what was normal hand movement. Overall, the gesture-based system was faster than a menu-based system.

"There is no VR system currently available that can do this," said Kristensson. "If using VR is just like using a keyboard and a mouse, then what's the point of using it? It needs to give you almost superhuman powers that you can't get elsewhere."

The researchers have made the source code and dataset publicly available so that designers of VR applications can incorporate it into their products.

"We want this to be a standard way of interacting with VR," said Kristensson. "We've had the tired old metaphor of the filing cabinet for decades. We need new ways of interacting with technology, and we think this is a step in that direction. When done right, VR can be like magic."

The research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Reference:
Zhaomou Song, John J Dudley and Per Ola Kristensson. 'HotGestures: Complementing Command Selection and Use with Delimiter-Free Gesture-Based Shortcuts in Virtual Reality.' IEEE Transactions on Visualization and Computer Graphics (2023). DOI: 10.1109/TVCG.2023.3320257. https://ieeexplore.ieee.org/document/10269004

Summary: Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user's hand.

Quote: "We need new ways of interacting with technology, and we think this is a step in that direction." – Per Ola Kristensson

Video: HotGestures give users 'superhuman' ability to open and control tools in virtual reality (https://www.youtube-nocookie.com/embed/3kNFvhU5ntU)

Published: 8 November 2023


What is the metaverse – and will it help us or harm us? (/stories/metaverse)

Summary: An interconnected world of extended reality is coming that will reshape how we work, play and communicate – and expose us to new levels of risk. What is the metaverse? Will we be safe? How do we make the most of it?
Published: 27 July 2023


Cambridge research centre puts people at the heart of AI (/research/news/cambridge-research-centre-puts-people-at-the-heart-of-ai)

Image: Illustration representing artificial intelligence. Credit: Olemedia (Getty Images)

The Centre for Human-Inspired Artificial Intelligence (CHIA, https://www.chia.cam.ac.uk/) brings together researchers from engineering, mathematics, philosophy and the social sciences – a broad range of disciplines – to investigate how human and machine intelligence can be combined in technologies that best contribute to social and global progress.

Anna Korhonen, Director of CHIA and Professor of Natural Language Processing, said: "We know from history that new technologies can drive changes with both positive and negative consequences, and this will likely be the case for AI. The goal of our new Centre is to put humans at the centre of every stage of AI development – basic research, application, commercialisation and policymaking – to help ensure AI benefits everyone."

Artificial intelligence is a rapidly developing technology predicted to transform much of our society. While AI has the potential to tackle some of the world's most pressing problems in healthcare, education, climate science and economic sustainability, it will need to embrace its human origins to become responsible, transparent and inclusive.

Per Ola Kristensson, Co-Director of CHIA and Professor of Interactive Systems Engineering, said: "For true progress and real-life impact it's critical to nurture a close engagement with industry, policymakers, non-governmental organisations and civil society. Few universities in the world can rival the breadth and depth of Cambridge, making us ideally positioned to make these connections and engage with the communities who face the greatest impact from AI."

Designed to deliver both academic and real-world impact, CHIA seeks partners in academic, industrial, third-sector and other organisations that share an interest in promoting human-inspired AI.

John Suckling, Co-Director of CHIA and Director of Research in Psychiatric Neuroimaging, said: "Our students will be educated in an interdisciplinary environment with access to experts in the technical, ethical, human and industrial aspects of AI. Early-career researchers will be part of all our activities. We are committed to inclusivity and diversity as a way of delivering robust and practical outcomes."

CHIA will educate the next generation of AI creators and leaders, with dedicated graduate training in human-inspired AI.

Professor Mark Girolami from the Department of Engineering said: "As artificial intelligence becomes increasingly pervasive, it's critical to align its development with societal interests. This new University-wide Centre will explore a human-centric approach to the development of AI to ensure beneficial outcomes for society. Cambridge's depth of expertise in AI and a focus on interdisciplinary collaboration make it an ideal home for CHIA."
Apart from research and education, CHIA will also host seminars, public events and international conferences to raise awareness of human-inspired AI. Forums will be convened around topics of ethical or societal concern, with representation from all stakeholders.

Professor Anne Ferguson-Smith, Pro-Vice-Chancellor for Research, said: "If we're to ensure that AI works for everyone and does not widen inequalities, then we need to place people at its heart and consider the societal and ethical implications alongside its development. Cambridge, with its ability to draw on researchers across multiple disciplines, is uniquely positioned to be able to lead in this area."

Neil Lawrence, DeepMind Professor of Machine Learning, added: "Artificial intelligence is provoking new questions in our societies. It's vital that we deliver the answers in a people-centric manner. The Centre for Human-Inspired AI will provide a new interdisciplinary hub that delivers the solutions for these challenges."

Summary: The University of Cambridge today launches a new research centre dedicated to exploring the possibilities of a world shared by both humans and machines with artificial intelligence (AI).

Video: Introducing the Centre for Human-Inspired Artificial Intelligence (https://www.youtube-nocookie.com/embed/jsESFNuYuJM)
Published: 12 July 2022


AI reduces 'communication gap' for nonverbal people by as much as half (/research/news/ai-reduces-communication-gap-for-nonverbal-people-by-as-much-as-half)

Image: Speech bubble. Credit: Photo by Volodymyr Hryshchenko on Unsplash

The team, from the University of Cambridge and the University of Dundee, developed a new context-aware method that reduces this communication gap by eliminating between 50% and 96% of the keystrokes the person has to type to communicate.

The system is specifically tailored for nonverbal people and uses a range of context 'clues' – such as the user's location, the time of day or the identity of the user's speaking partner – to help suggest the sentences that are most relevant to the user.

Nonverbal people with motor disabilities often use a computer with speech output to communicate with others. However, even without a physical disability that affects the typing process, these communication aids are too slow and error-prone for meaningful conversation: typical typing rates are between five and 20 words per minute, while a typical speaking rate is in the range of 100 to 140 words per minute.

"This difference in communication rates is referred to as the communication gap," said Professor Per Ola Kristensson from Cambridge's Department of Engineering, the study's lead author. "The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate."

The method developed by Kristensson and his colleagues uses artificial intelligence to allow a user to quickly retrieve sentences they have typed in the past. Prior research has shown that people who rely on speech synthesis, just like everyone else, tend to reuse many of the same phrases and sentences in everyday conversation. However, retrieving these phrases and sentences is a time-consuming process for users of existing speech synthesis technologies, further slowing down the flow of conversation.

In the new system, as the person is typing, the system uses information retrieval algorithms to automatically retrieve the most relevant previous sentences based on the text typed and the context of the conversation the person is involved in. Context includes information about the conversation such as the location, the time of day, and automatic identification of the speaking partner's face. The other speaker is identified using a computer vision algorithm trained to recognise human faces from a front-mounted camera.
The system was developed using design engineering methods typically used for jet engines or medical devices. The researchers first identified the critical functions of the system, such as the word auto-complete function and the sentence retrieval function. Once these functions had been identified, they simulated a nonverbal person typing a large set of sentences drawn from a sentence set representative of the text a nonverbal person would want to communicate.

This analysis allowed the researchers to identify the best method for retrieving sentences and the impact of a range of parameters on performance, such as the accuracy of word auto-complete and the effect of using many context tags. For example, the analysis revealed that only two reasonably accurate context tags are required to provide the majority of the gain. Word auto-complete provides a positive contribution but is not essential for realising the majority of the gain. The sentences are retrieved using information retrieval algorithms, similar to web search: context tags are added to the words the user types to form a query.
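One way to picture this retrieval step is as a bag-of-words search in which the context tags are simply appended to the query terms. The sketch below is a toy illustration of that idea rather than the system described in the paper: the sentence history, tag vocabulary and overlap-count scoring are all invented for the example.

```python
# Toy sketch of context-aware sentence retrieval: previously typed sentences
# are indexed together with the context in which they were used, and a query
# is formed from the words typed so far plus the current context tags.

HISTORY = [
    ("could I have a coffee please", {"cafe", "morning"}),
    ("good morning how are you", {"home", "morning"}),
    ("please could you open the window", {"home", "afternoon"}),
]

def score(query_terms: set[str], sentence: str, tags: set[str]) -> int:
    # Simple overlap count; a real system would use a proper IR ranking.
    return len(query_terms & set(sentence.split())) + len(query_terms & tags)

def retrieve(typed_text: str, context_tags: set[str], k: int = 2) -> list[str]:
    query = set(typed_text.split()) | context_tags
    ranked = sorted(HISTORY, key=lambda item: score(query, *item), reverse=True)
    return [sentence for sentence, _ in ranked[:k]]

# Typing "could" in a cafe in the morning surfaces the coffee request first.
print(retrieve("could", {"cafe", "morning"}))
```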
The study is the first to integrate context-aware information retrieval with speech-generating devices for people with motor disabilities, demonstrating how context-sensitive artificial intelligence can improve the lives of this group of users.

"This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future," said Kristensson. "We've shown it's possible to reduce the opportunity cost of not doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantras and processes."

The research paper was published at CHI 2020. The research was funded by the Engineering and Physical Sciences Research Council.

Reference:
Kristensson, P.O., Lilley, J., Black, R. and Waller, A. 'A design engineering approach for quantitatively exploring context-aware sentence retrieval for nonspeaking individuals with motor disabilities.' In Proceedings of the 38th ACM Conference on Human Factors in Computing Systems (CHI 2020). DOI: 10.1145/3313831.3376525. https://dl.acm.org/doi/10.1145/3313831.3376525

Summary: Researchers have used artificial intelligence to reduce the 'communication gap' for nonverbal people with motor disabilities who rely on computers to converse with others.

Quote: "This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future." – Per Ola Kristensson

Published: 15 June 2020


What makes a faster typist? (/research/news/what-makes-a-faster-typist)

Image credit: Photo by Cytonn Photography on Unsplash

The data was collected by researchers from Aalto University in Finland and the University of Cambridge. Volunteers from over 200 countries took the typing test, which is freely available online (http://typingmaster.research.netlab.hut.fi/). Participants were asked to transcribe randomised sentences, and their accuracy and speed were assessed by the researchers.
Unsurprisingly, the researchers found that faster typists make fewer mistakes. However, they also found that the fastest typists performed between 40 and 70 percent of their keystrokes using rollover typing, in which the next key is pressed down before the previous key is lifted. The strategy is well known in the gaming community but has not previously been observed in a typing study. The results will be presented later this month at the ACM CHI Conference on Human Factors in Computing Systems in Montréal.

"Crowdsourcing experiments that allow us to analyse how people interact with computers on a large scale are instrumental for identifying solution principles for the design of next-generation user interfaces," said study co-author Dr Per Ola Kristensson from Cambridge's Department of Engineering.

Most of our knowledge of how people type is based on studies from the typewriter era. Now, decades after the typewriter was replaced by computers, people make different types of mistakes. For example, errors where one letter is replaced by another are now more common, whereas in the typewriter era typists often added or omitted characters.

Another difference is that modern users use their hands differently. "Modern keyboards allow us to type keys with different fingers of the same hand with much less force than what was possible with typewriters," said co-author Anna Feit from Aalto University. "This partially explains why self-taught typists using fewer than ten fingers can be as fast as touch typists, which was probably not the case in the typewriter era."

The average user in the study typed 52 words per minute, much slower than the professionally trained typists of the 1970s and 80s, who typically reached 60-90 words per minute. However, performance varied widely. "The fastest users in our study typed 120 words per minute, which is amazing given that this is a controlled study with randomised phrases," said co-author Dr Antti Oulasvirta, also from Aalto. "Many informal tests allow users to practice the sentences, resulting in unrealistically high performance."

The researchers found that users who had previously taken a typing course had similar typing behaviour to those who had never taken such a course – in terms of how fast they type, how they use their hands and the errors they make – even though the self-taught typists used fewer fingers.

The researchers also found that users display different typing styles, characterised by how they use their hands and fingers, their use of rollover, their tapping speeds, and their typing accuracy.

For example, some users could be classified as 'careless typists', who move their fingers quickly but have to correct many mistakes, while others are attentive, error-free typists who gain speed by moving hands and fingers in parallel, pressing the next key before the first one is released.
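Rollover and other timing-based characteristics like these can be computed directly from a log of key press and release times. The sketch below shows one plausible way to do so; the log format and the choice of features are assumptions made for the illustration, not the study's analysis code.

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    key: str
    press: float    # press time in seconds
    release: float  # release time in seconds

def typing_features(log: list[Keystroke]) -> dict:
    """Compute a rollover ratio and mean inter-key interval from a keystroke log.

    A keystroke counts as rollover when it is pressed before the previous
    key has been released.
    """
    rollover = sum(
        1 for prev, cur in zip(log, log[1:]) if cur.press < prev.release
    )
    intervals = [cur.press - prev.press for prev, cur in zip(log, log[1:])]
    return {
        "rollover_ratio": rollover / max(len(log) - 1, 1),
        "mean_interkey_interval": sum(intervals) / max(len(intervals), 1),
    }

# Example: "t" and "h" overlap (rollover), "h" and "e" do not.
log = [
    Keystroke("t", 0.00, 0.12),
    Keystroke("h", 0.08, 0.20),
    Keystroke("e", 0.25, 0.33),
]
print(typing_features(log))
```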
It is now possible to classify users' typing behaviour based on the observed keystroke timings alone, which does not require storing the text that users have typed. Such information can be useful, for example, for spell checkers or for creating new personalised typing training programmes.

"You do not need to change to the touch typing system if you want to type faster," said Feit. "A few simple exercises can help you to improve your own typing technique."

The anonymised dataset is available at the project homepage: http://userinterfaces.aalto.fi/136Mkeystrokes/

Reference:
Dhakal, V., Feit, A., Kristensson, P.O. and Oulasvirta, A. 2018. 'Observations on typing from 136 million keystrokes.' In Proceedings of the 36th ACM Conference on Human Factors in Computing Systems (CHI 2018). ACM Press. http://userinterfaces.aalto.fi/136Mkeystrokes/resources/chi-18-analysis.pdf

Adapted from an Aalto University press release.

Summary: The largest-ever dataset on typing speeds and styles, based on 136 million keystrokes from 168,000 volunteers, finds that the fastest typists not only make fewer errors, but often type the next key before the previous one has been released.

Quote: "Crowdsourcing experiments that allow us to analyse how people interact with computers on a large scale are instrumental for identifying solution principles for the design of next-generation user interfaces." – Per Ola Kristensson

Want to type faster?

- Pay attention to errors, as they are costly to correct. Slow down to avoid them and you will be faster in the long run.
- Learn to type without looking at your fingers; your motor system will automatically pick up very fast 'trills' for frequently occurring letter combinations (such as "the"), which will speed up your typing. Being able to look at the screen while typing also allows you to quickly detect mistakes.
- Practice rollover: use different fingers for successive letter keys instead of moving a single finger from one key to another. Then, when typing a letter with one finger, press the next one with the other finger.
- Take an online typing test (http://typingmaster.research.netlab.hut.fi/) to track performance and identify weaknesses such as high error rates. Make sure that the test requires you to type new sentences so you do not over-practice the same text.
- Dedicate time to practice deliberately. People may forget the good habits and relapse to less efficient ways of typing.

Published: 5 April 2018