# University of Cambridge - speech

## ‘Smart choker’ uses AI to help people with speech impairment to communicate

*Researchers have developed a wearable ‘smart choker’ that uses a combination of flexible electronics and artificial intelligence techniques to allow people with speech impairments to communicate by detecting tiny movements in the throat.*

*Image: Smart Choker. Credit: Luigi Occhipinti*

The smart choker, developed by researchers at the University of Cambridge, incorporates electronic sensors in a soft, stretchable fabric, and is comfortable to wear. The device could be useful for people who have temporary or permanent speech impairments, whether due to laryngeal surgery or conditions such as Parkinson’s, stroke or cerebral palsy.

By incorporating machine learning techniques, the smart choker can also successfully recognise differences in pronunciation, accent and vocabulary between users, reducing the amount of training required.

The choker is a type of technology known as a silent speech interface, which analyses non-vocal signals to decode speech in silent conditions – the user only needs to mouth the words for them to be captured. The captured speech signals can then be transferred to a computer or speaker to facilitate conversation.

Tests of the smart choker showed it could recognise words with over 95% accuracy, while using 90% less computational energy than existing state-of-the-art technologies. The [results](https://www.nature.com/articles/s41528-024-00315-1) are reported in the journal *npj Flexible Electronics*.

“Current solutions for people with speech impairments often fail to capture words and require a lot of training,” said Dr Luigi Occhipinti from the Cambridge Graphene Centre, who led the research. “They are also rigid, bulky and sometimes require invasive surgery to the throat.”

The smart choker developed by Occhipinti and his colleagues outperforms current technologies on accuracy, requires less computing power, is comfortable for users to wear, and can be removed whenever it’s not needed. The choker is made from a sustainable bamboo-based textile, with strain sensors based on graphene ink incorporated in the fabric. When the sensors detect any strain, tiny, controllable cracks form in the graphene. The sensitivity of the sensors is more than four times higher than the existing state of the art.

“These sensors can detect tiny vibrations, such as those formed in the throat when whispering or even silently mouthing words, which makes them ideal for speech detection,” said Occhipinti. “By combining the ultra-high sensitivity of the sensors with highly efficient machine learning, we’ve come up with a device we think could help a lot of people who struggle with their speech.”

Vocal signals are incredibly complex, so associating a specific signal with a specific word requires a high level of computational processing.
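The article says the team paired the ultra-sensitive sensors with a lightweight neural network, but does not give the architecture. As an illustrative sketch only, with channel count, window length and vocabulary size invented for the example rather than taken from the paper, a compact 1D convolutional word classifier over strain traces might look like this:

```python
import torch
import torch.nn as nn

class SilentSpeechNet(nn.Module):
    """Minimal sketch of a lightweight 1D-CNN word classifier.

    Assumptions (not from the paper): 4 strain-sensor channels,
    1,000-sample windows, and a small closed vocabulary of 20 words.
    """

    def __init__(self, n_channels: int = 4, n_words: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),   # aggressive pooling keeps the model tiny
        )
        self.classifier = nn.Linear(32 * 8, n_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) raw strain traces from the choker
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = SilentSpeechNet()
dummy = torch.randn(2, 4, 1000)        # two mouthed-word windows
print(model(dummy).shape)              # torch.Size([2, 20])
```

Keeping the network shallow and pooling early keeps the parameter count and multiply-accumulate budget small, which is the kind of design that makes real-time inference plausible on a battery-powered wearable.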
“On top of that, every person is different in terms of the way they speak, and machine learning gives us the tools we need to learn and adapt the interpretation of signals from person to person,” said Occhipinti.

The researchers trained their machine learning model on a database of the most frequently used words in English, and selected words which are frequently confused with each other, such as ‘book’ and ‘look’. The model was trained with a variety of users, including different genders, native and non-native English speakers, and people with different accents and different speaking speeds.

Thanks to the device’s ability to capture rich dynamic signal characteristics, the researchers found it possible to use lightweight neural network architectures with simplified depth and signal dimensions to extract and enhance the speech information features. This resulted in a machine learning model with high computational and energy efficiency, ideal for integration in battery-operated wearable devices with real-time AI processing capabilities.

“We chose to train the model with lots of different English speakers, so we could show it was capable of learning,” said Occhipinti. “Machine learning has the capability to learn quickly and efficiently from one user to the next, so the retraining process is quick.”

Tests of the smart choker showed it was 95.25% accurate in decoding speech. “I was surprised at just how sensitive the device is,” said Occhipinti. “We couldn’t capture all the signals and complexity of human speech before, but now that we can, it unlocks a whole new set of potential applications.”

Although the choker will have to undergo extensive testing and clinical trials before it is approved for use in patients with speech impairments, the researchers say their smart choker could also be used in other health monitoring applications, or for improving communication in noisy or secure environments.

The research was supported in part by the EU Graphene Flagship and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

**Reference:** Chenyu Tang et al. ‘[Ultrasensitive textile strain sensors redefine wearable silent speech interfaces with high machine learning efficiency](https://www.nature.com/articles/s41528-024-00315-1).’ npj Flexible Electronics (2024).
DOI: 10.1038/s41528-024-00315-1

## AI reduces ‘communication gap’ for nonverbal people by as much as half

*Researchers have used artificial intelligence to reduce the ‘communication gap’ for nonverbal people with motor disabilities who rely on computers to converse with others.*

*Image: Speech bubble. Credit: Volodymyr Hryshchenko on Unsplash*

The team, from the University of Cambridge and the University of Dundee, developed a new context-aware method that reduces this communication gap by eliminating between 50% and 96% of the keystrokes the person has to type to communicate.

The system is specifically tailored for nonverbal people and uses a range of context ‘clues’ – such as the user’s location, the time of day or the identity of the user’s speaking partner – to assist in suggesting the sentences that are most relevant for the user.

Nonverbal people with motor disabilities often use a computer with speech output to communicate with others.
However, even without a physical disability that affects the typing process, these communication aids are too slow and error-prone for meaningful conversation: typical typing rates are between five and 20 words per minute, while a typical speaking rate is in the range of 100 to 140 words per minute.

“This difference in communication rates is referred to as the communication gap,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, the study’s lead author. “The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate.”

The method developed by Kristensson and his colleagues uses artificial intelligence to allow a user to quickly retrieve sentences they have typed in the past. Prior research has shown that people who rely on speech synthesis, just like everyone else, tend to reuse many of the same phrases and sentences in everyday conversation. However, retrieving these phrases and sentences is a time-consuming process for users of existing speech synthesis technologies, further slowing down the flow of conversation.

In the new system, as the person is typing, the system uses information retrieval algorithms to automatically retrieve the most relevant previous sentences based on the text typed and the context of the conversation the person is involved in. Context includes information about the conversation such as the location, the time of day, and automatic identification of the speaking partner’s face. The other speaker is identified using a computer vision algorithm trained to recognise human faces from a front-mounted camera.

The system was developed using design engineering methods typically used for jet engines or medical devices. The researchers first identified the critical functions of the system, such as the word auto-complete function and the sentence retrieval function. After these functions had been identified, the researchers simulated a nonverbal person typing a large set of sentences from a sentence set representative of the type of text a nonverbal person would like to communicate.

This analysis allowed the researchers to understand the best method for retrieving sentences and the impact of a range of parameters on performance, such as the accuracy of word auto-complete and the impact of using many context tags. For example, this analysis revealed that only two reasonably accurate context tags are required to provide the majority of the gain. Word auto-complete provides a positive contribution but is not essential for realising the majority of the gain. The sentences are retrieved using information retrieval algorithms, similar to web search. Context tags are added to the words the user types to form a query.

The study is the first to integrate context-aware information retrieval with speech-generating devices for people with motor disabilities, demonstrating how context-sensitive artificial intelligence can improve their lives.

“This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future,” said Kristensson.
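To make the context-tagged retrieval concrete, here is a minimal sketch. The tag format, the TF-IDF ranking and the example sentences are all assumptions for illustration; the paper’s exact indexing scheme is not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Previously spoken sentences, stored with context tags (assumed format).
history = [
    ("loc:cafe time:morning partner:anna", "I would like a flat white please"),
    ("loc:home time:evening partner:anna", "Can you turn the heating up"),
    ("loc:cafe time:morning partner:staff", "A table by the window please"),
]

# Index context tags alongside sentence words so matching context boosts rank.
docs = [f"{tags} {text}" for tags, text in history]
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(docs)

def retrieve(typed: str, tags: str, k: int = 2):
    """Rank stored sentences against the typed text plus current context tags."""
    query = vectorizer.transform([f"{tags} {typed}"])
    scores = cosine_similarity(query, index)[0]
    best = scores.argsort()[::-1][:k]
    return [(history[i][1], round(float(scores[i]), 2)) for i in best]

# The user has typed only "flat" while at the cafe in the morning:
print(retrieve("flat", "loc:cafe time:morning partner:anna"))
```

Because the context tags enter the query like ordinary terms, a stored sentence that matches both the typed prefix and the current situation outranks one that matches the prefix alone, which is how a couple of reasonably accurate tags can deliver most of the keystroke savings.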
“We’ve shown it’s possible to reduce the opportunity cost of *not* doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantras and processes,” he added.

The research paper was published at CHI 2020. The research was funded by the Engineering and Physical Sciences Research Council.

**Reference:** Kristensson, P.O., Lilley, J., Black, R. and Waller, A. ‘[A design engineering approach for quantitatively exploring context-aware sentence retrieval for nonspeaking individuals with motor disabilities](https://dl.acm.org/doi/10.1145/3313831.3376525).’ In Proceedings of the 38th ACM Conference on Human Factors in Computing Systems (CHI 2020). DOI: 10.1145/3313831.3376525
## Study unearths Britain’s first speech therapists

*On International Stammering Awareness Day (22 October), a new study reveals that Britain’s first speech therapists emerged at least a century earlier than previously thought.*

*Image: Joseph Priestley: theologian, scientist, clergyman and stammerer. Pastel by Ellen Sharples, probably after James Sharples, c.1797. Credit: National Portrait Gallery, London*

Until now, historians had assumed that John Thelwall became Britain’s first speech therapist in the early nineteenth century.*

But Cambridge historian Elizabeth Foyster has discovered that James Ford was advertising his services in London as early as 1703, and that many other speech therapists emerged over the course of the eighteenth century.

Ford’s advert (pictured), published in the *Post Man* newspaper on 23 October 1703, states that “he removes Stammering, and other impediments in Speech”, as well as teaching “Foreigners to pronounce English like Natives”.

Ford had previously worked with the deaf and dumb but realised that there was more money to be made by offering other speech improvement services as a branch of education for wealthy children.

“In the eighteenth century, speaking well was crucial to being accepted in polite society and to succeeding in a profession,” said Foyster. “Speech impediments posed a major obstacle and the stress this caused often made a sufferer’s speech even worse. At the same time, wealthy parents were made to feel guilty and they started spending increasingly large sums to try to ‘cure’ their children.”

By 1703, Ford was based in Newington Green, in the suburbs of London, but twice a week he waited near the city’s Royal Exchange and Temple Bar to secure business from merchants, financiers and lawyers desperate to improve their children’s life chances.

By 1714, some of these families were seeking out the help of Jacob Wane, a therapist who drew on a 33-year personal struggle with the condition. And by the 1760s, several practitioners were competing for business in London.

“We have lost sight of these origins of speech therapy because historians have been looking to identify a profession which had agreed qualifications for entry, an organising body, scientific methods and standards, as we have today,” said Foyster.
“In the eighteenth century, speech therapy was regarded as an art not a science. But with its attention to the individual, and the psychological as well as physiological causes of speech defects, we can see the roots of today’s speech therapy.”

### Art and business

Foyster’s study, published in the journal *Cultural and Social History*, shows that speech specialists emerged in the early eighteenth century as new attention was given to the role of the nerves, emotions and psychological origins of speech impediments.

Prior to this, in the seventeenth century, the main cure on offer had involved painful physical intervention, including the cutting of tongues. But as speech defects came to be understood as resulting from nervous disorders, entrepreneurial therapists stepped in to end the monopoly of the surgeons.

“These men, and some women, made no claim to medical knowledge,” Foyster says. “In fact, some were very keen to emphasise that they were nothing like the surgeons who had caused so much unnecessary pain. They described themselves as ‘Artists’ and their gentler methods were much more attractive to wealthy clients.”

These speech ‘artists’ jealously guarded their trade secrets but gave away some clues to their methods in print. Close attention was paid to the position of the lips, tongue and mouth; clients were given breathing and voice exercises to practise; and practitioners emphasised the importance of speaking slowly so that every sound could be articulated.

By the 1750s, London’s speech therapists had become masters of publicity, publishing books, placing advertisements in newspapers and giving lectures in universities and other venues. In 1752, Samuel Angier achieved the remarkable feat of lecturing to Cambridge academics on four occasions about speech impediments and the ‘art of pronunciation’, despite having never attended university himself.

Foyster has identified several successful speech therapy businesses, some of which were passed down from one generation to the next. Most of these were based in London, but practitioners would often follow their clientele to fashionable resort towns such as Bath and Margate.

In 1761, Charles Angier became the third generation to take over his family’s business; and by the 1780s, he claimed to be able to remove all speech impediments within six to eight months if his pupils were ‘attentive’. By then, he was reported to be charging fifty guineas ‘for the Cure’ at a time when many Londoners were earning less than ten guineas a year.

To be successful, these entrepreneurs had to separate themselves from quackery.
Some heightened their credibility by securing accreditation from respected physicians, while others printed testimonials from satisfied clients beneath their newspaper advertisements.

### Suffering and determination

Foyster’s study also sheds light on the appalling suffering and inspirational determination of stammerers in the eighteenth century, including some well-known figures.

Joseph Priestley (1733-1804), the theologian, scientist and clergyman (pictured), recalled that his worsening stammer made ‘preaching very painful, and took from me all chance of recommending myself to any better place’.

His fellow scientist, Erasmus Darwin, also suffered from a stammer, as did Darwin’s daughter, Violetta, and eldest son, Charles. In 1775, Darwin compiled detailed instructions to help his daughter overcome her stammer, which involved sounding out each letter and practising problematic words for weeks on end.

“It is tempting to think that sympathy for stammering is a very recent phenomenon, but a significant change in attitudes took hold in the eighteenth century,” said Foyster. “While stammerers continued to be mocked and cruelly treated, polite society became increasingly compassionate, especially when someone demonstrated a willingness to seek specialist help.”

**References:**

Elizabeth Foyster, ‘[‘Fear of Giving Offence Makes Me Give the More Offence’: Politeness, Speech and Its Impediments in British Society, c.1660–1800](https://www.tandfonline.com/doi/full/10.1080/14780038.2018.1518565).’ Cultural and Social History (2018). DOI: 10.1080/14780038.2018.1518565

\* Denyse Rockey, ‘[The Logopaedic Thought of John Thelwall, 1764-1834: First British Speech Therapist](https://www.tandfonline.com/doi/abs/10.3109/13682827709011313?tab=permissions&scroll=top).’ British Journal of Disorders of Communication (1977).
DOI: 10.3109/13682827709011313

*Images: James Ford’s advert in the Post Man, 23 October 1703. © The British Library Board. Joseph Priestley, c.1797. © National Portrait Gallery, London*
## Time travelling to the mother tongue

*The sounds of languages that died thousands of years ago have been brought to life again through technology that uses statistics in a revolutionary new way.*

*Image: Spectrogram showing the shape of the sound of a word. Credit: John Aston*

No matter whether you speak English or Urdu, Walloon or Waziri, Portuguese or Persian, the roots of your language are the same. Proto-Indo-European (PIE) is the mother tongue – shared by several hundred contemporary languages, as well as many now extinct, and spoken by people who lived from about 6,000 to 3,500 BC on the steppes to the north of the Caspian Sea.

They left no written texts, and although historical linguists have, since the 19th century, painstakingly reconstructed the language from daughter languages, the question of how it actually sounded was assumed to be permanently out of reach.

Now, researchers at the Universities of Cambridge and Oxford have developed a sound-based method to move back through the family tree of languages that stem from PIE. They can simulate how certain words would have sounded when they were spoken 8,000 years ago.

Remarkably, at the heart of the technology is the statistics of shape.

“Sounds have shape,” explains Professor John Aston, from Cambridge’s Statistical Laboratory. “As a word is uttered it vibrates air, and the shape of this soundwave can be measured and turned into a series of numbers.
Once we have these stats, and the stats of another spoken word, we can start asking how similar they are and what it would take to shift from one to another.”

A word said in a certain language will have a different shape to the same word in another language, or an earlier language. The researchers can shift from one shape to another through a series of small changes in the statistics. “It’s more than an averaging process, it’s a continuum from one sound to the other,” adds Aston, who is funded by the Engineering and Physical Sciences Research Council (EPSRC). “At each stage, we can turn the shape back into sound to hear how the word has changed.”

Rather than reconstructing written forms of ancient words, the researchers triangulate backwards from contemporary and archival audio recordings to regenerate audible spoken forms from earlier points in the evolutionary tree. Using a relatively new field of shape-based mathematics, the researchers take the soundwave and visualise it as a spectrogram – basically an undulating three-dimensional surface that represents the shape of that sound – and then reshape the spectrogram along a trajectory ‘signposted’ by known sounds.

While Aston leads the team of statistician ‘shape-shifters’ in Cambridge, the acoustic-phonetic and linguistic expertise is provided by Professor John Coleman’s group in Oxford.

The researchers are working on the words for numbers, as these have the same meaning in any language. The longest path of development simulated so far goes backwards 8,000 years from [English *one* to its PIE ancestor *oinos*](http://www.phon.ox.ac.uk/jcoleman/one-from-oins.wav), and likewise for other numerals. They have also ‘gone forwards’ from the PIE *penkwe* to the modern Greek *pente*, modern Welsh *pimp* and modern English *five*, as well as simulating change from Modern English to Anglo-Saxon (or vice versa), and from Modern Romance languages back to Latin.

(Other audio demonstrations are available [here](http://www.phon.ox.ac.uk/jcoleman/ancient-sounds-audio.html).)

“We’ve explicitly focused on reproducing sound changes and etymologies that the established analyses already suggest, rather than seeking to overturn them,” says Coleman, whose research was funded by the Arts and Humanities Research Council.

They have discovered words that appear to correctly ‘fall out’ of the continuum. “It’s pleasing, not because it overturns the received wisdom, but because it encourages us that we are getting something right, some of the time at least. And along the way there have also been a few surprises!” The method sometimes follows paths that do not seem to be etymologically correct, demonstrating that the method is scientifically testable and pointing to areas in which refinements are needed.

Remarkably, because the statistics describe the sound of an individual saying the word, the researchers are able to keep the characteristics of pitch and delivery the same.
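To see what shifting along a continuum of sound shapes means in practice, here is a deliberately crude sketch: compute two magnitude spectrograms, step along a straight line between them, and resynthesise audio at each step. The file names, window sizes and Griffin–Lim resynthesis are assumptions for illustration; the published method morphs the spectrograms as registered surfaces rather than averaging raw bins:

```python
import numpy as np
import librosa

# Two recordings of "the same word" in two languages (paths are placeholders).
y_a, sr = librosa.load("word_lang_a.wav", sr=16000)
y_b, _ = librosa.load("word_lang_b.wav", sr=16000)

n_fft, hop = 512, 128
S_a = np.abs(librosa.stft(y_a, n_fft=n_fft, hop_length=hop))
S_b = np.abs(librosa.stft(y_b, n_fft=n_fft, hop_length=hop))

# Crudely equalise durations so the two "surfaces" line up bin-by-bin.
frames = min(S_a.shape[1], S_b.shape[1])
S_a, S_b = S_a[:, :frames], S_b[:, :frames]

# Step along a straight-line path between the two spectrogram surfaces.
for t in np.linspace(0.0, 1.0, 5):
    S_t = (1 - t) * S_a + t * S_b   # real method: morph aligned shapes, not raw bins
    y_t = librosa.griffinlim(S_t, n_fft=n_fft, hop_length=hop)
    # Each intermediate y_t can be played back to "hear the word change".
```

A straight bin-by-bin average is exactly the ‘averaging process’ Aston says the real method goes beyond, but even this toy version conveys how an acoustic continuum between two recorded words can be generated and listened to at every stage.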
They can effectively turn the word spoken by someone in one language into what it would sound like if they were speaking fluently in another.

*Image: Spectrograms showing how the shape of the sound of a word in one language can be morphed into the sound of the same word in another language. Credit: John Aston*

They can also extrapolate into the future, although with caveats, as Coleman describes: “If you just extrapolate linearly, you’ll reach a point at which the sound change hits the limit of what is a humanly reasonable sound. This has happened in some languages in the past with certain vowel sounds. But if you asked me what English will sound like in 300 years, my educated guess is that it will be hardly any different from today!”

For the team, the excitement of the research includes unearthing some gems of archival recordings of various languages that had been given up for dead, including an Old Prussian word last spoken in the early 1700s but ‘borrowed’ into Low Prussian and discovered in a German audio archive.

Their work has applications in automatic translation and film dubbing, as well as medical imaging (see panel), but the principal aim is for the technology to be used alongside traditional methods used by historical linguists to understand the process of language change over thousands of years.

“From my point of view, it’s amazing that we can turn exciting yet highly abstract statistical theory into something that really helps explain the roots of modern language,” says Aston.

“Now that we’ve developed many of the necessary technical methods for realising the extraordinary ambition of hearing ancient sounds once more,” adds Coleman, “these early successes are opening up a wide range of new questions, one of the central ones being how far back in time can we really go?”

Audio demonstrations are available here: [www.phon.ox.ac.uk/jcoleman/ancient-sounds-audio.html](http://www.phon.ox.ac.uk/jcoleman/ancient-sounds-audio.html)
even">Medical imaging reshaped</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><strong> ֱ̽statistics of shape are not just being used to show how different languages relate to each – they are also being used to improve the analysis of medical images.</strong></p> <p>Just as soundwaves have a shape that can be analysed using statistics, so do the patterns of neurons interacting with each other or the dimensions of the surface of a tumour. Now a new research Centre will develop tools that use the mathematics of the shapes found in medical images to improve diagnosis, prognosis and treatment planning for patients.</p> <p> ֱ̽<a href="http://www.damtp.cam.ac.uk/user/cbs31/CMiH/Welcome.html">EPSRC Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging</a>, one of five ‘maths’ centres recently funded by £10 million from EPSRC, is co-led by Aston and Dr Carola-Bibiane Schönlieb from the Department of Applied Mathematics and Theoretical Physics in Cambridge.</p> <p>“ ֱ̽new methodologies will allow clinical medicine to move beyond one person reading single scans, to automated systems capable of analysing populations of images,” explains Schönlieb. “As a result, clinicians will have far greater scope to ask complex questions of the medical image.”</p> <p>It’s already possible to extract statistical information from an image of a patient’s thigh bone, turn the data into a template for comparison with those from other people in the population, and then ask whether a particular shape of bone is more prone to being broken than others in the elderly.</p> <p>Most organ scans split the image into many elements, which are then analysed voxel by voxel. “But complex structures like the heart and the brain should be analysed holistically,” explains Dr James Rudd, from the Department of Medicine, who leads the clinical interaction with the Centre. “ ֱ̽tools we are developing will enable the analysis of organs like the brain as single objects with millions of connections.”</p> <p> ֱ̽Centre brings together researchers and clinicians from applied and pure maths, engineering, physics, biology, oncology, clinical neuroscience and cardiology, and involves industrial partners Siemens, AstraZeneca, Microsoft, GSK and Cambridge Computed Imaging.</p> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
Related links: [Ancient Sounds project](http://www.phon.ox.ac.uk/jcoleman/ancient-sounds-home.html) | [EPSRC Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging](http://www.damtp.cam.ac.uk/user/cbs31/CMiH/Welcome.html)

## Tuning into the melody of speech

*In a groundbreaking new study, Cambridge researchers have mapped out the neurobiological basis of a key aspect of human communication: intonation.*

*Image: Areas highlighted in red on the right and left brain hemispheres show the frontal and temporal brain networks involved in the processing of linguistic information in intonation. Credit: Emmanuel Stamatakis*

If you were to read out loud the words “I’m absolutely delighted that Kate blamed Paul and Tessa Arnold” in a flat voice, with no rises or falls and placing equal weight on each syllable, you would quickly demonstrate the fundamental importance of intonation in human communication. Is Kate blaming Paul, while Tessa blames Arnold? Or is Kate blaming the Arnolds: Paul and Tessa? It would also be difficult to tell whether the speaker really is delighted, or whether they are being sarcastic. You would have suppressed a natural tendency to vary how high or low your voice is (pitch), to stress particular syllables, to hesitate where you would expect commas (rhythm), and to convey emphasis by varying volume. All of these elements constitute intonation.

Dr Brechtje Post of the Phonetics Laboratory in the Department of Theoretical and Applied Linguistics describes intonation as “the melody of language”. “It signals,” she explained, “how the speech stream is structured and what category of statement you are making. The word *now*, for example, can signify a question or an answer depending on intonation.”

However, intonation also signals how we feel. Different intonation patterns for *now* can also express emotions such as triumph or frustration. “We call this function ‘paralinguistic’,” said Post. “It is thought to result from our primate inheritance, reflecting biologically driven codes that are now exploited to express attitudes and emotions universally across the languages of the world. It is distinct from the linguistic use of intonation, which is language specific.”

Since the linguistic meaning and the emotions of the speaker are conveyed by the same acoustic signals – mainly pitch – linguists have struggled to disentangle the relationship between them.
“Linguists have long theorised that linguistic and paralinguistic information are crucially different, but evidence has been elusive,” said Post. “This suggests that they would have to be processed differently in the brain, but this had not been shown either – until now.”

With funding from the Economic and Social Research Council, Post and her co-investigator, neuroscientist Dr Emmanuel Stamatakis, conducted a four-year study combining experimental tasks with the latest MRI brain-scanning techniques. Native English-speaking participants within a specific age cohort were scanned while hearing test words and giving a yes/no response to either a linguistic question (‘Does this sound like a statement?’) or a paralinguistic question (‘Does this sound surprised?’). Distinct areas of their brains activated according to whether they were processing linguistic or paralinguistic meaning.

The researchers did indeed find that different frontal and temporal brain networks in both hemispheres contribute in different ways to the processing of intonational information.

“The network which is engaged in the linguistic interpretation of intonation is the same as that which supports abstraction and categorisation for other types of linguistic information, such as recognising consonants and vowels,” said Stamatakis. “We did not, however, expect the degree of overlap between these networks, or that processing paralinguistic information involves a much more limited network.”

These findings confirm that the neural processing of linguistic information in intonation is distinct from that of emotional or attitudinal information. This insight will aid the understanding of speech and comprehension deficits following, for example, stroke, and may have potential applications in speech therapy. As for the implications for understanding intonation, the findings show that it is not merely a side effect of biological imperatives related to animal communication (for example, a high squeaky sound being associated with danger), but that at least some of it is integral to the structure of human language.
## Cambridge Language Sciences launched

*The University launched its new Strategic Initiative in Language Sciences at a special one-day conference at Newnham College on 12 May, attended by over 90 delegates.*

*Image: Launch event. Credit: Nigel Luckhurst*

Around 20 different departments and affiliated research bodies were represented at the event, which was designed to promote awareness of the Language Sciences Initiative and to encourage the active participation of the research community in setting the agenda.

After a welcome given by Dr Henriëtte Hendriks of the Department of Theoretical and Applied Linguistics, Professor Lynn Gladden, Pro-Vice-Chancellor for Research, and Professor Simon Franklin, Head of the School of Arts and Humanities, a packed day included presentations organised around each of the Initiative’s five current interdisciplinary research themes: Language Communication and Comprehension, Language Learning across the Lifespan, Human Language Technologies, Cambridge English, and Language Change and Diversity.

Presentations from the conference will be posted on the new [language sciences website](https://www.languagesciences.cam.ac.uk/) later this week. The site also has a Research Directory and more information about the Initiative and its research themes.

Jane Walsh, Coordinator for the Language Sciences Initiative, said: “The excitement generated by the conference is very encouraging, but the big challenge will be to build on that initial enthusiasm. I’m in the process of collating all the feedback we received, which is very helpful. One of the ideas which we would like to act on as a priority is to work with graduate students to plan some networking events for early career researchers at the start of the next academic year.
If anyone has any ideas or suggestions about this, or anything else to do with the Initiative, they can contact me via the website.”

## The communicative brain

*What is it about the human brain that makes language possible? Two evolutionary systems working together, say neuroscientists Professor William Marslen-Wilson and Professor Lorraine Tyler.*

*Image: Functional neuroimaging of the human brain. Credit: William Marslen-Wilson and Lorraine Tyler*

The ability to communicate using language is fundamental to the distinctive and remarkable success of the modern human.
It is this capacity that separates us most decisively from our primate cousins, despite all that we have in common across species as intelligent social primates.

A major challenge for the cognitive neurosciences is to understand this relationship: what is the neurobiological context in which human language and communication have emerged, and what are the special human properties that make language itself possible?

For the past 150 years, scientific thinking about this relationship has been dominated by the concept of a single, central language system built around the brain’s left hemisphere. Pioneering 19th-century neurologists Paul Broca and Carl Wernicke noticed that patients with left-hemisphere brain damage had difficulties with language comprehension and language production. Two areas of the left frontal and temporal lobes, Broca’s area and Wernicke’s area, and the bundle of nerve fibres connecting them, were identified as critical for speaking and understanding language.

Recent research in our laboratories suggests major limitations to this classic approach to language and the brain. The Broca–Wernicke concept captures one important aspect of the neural language system – the key role of the left-hemisphere network – but it obscures another, equally important one. This is the role of bi-hemispheric systems and processes, whereby both left and right hemispheres work together to provide the fundamental underpinnings for human communicative processes.

A more fruitful approach to human language and communication will require a dual neurobiological framework in which these capacities are supported by two intersecting but evolutionarily and functionally distinguishable subsystems. The historical failure to make this separation has, we suggest, severely undermined scientific attempts to understand language, both as a neurocognitive phenomenon in the modern human and in terms of its evolutionary and neurobiological context.

### Dual systems

A strong evolutionary continuity between humans and our primate relatives is provided by a distributed, bi-hemispheric set of capacities that support the dynamic interpretation of visual and auditory signals in the service of social communication. These capacities have been the object of intensive study in monkeys and apes, and there is good evidence that their basic architecture underpins related communicative functions in the human.

In the context of human language comprehension, the bi-hemispheric systems support the ability not only to identify the words a speaker is producing – typically by integrating auditory and visual cues in face-to-face interaction – but also to make sense of these word meanings in the general context of the listener’s knowledge of the world and of the specific context of speaking.

Where we see divergence between humans and other primates is in the domain of grammatical (or syntactic) function. Primate communication systems are not remotely comparable to human language in their expressive capacities. Human language is much more than a set of signs that stand for things.
It constitutes a powerful and flexible set of grammatical devices for organising the flow of linguistic information and its interpretation, allowing us to represent and combine abstract linguistic elements which convey not only meaning but also the subtle structural cues that indicate how these elements are linked together.

It is the fronto-temporal network of regions in the left hemisphere that mediates these core grammatical functions in humans. This network differs neuroanatomically from those of the brains of other primates, showing substantial increases in size, complexity and connectivity.

Although it is not yet understood just how these evolutionary changes in the left hemisphere provide the neural substrate on which grammatical functions depend, it is clear that they are essential. When the left-hemisphere system is damaged, the parallel right-hemisphere regions cannot take over these functions, even when damage is sustained early in childhood.

Critically, however, the left-hemisphere system that has emerged in humans neither replaces nor displaces the bi-hemispheric system for social communication and action found in both humans and other primates. It interacts and combines with it to create a co-ordinated process of linguistically guided communication and social interaction.

### Functional separability

The most direct evidence for a dual-system approach is the ability to separate these systems in the modern human. Using a combination of behavioural and neuroimaging techniques, we have been able to demonstrate this both in patients with left-hemisphere brain damage and in unimpaired young adults.

In the research with patients (conducted with Dr Paul Wright in the Department of Experimental Psychology and Dr Emmanuel Stamatakis in the Division of Anaesthesia) we focus on the comprehension of spoken words and spoken sentences. In initial testing, patients perform classic measures of syntactic function, in which they match different spoken sentences to sets of pictures. Shown three pictures – a woman pushing a girl, a girl pushing a woman and a woman teaching a girl – patients will correctly match the sentence ‘The woman pushed the girl’ to the first picture but will incorrectly match the passive sentence ‘The woman is being pushed by the girl’ to the same picture. The second sentence requires the use of syntactic cues to extract the right meaning – just using the order of words is not sufficient.

These behavioural tests of syntactic impairment are linked, in the same patients, to their performance in the neuroimaging laboratory, where they hear sentences that vary in their syntactic demands, and where the precise extent of the injury to their brains can be mapped out. When we put these different sources of information together, we see that damage to the left-hemisphere system progressively impairs the syntactic aspects of language processing – the more damage, the worse the performance.

Critically, however, the amount of left-hemisphere damage, and the extent to which it involves the key fronto-temporal circuit, does not affect the patients’ ability to identify the words being spoken or to understand the messages being communicated – so long as syntactic cues are not required to do so.
These capacities are supported bi-hemispherically, and can remain relatively intact even in the face of massive left hemisphere damage.

In work carried out with Dr Mirjana Bozic, then based at the Medical Research Council (MRC) Cognition and Brain Sciences Unit in Cambridge, we have been able to delineate these systems in the undamaged brain, using functional neuroimaging to tease out the different processing regions that are engaged by speech inputs with different properties.

Listeners hear either words that are specifically linguistically complex (words like ‘played’, which carry the grammatical inflection ‘ed’), or words that make more general demands on the language processing system (words like ‘ramp’, which have another word, ‘ram’, embedded in them). Using an analysis technique that identifies the separate dimensions of the brain’s response to these sets of words, we see that the linguistically complex words activate a response component that is restricted to the left fronto-temporal region. By contrast, words that are perceptually complex, due to increased competition between the whole word and the embedded word, activate a strongly bi-hemispheric set of regions, partially overlapping with the linguistic component. Even in the intact brain, therefore, we can see the dynamic allocation of processing resources across the two systems, as a function of their joint roles in the communicative process.
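To make the contrast concrete, the snippet below is a deliberately toy sketch, not the analysis pipeline used in the study: the region labels, activation values and the simple laterality index are all invented for illustration. It simulates region-level responses to the two word types and shows how a left-lateralised pattern can be distinguished from a bi-hemispheric one.

```python
# Toy illustration only: invented numbers, NOT data or code from the study.
# It mimics the logic of the comparison described above: responses to
# grammatically complex words ('played') come out left-lateralised, while
# responses to perceptually complex words ('ramp'/'ram') engage both
# hemispheres.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical regions of interest: two left, two right fronto-temporal areas.
regions = ["L_frontal", "L_temporal", "R_frontal", "R_temporal"]

# Assumed mean activation profiles (arbitrary units) for the two word types.
mean_inflected = np.array([1.0, 0.9, 0.1, 0.1])  # 'played': left-lateralised
mean_embedded = np.array([0.8, 0.7, 0.7, 0.6])   # 'ramp': bi-hemispheric

def simulate(mean_profile, n_trials=100, noise=0.1):
    """Simulate noisy trial-by-trial activation patterns around a mean profile."""
    return mean_profile + noise * rng.normal(size=(n_trials, len(mean_profile)))

def laterality_index(patterns):
    """(L - R) / (L + R) of mean activation: +1 is fully left-lateralised, 0 is bilateral."""
    m = patterns.mean(axis=0)
    left, right = m[:2].sum(), m[2:].sum()
    return (left - right) / (left + right)

print("inflected ('played'):", round(laterality_index(simulate(mean_inflected)), 2))
print("embedded ('ramp'):", round(laterality_index(simulate(mean_embedded)), 2))
```

Run as-is, the first index comes out at roughly 0.8 (strongly left-lateralised) and the second near 0.07 (broadly bilateral), echoing the qualitative pattern described above; the real analysis, of course, operated on functional imaging data rather than simulated profiles.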
Implications

A dual systems account of the ‘communicative brain’ is likely to have important and illuminating consequences for the sciences of language and its disorders.

In the context of left hemisphere brain damage, we can better appreciate – and build upon for rehabilitation – the substantial bi-hemispheric communicative capacities the patient may still possess. In first- and second-language acquisition, we can better understand the learning trajectories that lead to language proficiency in terms of the relative contributions of these two aspects of communicative function.

The approach also provides a new perspective on the variation between languages, where different languages may load more or less heavily on the different computational resources made available by the two systems. Most importantly, it enables us to clarify and focus the core issues for a neurobiological account of language and communication, a scientific domain clouded by ideology and inconsistency.

What is it about the human brain that makes language possible? Two evolutionary systems working together, say neuroscientists Professor William Marslen-Wilson and Professor Lorraine Tyler.

Image: Functional neuroimaging of the human brain. Credit: William Marslen-Wilson and Lorraine Tyler.

The politics of speechmaking

Modern politicians are too stuck in a 24/7 media bubble to make the kind of grand speeches associated with past leaders, a debate on political rhetoric at the Cambridge Festival of Ideas heard last week.

Image: The Politics of Speechmaking at the Festival of Ideas. Credit: Microphone – NFSA Australia via Flickr CC.

Phil Collins, Tony Blair’s speechwriter until 2007, said that because the media picked up on any dissent, politicians had to be wary of how their words might be interpreted.

“The incentive to be dull is very serious,” said Collins.

He added that great speeches were rare nowadays, partly because the writing was poor and drew on a lot of jargon, particularly from business.

The pace of political life was much faster too, which meant modern politicians gave far more speeches than ever before, most of them instantly forgettable.

Collins added that mass education had also had an impact, with politicians now aiming speeches at a mass audience rather than an elite. This meant they could not make literary references, and their language was narrower.

There were also fewer great injustices to rectify, thanks to medical and other advances, and those that remained, such as the financial crisis, were complex. The focus was also more on pragmatic issues than on the big ideologies, such as capitalism and socialism, which were more worthy of grand styles of speech.

He denied that the focus on soundbites was a factor. In fact, having a good argument was central, and this could be encapsulated in a soundbite. He urged speechwriters to start from the soundbite, which was really a summary of their argument.
“If you don’t know from a sentence what you are trying to say, you don’t know at all,” he said.

Speechmakers also needed to fit their language to the occasion and the audience. Churchill’s speeches were not as successful early in his career because he was talking about things “that did not warrant that degree of poetry”. “You have to get the language in the right register,” said Collins.

Author Piers Brendon, a former Keeper of the Churchill Archives Centre, told the packed audience at Churchill College’s Wolfson Theatre that Churchill was an old-fashioned speaker who worked hard on his words and had studied, and learnt by heart, the great speeches of the past. Indeed, his most famous 1940 speech – “Never in the field of human conflict...” – had been gestating since 1899, and he had tried out phrases from it five times beforehand.

He said the cadence of Churchill’s words was like that of blank verse. “His speeches were old-fashioned, ornate, musical performances full of outdated terms,” he said. They were also pieces of cunningly fashioned propaganda, but propaganda, he added, was only effective if it reflected what people thought. Churchill was “booted out” after the War because he was out of tune with the post-war world. “Speechifying is not good if it is not in tune with the times,” he said.

David Runciman, reader in political thought at the University of Cambridge, said politicians nowadays were anxious to come across as “real people” because of the growing distrust of politicians and spin, but their attempts to seem real were often clumsy and didn’t work.

He highlighted three successful recent speeches that were game changers and did have a ring of authenticity. First was Obama’s 2004 speech to the Democratic Convention, in which he used his personal narrative to make wider points about the story of the US and harked back to the great presidential speeches of the past.

Another was David Cameron’s 2005 speech to the Conservative Party conference, which overnight turned him into the frontrunner for the Party leadership. Unlike Obama’s speech, it was not full of historical resonance, and it is mainly remembered because he spoke without notes. But Cameron looked “comfortable in his skin”, unlike his competitor David Davis, and this made him seem more authentic, even though Davis had the more interesting personal story.

The last speech he highlighted was George Osborne’s 2007 speech to the Conservative Party conference on inheritance tax. It was a “boring speech”, said Runciman, but the audience’s response was key.

They gave a “bark of enthusiasm and approval” which surprised even Osborne. “It put the fear of God into Gordon Brown,” said Runciman.

Michael White, the Guardian’s assistant editor, has sat through many a political speech, and he reeled off his impressions of the best.

Thatcher was “not eloquent”, he said, but was “a force of nature” who “beat you into submission”. Blair was good at talking to both audiences at a party conference – the people in the hall and the people at home. Clinton was good on empathy. As an actor delivering lines written by a good writer, Reagan did well. Both Bushes were “awful”. Kinnock was good in the right circumstances, but a bit verbose.

Jesse Jackson was the most memorable speaker he had heard. Obama was good at high politics, but not so good at “the arm-twisting, fixing low politics”.
He also highlighted David Cameron’s recent speech to the Conservative Party conference, calling it “an attempt at Churchillian optimism” that tried to rally the country to face the economic troubles ahead. “He deserves praise for that,” said White.

The event was chaired by Allen Packwood, the current Keeper of the Churchill Archives Centre, which put on an exhibition of past political speeches to accompany the debate.