University of Cambridge - automation /taxonomy/subjects/automation en Robot ‘chef’ learns to recreate recipes from watching food videos /research/news/robot-chef-learns-to-recreate-recipes-from-watching-food-videos <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/untitled-3_1.jpg?itok=RV53FI1P" alt="Robot arm reaching for a piece of broccoli" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The researchers, from the University of Cambridge, programmed their robotic chef with a ‘cookbook’ of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it.</p>&#13; &#13; <p>In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their <a href="https://ieeexplore.ieee.org/document/10124218">results</a>, reported in the journal <em>IEEE Access</em>, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.</p>&#13; &#13; <p>Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill.</p>&#13; &#13; <p>Human cooks can learn new recipes through observation, whether that’s watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.</p>&#13; &#13; <p>“We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can – by identifying the ingredients and how they go together in the dish,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author.</p>&#13; &#13; <p>Sochacki, a PhD candidate in Professor Fumiya Iida’s <a href="https://birlab.org/">Bio-Inspired Robotics Laboratory</a>, and his colleagues devised eight simple salad recipes and filmed themselves making them. They then used a publicly available neural network to train their robot chef. The neural network had already been programmed to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana and orange).</p>&#13; &#13; <p>Using computer vision techniques, the robot analysed each frame of video and was able to identify the different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and each recipe in its cookbook.</p>
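&#13; &#13; <p>The paper defines the exact feature encoding and matching procedure; the sketch below is only a minimal illustration of the general idea of comparing a vector built from a demonstration against stored recipe vectors. The ingredient vocabulary, the counts and the use of cosine similarity are assumptions made for the example, not details taken from the study.</p>&#13; &#13; <pre><code># Illustrative sketch (not the authors' implementation): represent each recipe and
# each observed demonstration as a vector of ingredient counts, then pick the
# closest recipe in the cookbook.
import math

INGREDIENTS = ["broccoli", "carrot", "apple", "banana", "orange"]  # assumed vocabulary

def to_vector(counts):
    """Map an {ingredient: count} dict onto a fixed-length vector over INGREDIENTS."""
    return [float(counts.get(name, 0)) for name in INGREDIENTS]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def closest_recipe(observed_counts, cookbook):
    """Return the name of the cookbook recipe most similar to the observed demonstration."""
    observed = to_vector(observed_counts)
    scores = {name: cosine_similarity(observed, to_vector(recipe))
              for name, recipe in cookbook.items()}
    return max(scores, key=scores.get)

cookbook = {
    "apple and carrot salad": {"apple": 2, "carrot": 2},
    "fruit salad": {"apple": 1, "banana": 1, "orange": 1},
}
# A double portion (three apples, three carrots) still maps to the same recipe,
# because cosine similarity ignores overall scale.
print(closest_recipe({"apple": 3, "carrot": 3}, cookbook))
</code></pre>&#13; &#13; <p>Because cosine similarity ignores overall scale, a doubled portion maps to the same recipe as the original – one simple way the kind of behaviour described below could come about.</p>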
&#13; &#13; <p>By correctly identifying the ingredients and the actions of the human chef, the robot could determine which of the recipes was being prepared. The robot could infer that if the human demonstrator was holding a knife in one hand and a carrot in the other, the carrot would then get chopped up.</p>&#13; &#13; <p>Of the 16 videos it watched, the robot recognised the correct recipe 93% of the time, even though it only detected 83% of the human chef’s actions. The robot was also able to recognise slight variations in a recipe – such as a double portion or ordinary human error – as variations rather than as new recipes. The robot also correctly recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.</p>&#13; &#13; <p>“It’s amazing how much nuance the robot was able to detect,” said Sochacki. “These recipes aren’t complex – they’re essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”</p>&#13; &#13; <p>The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects, and quickly move back and forth between the person preparing the food and the dish they’re preparing. The robot also needs a clear view of each ingredient: it would struggle to identify a carrot if the human demonstrator had their hand wrapped around it, so the demonstrator had to hold up the carrot for the robot to see the whole vegetable.</p>&#13; &#13; <p>“Our robot isn’t interested in the sorts of food videos that go viral on social media – they’re simply too hard to follow,” said Sochacki. “But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”</p>&#13; &#13; <p>The research was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).</p>&#13; &#13; <p><em><strong>Reference:</strong><br />&#13; Grzegorz Sochacki et al. ‘<a href="https://ieeexplore.ieee.org/document/10124218">Recognition of Human Chef’s Intentions for Incremental Learning of Cookbook by Robotic Salad Chef</a>.’ IEEE Access (2023). 
DOI: 10.1109/ACCESS.2023.3276234</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos, and recreate the dish itself.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can – by identifying the ingredients and how they go together in the dish</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Greg Sochacki</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-208991" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/robot-chef-learns-to-recreate-recipes-from-watching-food-videos">Robot ‘chef’ learns to recreate recipes from watching food videos</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/nx3k4XA3x4Q?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img alt="Creative Commons License." src="/sites/www.cam.ac.uk/files/inner-images/cc-by-nc-sa-4-license.png" style="border-width: 0px; width: 88px; height: 31px;" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/social-media/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 05 Jun 2023 01:00:00 +0000 sc604 239811 at Taste of the future: robot chef learns to ‘taste as you go’ /research/news/taste-of-the-future-robot-chef-learns-to-taste-as-you-go <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/chef.jpg?itok=zwU4FEoU" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Working in collaboration with domestic appliances manufacturer Beko, researchers from the ֱ̽ of Cambridge trained their robot chef to assess the saltiness of a dish at different stages of the chewing process, imitating a similar process in humans.</p> <p>Their results could be useful in the development of automated or semi-automated food preparation by helping robots to learn what tastes good and what doesn’t, making them better cooks.</p> <p>When we chew our food, we notice a change in texture and taste. For example, biting into a fresh tomato at the height of summer will release juices, and as we chew, releasing both saliva and digestive enzymes, our perception of the tomato’s flavour will change.</p> <p> ֱ̽robot chef, which has already been trained to make omelettes based on human taster’s feedback, tasted nine different variations of a simple dish of scrambled eggs and tomatoes at three different stages of the chewing process, and produced ‘taste maps’ of the different dishes.</p> <p> ֱ̽researchers found that this ‘taste as you go’ approach significantly improved the robot’s ability to quickly and accurately assess the saltiness of the dish over other electronic tasting technologies, which only test a single homogenised sample. ֱ̽<a href="https://www.frontiersin.org/articles/10.3389/frobt.2022.886074/abstract">results</a> are reported in the journal <em>Frontiers in Robotics &amp; AI</em>.</p> <p> ֱ̽perception of taste is a complex process in humans that has evolved over millions of years: the appearance, smell, texture and temperature of food all affect how we perceive taste; the saliva produced during chewing helps carry chemical compounds in food to taste receptors mostly on the tongue; and the signals from taste receptors are passed to the brain. Once our brains are aware of the flavour, we decide whether we enjoy the food or not.</p> <p>Taste is also highly individual: some people love spicy food, while others have a sweet tooth. 
A good cook, whether amateur or professional, relies on their sense of taste, and can balance the various flavours within a dish to make a well-rounded final product.</p> <p>“Most home cooks will be familiar with the concept of tasting as you go – checking a dish throughout the cooking process to check whether the balance of flavours is right,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author. “If robots are to be used for certain aspects of food preparation, it’s important that they are able to ‘taste’ what they’re cooking.”</p> <p>“When we taste, the process of chewing also provides continuous feedback to our brains,” said co-author Dr Arsen Abdulali, also from the Department of Engineering. “Current methods of electronic testing only take a single snapshot from a homogenised sample, so we wanted to replicate a more realistic process of chewing and tasting in a robotic system, which should result in a tastier end product.”</p> <p> ֱ̽researchers are members of Cambridge’s <a href="https://birlab.org/">Bio-Inspired Robotics Laboratory</a> run by <a href="http://mi.eng.cam.ac.uk/Main/FI224">Professor Fumiya Iida</a> of the Department of Engineering, which focuses on training robots to carry out the so-called last metre problems which humans find easy, but robots find difficult. Cooking is one of these tasks: earlier tests with their robot ‘chef’ have produced a passable omelette using feedback from human tasters.</p> <p>“We needed something cheap, small and fast to add to our robot so it could do the tasting: it needed to be cheap enough to use in a kitchen, small enough for a robot, and fast enough to use while cooking,” said Sochacki.</p> <p>To imitate the human process of chewing and tasting in their robot chef, the researchers attached a conductance probe, which acts as a salinity sensor, to a robot arm. They prepared scrambled eggs and tomatoes, varying the number of tomatoes and the amount of salt in each dish.</p> <p>Using the probe, the robot ‘tasted’ the dishes in a grid-like fashion, returning a reading in just a few seconds.</p> <p>To imitate the change in texture caused by chewing, the team then put the egg mixture in a blender and had the robot test the dish again. ֱ̽different readings at different points of ‘chewing’ produced taste maps of each dish.</p> <p>Their results showed a significant improvement in the ability of robots to assess saltiness over other electronic tasting methods, which are often time-consuming and only provide a single reading.</p> <p>While their technique is a proof of concept, the researchers say that by imitating the human processes of chewing and tasting, robots will eventually be able to produce food that humans will enjoy and could be tweaked according to individual tastes.</p> <p>“When a robot is learning how to cook, like any other cook, it needs indications of how well it did,” said Abdulali. “We want the robots to understand the concept of taste, which will make them better cooks. In our experiment, the robot can ‘see’ the difference in the food as it’s chewed, which improves its ability to taste.”</p> <p>“Beko has a vision to bring robots to the home environment which are safe and easy to use,” said Dr Muhammad W. Chughtai, Senior Scientist at Beko plc. “We believe that the development of robotic chefs will play a major role in busy households and assisted living homes in the future. 
This result is a leap forward in robotic cooking, and by using machine and deep learning algorithms, mastication will help robot chefs adjust taste for different dishes and users.”</p> <p>In future, the researchers are looking to improve the robot chef so it can taste different types of food and improve sensing capabilities so it can taste sweet or oily food, for example.</p> <p> ֱ̽research was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC) Centre of Doctoral Training on Agri-Food Robotics (Agriforwards CDT). EPSRC is part of UK Research and Innovation (UKRI). Fumiya Iida is a Fellow of Corpus Christi College, Cambridge.</p> <p> </p> <p><em><strong>Reference:</strong><br /> Grzegorz Sochacki, Arsen Abdulali, and Fumiya Iida. ‘<a href="https://www.frontiersin.org/articles/10.3389/frobt.2022.886074/abstract">Mastication-Enhanced Taste-Based Classification of Multi-Ingredient Dishes for Robotic Cooking</a>.’ Frontiers in Robotics &amp; AI (2022). DOI: 10.3389/frobt.2022.886074</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A robot ‘chef’ has been trained to taste food at different stages of the chewing process to assess whether it’s sufficiently seasoned.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">If robots are to be used for certain aspects of food preparation, it’s important that they are able to ‘taste’ what they’re cooking</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Grzegorz Sochacki</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-194681" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/taste-of-the-future-robot-chef-learns-to-taste-as-you-go">Taste of the future: robot chef learns to ‘taste as you go’</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-2 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/nZ0xviqzUJg?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 04 May 2022 04:00:00 +0000 sc604 231861 at Machine learning to help develop self-healing robots that ‘feel pain’ /research/news/machine-learning-to-help-develop-self-healing-robots-that-feel-pain <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/sphgreen.jpg?itok=KhUygY6c" alt="Robotic hand made of self-healing material that can heal at room temperature" title="Robotic hand made of self-healing material that can heal at room temperature, Credit: Bram Vanderborght" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p> ֱ̽goal of the €3 million Self-healing soft robot (SHERO) project, funded by the European Commission, is to create a next-generation robot made from self-healing materials (flexible plastics) that can detect damage, take the necessary steps to temporarily heal itself and then resume its work – all without the need for human interaction.</p>&#13; &#13; <p>Led by the ֱ̽ of Brussels (VUB), the research consortium includes the Department of Engineering ( ֱ̽ of Cambridge), École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI), Swiss Federal Laboratories for Materials Science and Technology (Empa), and the Dutch Polymer manufacturer SupraPolix.</p>&#13; &#13; <p>As part of the SHERO project, the Cambridge team, led by <a href="https://www.eng.cam.ac.uk/profiles/fi224">Dr Fumiya Iida</a> from the Department of Engineering are looking at integrating self-healing materials into soft robotic arms.</p>&#13; &#13; <p><a href="https://www.eng.cam.ac.uk/profiles/tg444">Dr Thomas George Thuruthel</a>, also from the Department of Engineering, said self-healing materials could have future applications in modular robotics, educational robotics and evolutionary robotics where a single robot can be 'recycled' to generate a fresh prototype.</p>&#13; &#13; <p>“We will be using machine learning to work on the modelling and integration of these self-healing materials, to include self-healing actuators and sensors, damage detection, localisation and controlled healing,” he said. “ ֱ̽adaptation of models after the loss of sensory data and during the healing process is another area we are looking to address. ֱ̽end goal is to integrate the self-healing sensors and actuators into demonstration platforms in order to perform specific tasks.”</p>&#13; &#13; <p>Professor Bram Vanderborght, from VUB, who is leading the project with scientists from the robotics research centre Brubotics and the polymer research lab FYSC, said: “We are obviously very pleased to be working on the next generation of robots. Over the past few years, we have already taken the first steps in creating self-healing materials for robots. 
With this research we want to continue and, above all, ensure that robots that are used in our working environment are safer, but also more sustainable. Due to the self-repair mechanism of this new kind of robot, complex, costly repairs may be a thing of the past.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers from the ֱ̽ of Cambridge will use self-healing materials and machine learning to develop soft robotics as part of a new collaborative project.</p>&#13; </p></div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-150492" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/150492">Self-healing robots that ‘feel pain’</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-3 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/R7fZbYUFtc8?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0" allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Bram Vanderborght</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Robotic hand made of self-healing material that can heal at room temperature</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 07 Aug 2019 09:11:57 +0000 Anonymous 206972 at One day of paid work a week is all we need to get mental health benefits of employment /stories/employment-dosage <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Latest research finds up to eight hours of paid work a week significantly boosts mental health and life satisfaction. However, researchers found little evidence that any more hours – including a full five-day week – provide further increases in wellbeing. 
</p> </p></div></div></div> Wed, 19 Jun 2019 09:23:35 +0000 fpjl2 206022 at ֱ̽£2 billion vegetable and the agricultural future of the East /stories/the-two-billion-vegetable <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>From crop science to robotics, supply chains to economics, Cambridge ֱ̽ researchers are working with farmers and industry to sustainably increase agricultural productivity and profitability. </p> </p></div></div></div> Fri, 15 Mar 2019 11:00:01 +0000 lw355 204062 at Humans need not apply /research/features/humans-need-not-apply <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/16.humansneednotapply.jpg?itok=LMfjQlXw" alt="" title="Credit: ֱ̽District" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>On googling ‘will a robot take my job?’ I find myself on a BBC webpage that invites me to discover the likelihood that my work will be automated in the next 20 years. I type in ‘editor’. “It’s quite unlikely, 8%” comes back. Quite reassuring – but, coming from a farming family, it’s a sobering moment when I type in ‘farmer’: “It’s fairly likely, 76%”.</p>&#13; &#13; <p> ֱ̽results may well be out of date – such is the swiftness of change in labour market predictions – but the fact that the webpage even exists says something about the focus of many of today’s conversations around the future of work.</p>&#13; &#13; <p>Many of the discussions are driven by stark numbers. According to a scenario suggested recently by consultancy McKinsey, 75–375 million workers (3–14% of the global workforce) will need to switch occupational categories by 2030, and all workers will need to adapt “as their occupations evolve alongside increasingly capable machines”.</p>&#13; &#13; <p>Just recently, online retailer Shop Direct announced the closure of warehouses and a move to automation, putting nearly 2,000 jobs at risk. Automation – or ‘embodied’ artificial intelligence (AI) – is one aspect of the disruptive effects of technology on the labour market. ‘Disembodied AI’, like the algorithms running in our smartphones, is another.</p>&#13; &#13; <p>Dr Stella Pachidi from Cambridge Judge Business School believes that some of the most fundamental changes in work are happening as a result of ‘algorithmication’ of jobs that are dependent on information rather than production – the so-called knowledge economy.</p>&#13; &#13; <p>Algorithms are capable of learning from data to undertake tasks that previously needed human judgement, such as reading legal contracts, analysing medical scans and gathering market intelligence.</p>&#13; &#13; <p>“In many cases, they can outperform humans,” says Pachidi. 
“Organisations are attracted to using algorithms because they want to make choices based on what they consider is ‘perfect information’, as well as to reduce costs and enhance productivity.”</p>&#13; &#13; <p><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/cover_1.jpg" style="width: 200px; height: 278px; float: right;" /></p>&#13; &#13; <p>But these enhancements are not without consequences, says Pachidi, who has recently started to look at the impact of AI on the legal profession.</p>&#13; &#13; <p>“If routine cognitive tasks are taken over by AI, how do professions develop their future experts?” she asks. “Expertise and the authority it gives you is distributed in the workplace. One way of learning about a job is ‘legitimate peripheral participation’ – a novice stands next to experts and learns by observation. If this isn’t happening, then you need to find new ways to learn.”</p>&#13; &#13; <p>Another issue is the extent to which the technology influences or even controls the workforce. For over two years, Pachidi was embedded in a telecommunications company. There she observed “small battles” playing out that could have vast consequences for the future of the company.</p>&#13; &#13; <p>“ ֱ̽way telecoms salespeople work is through personal and frequent contact with clients, using the benefit of experience to assess a situation and reach a decision. However, the company had started using a data analytics algorithm that defined when account managers should contact certain customers about which kinds of campaigns and what to offer them.”</p>&#13; &#13; <p> ֱ̽algorithm – usually built by external designers – often becomes the curator of knowledge, she explains. “In cases like this, a myopic view begins to creep into working practices whereby workers learn through the ‘algorithm’s eyes’ and become dependent on its instructions. Alternative explorations – the so-called technology of foolishness where innovation comes out of experimentation and intuition – is effectively discouraged.”</p>&#13; &#13; <p>Pachidi and colleagues have even observed the development of strategies to ‘game’ the algorithm. “Decisions made by algorithms can structure and control the work of employees. 
We are seeing cases where workers feed the algorithm with false data to reach their targets.”</p>&#13; &#13; <p>It’s scenarios like these that many researchers in Cambridge and beyond are working to avoid by increasing the trustworthiness and transparency of AI technologies (see <a href="/system/files/issue_35_research_horizons_new.pdf">issue 35 of <em>Research Horizons</em></a>), so that organisations and individuals understand how AI decisions are made.</p>&#13; &#13; <p>In the meantime, says Pachidi, in our race to reap the undoubted benefits of new technology, it’s important to avoid taking a laissez-faire approach to algorithmication: “We need to make sure we fully understand the dilemmas that this new world raises regarding expertise, occupational boundaries and control.”</p>&#13; &#13; <p>While Pachidi sees changes ahead in the nature of work, economist Professor Hamish Low believes that the future of work will involve major transitions across the whole life course for everyone: “The traditional trajectory of full-time education followed by full-time work followed by a pensioned retirement is a thing of the past.”</p>&#13; &#13; <p>“Disruptive technologies, the rise of the ad hoc ‘gig economy’, living longer and the fragile economics of pension provision will mean a multistage employment life: one where retraining happens across the life course, and where multiple jobs and no job happen by choice at different stages.”</p>&#13; &#13; <p>His research examines the role of risk and the welfare system in relation to work at these various life stages. “When we are talking about the future of work,” he says, “we should have in mind these new frameworks for what people’s lives will look like, and prepare new generations for a different perspective on employment.”</p>&#13; &#13; <p>On the subject of future job loss, he believes the rhetoric is based on a fallacy: “It assumes that the number of jobs is fixed. If, in 30 years, half of 100 jobs are being carried out by robots, that doesn’t mean we are left with just 50 jobs for humans. The number of jobs will increase: we would expect there to be 150 jobs.”</p>&#13; &#13; <p>Dr Ewan McGaughey, at Cambridge’s Centre for Business Research and King’s College London, agrees that “apocalyptic” views about the future of work are misguided. “It’s the laws that restrict the supply of capital to the job market, not the advent of new technologies, that cause unemployment.”</p>&#13; &#13; <p>His recently published research answers the question of whether automation, AI and robotics will mean a ‘jobless future’ by looking at the causes of unemployment. “History is clear that change can mean redundancies – after World War II, 42% of UK jobs were redundant, but social policy maintained full employment. Yes, technology can displace people. But social policies can tackle this through retraining and redeployment.”</p>&#13; &#13; <p>He adds: “The big problem won’t be unemployment, it will be underemployment – people who want to work but can’t because they have zero-hours contracts. If there is going to be change to jobs as a result of AI and robotics then I’d like to see governments seizing the opportunity to improve policy to enforce good job security. 
We can ‘reprogramme’ the law to prepare for a fairer future of work and leisure.”</p>&#13; &#13; <p>This might mean revitalising fiscal and monetary policies such as a universal social security and taxing the owners of robots.</p>&#13; &#13; <p>McGaughey’s findings are a call to arms to leaders of organisations, governments and banks to pre-empt the coming changes with bold new policies that ensure full employment, fair incomes and a thriving economic democracy.</p>&#13; &#13; <p>“ ֱ̽promises of these new technologies are astounding. They deliver humankind the capacity to live in a way that nobody could have once imagined,” he adds. “Just as the industrial revolution brought people past subsistence agriculture, and the corporate revolution enabled mass production, a third revolution has been pronounced. But it will not only be one of technology. ֱ̽next revolution will be social.”</p>&#13; &#13; <p><em>Inset image: read more about our research on the topic of work in the ֱ̽'s research magazine; download a <a href="/system/files/issue_36_research_horizons.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_36_research_horizons">Issuu</a>.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Will automation, AI and robotics mean a jobless future, or will their productivity free us to innovate and explore? Is the impact of new technologies to be feared, or a chance to rethink the structure of our working lives and ensure a fairer future for all?</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">If routine cognitive tasks are taken over by AI, how do professions develop their future experts?</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Stella Pachidi</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.thedistrict.co.uk/" target="_blank"> ֱ̽District</a></div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Linking research to policy makers</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>Dr Koen Jonkers is at the <a href="https://commission.europa.eu/about/departments-and-executive-agencies/joint-research-centre_en">Joint Research Centre</a> – the European Commission’s science and knowledge service in Brussels – and also a policy fellow at Cambridge’s <a href="https://www.csap.cam.ac.uk/">Centre for Science and Policy</a> (CSaP).</p>&#13; &#13; <p>Over the past few months, Jonkers has been discussing the future of work with academic experts in Cambridge as part of his research for a special JRC report aimed at providing evidence for the European Commission’s employment and social affairs policies.</p>&#13; &#13; <p>“Among the megatrends that will affect the future of work – an ageing workforce, migration, globalisation, urbanisation, and so on – the impact of technology is one where we seem to be witnessing a step change in the relationship that many people have 
with their work,” says Jonkers, who is one of the scientists employed by the JRC to provide independent scientific advice and support to EU policy.</p>&#13; &#13; <p>“Some people have said there will be a major shock in terms of joblessness. Others that it is part of a trend that is ongoing and that it will bring opportunity. We want to give an overview of all the viewpoints, to analyse how well societies are equipped to deal with change, to mitigate potential adverse consequences, and to come up with an idea of what is likely to happen.</p>&#13; &#13; <p>“As well as reskilling and upskilling current workers, governments will be keen to look at anticipatory actions to prepare young people to have a different type of work life to that of their parents and grandparents, so that they will be used to a world where people and machines work together.”</p>&#13; &#13; <p>The mission of CSaP is to improve public policy – in the UK and Europe – through the more effective use of evidence and expertise. “Through the CSaP Fellowship, it’s been very refreshing to talk with people with very high levels of expertise in fields other than my own,” says Jonkers. “In such a multifaceted area as the future of work, it’s been important for me to have expert analysis of the themes that are playing out.”</p>&#13; </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 05 Jul 2018 13:08:49 +0000 lw355 198632 at All in a day’s work /research/discussion/all-in-a-days-work <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/christopher-burns-368617-unsplash_0.jpg?itok=BAc_9TJj" alt="" title="Credit: Christopher Burns on Unsplash" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="/stories/all-in-a-days-work">READ THE STORY HERE</a></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers at the ֱ̽ of Cambridge are helping to understand the world of work – the good, the bad, the fair and the future.</p>&#13; </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://unsplash.com/photos/person-holding-tool-during-daytime-8KfCR12oeUM" target="_blank">Christopher Burns on Unsplash</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; ֱ̽text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © ֱ̽ of Cambridge and licensors/contributors as identified.  All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 12 Jun 2018 08:54:24 +0000 lw355 198002 at Let’s get statted /research/features/lets-get-statted <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/150603automatic-statitician.jpg?itok=Xd_7Dc2L" alt="" title="Credit: Automatic Statistician" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>“I keep saying that the sexy job in the next 10 years will be statisticians, and I’m not kidding,” Hal Varian, Chief Economist at Google famously observed in 2009. It seems a difficult assertion to take seriously, but six years on, there is little question that their skills are at a premium.</p>&#13; &#13; <p>Indeed, we may need statisticians now more than at any time in our history. Even compared with a decade ago, we can now gather, produce and consume unimaginably large quantities of information. As Varian predicted, statisticians who can crunch these numbers are all the rage. A new discipline, ‘Data Science’, which fuses statistics and computational work, has emerged.</p>&#13; &#13; <p>“People are awash in data,” reflects Zoubin Ghahramani, Professor of Information Engineering at Cambridge. “This is occurring across industry, it’s changing society as we become more digitally connected, and it’s true of the sciences as well, where fields like biology and astronomy generate vast amounts of data.”</p>&#13; &#13; <p>Over the past few years, Richard Samworth, Professor of Statistics, has watched the datarati step out from the shadows. “It’s probably fair to say that statistics didn’t have the world’s best PR for quite a long time,” he says. “Since this explosion in the amount of data that we can collect and store, opportunities have arisen to answer questions we previously had no hope of being able to address. These demand an awful lot of new statistical techniques.”</p>&#13; &#13; <p>‘Big data’ is most obviously relevant to the sciences, where large volumes of information are gathered to answer questions in fields such as genetics, astronomy and particle physics, but it also has more familiar applications. Transport authorities gather data from electronic ticketing systems like Oyster cards to understand more about passenger movements; supermarkets closely monitor customer transactions to react to shoppers’ predilections. As users of social media, many of us disclose data about ourselves that is as valuable to marketing as it is relevant to psychoanalytics. 
Increasingly, we are also ‘lifeloggers’, monitoring our own behaviour, health, diet and fitness, through smart technology.</p>&#13; &#13; <p>This information, as Ghahramani points out, is no use on its own: “It fills hard drives, but to extract value from it, we need methods that learn patterns in the data and allow us to make predictions and intelligent decisions.” This is what statisticians, computer scientists and machine learning specialists bring to the party – they build algorithms, which are coded as computer software, to see patterns. At root, the datarati are interpreters.</p>&#13; &#13; <p>Despite their ‘sexy’ new image, however, not enough data scientists exist to meet this rocketing demand. Could some aspects of the interpretation be automated using artificial intelligence instead, Ghahramani wondered? And so, in 2014 and with funding from Google, the first incarnation of the Automatic Statistician was launched online. Despite minimal publicity, 3,000 users uploaded datasets to it within a few months.</p>&#13; &#13; <p><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/150603_zoubin-and-richard.jpg" style="width: 590px; height: 260px; float: left;" /></p>&#13; &#13; <p>Once fed a dataset, the Automatic Statistician assesses it against various statistical models, interprets the data and – uniquely – translates this interpretation into a short report of readable English. It does this without human intervention, drawing on an open-ended ‘grammar’ of statistical models. It is also deliberately conservative, only basing its assessments on sound statistical methodology, and even critiquing its own approach.</p>&#13; &#13; <p>Ghahramani and his team are now refining the system to cope with the messy, incomplete nature of real-world data, and also plan to develop its base of knowledge and to offer interactive reports. In the longer term, they hope that the Automatic Statistician will learn from its own work: “The idea is that it will look at a new dataset and say, ‘Ah, I’ve seen this kind of thing before, so maybe I should check the model I used last time’,” he explains.</p>
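&#13; &#13; <p>The real system searches a large, compositional space of models – its ‘grammar’ – and writes multi-paragraph reports, so the sketch below only captures the flavour of the pipeline: fit a few candidate models, score them on a fit-versus-complexity criterion, and turn the winner into a sentence of English. The candidate models, the use of BIC and the report wording are assumptions made for illustration, not a description of the Automatic Statistician itself.</p>&#13; &#13; <pre><code># Toy model-selection-and-report loop (far simpler than the real Automatic Statistician).
# Candidate models, BIC scoring and the wording are illustrative assumptions.
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares polynomial fit; returns residual variance and parameter count."""
    coeffs = np.polyfit(x, y, degree)
    residual_var = np.var(y - np.polyval(coeffs, x))
    return residual_var, degree + 1

def bic(n, residual_var, n_params):
    # Simplified Gaussian BIC (constants dropped): lower means a better
    # trade-off between goodness of fit and model complexity.
    return n * np.log(residual_var) + n_params * np.log(n)

def describe(x, y):
    candidates = {"a constant level": 0, "a linear trend": 1, "a quadratic trend": 2}
    scores = {name: bic(len(x), *fit_polynomial(x, y, degree))
              for name, degree in candidates.items()}
    best = min(scores, key=scores.get)
    return f"The data are best described by {best} (BIC {scores[best]:.1f})."

x = np.arange(50.0)
y = 2.0 + 0.3 * x + np.random.default_rng(0).normal(0.0, 1.0, 50)
print(describe(x, y))
</code></pre>&#13; &#13; <p>A criterion such as BIC is one simple way to be ‘deliberately conservative’: a more flexible model is only reported if it improves the fit by enough to justify its extra parameters.</p>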
&#13; &#13; <p>While automated systems rely on existing models, new algorithms are needed to extract useful information from evolving and expanding datasets. Here, the role of human statisticians is vital.</p>&#13; &#13; <p>To characterise the problem, Samworth presents a then-and-now comparison. During the past century, a typical statistical problem might, for instance, have been to understand the relationship between the initial speed and stopping distance of cars based on a sample size of 50.</p>&#13; &#13; <p>These days, however, we can record information on a huge number of variables at once – the weather, road surface, make of car, wind direction, and so on. Although the extra information has the potential to yield better models and reduce uncertainty, in many areas the number of features measured is so high that it may even exceed the number of observations. Identifying appropriate models in this context is a serious challenge, which requires the development of new algorithms.</p>&#13; &#13; <p>To resolve this, statisticians rely on a principle called ‘sparsity’: the idea that only a few bits of the dataset are really important. The statistician identifies these needles in the haystack. Various algorithms have been developed to select the important variables, so that the initial sprawl of information starts to become manageable and patterns can be extracted.</p>&#13; &#13; <p>Together with his colleague Dr Rajen Shah in the Department of Pure Mathematics and Mathematical Statistics, Samworth has developed a method, called ‘Complementary Pairs Stability Selection’, for refining any such variable selection technique. This applies the original method to random subsamples of the data instead of the whole, and does this over and over again. Eventually, the variables that appear in a high proportion of the subsamples emerge as those meriting further attention.</p>
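&#13; &#13; <p>The published procedure has important details – complementary pairs of subsamples and explicit error-control guarantees – that a few lines of code cannot reproduce, but the core loop can be sketched: run a base variable selector on many random subsamples and keep the variables that are chosen often. The base selector (a simple correlation screen), the subsample size and the threshold below are illustrative assumptions rather than the authors’ choices.</p>&#13; &#13; <pre><code># Illustrative subsample-and-vote variable selection (not the exact Complementary
# Pairs Stability Selection procedure). NumPy only; the base selector is a simple
# top-k correlation screen standing in for whatever method is being stabilised.
import numpy as np

def base_selector(X, y, k=5):
    """Stand-in selector: indices of the k features most correlated with y."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(scores)[-k:]

def stability_selection(X, y, n_subsamples=100, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)  # half-size subsample
        counts[base_selector(X[idx], y[idx])] += 1
    proportions = counts / n_subsamples
    return np.flatnonzero(proportions >= threshold), proportions

# Toy data: 50 observations, 200 features, only the first three actually matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 200))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.standard_normal(50)
selected, _ = stability_selection(X, y)
print(selected)
</code></pre>&#13; &#13; <p>Voting across many subsamples is what makes the final list stable: a variable that only looks important in one particular draw of the data rarely survives the threshold.</p>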
&#13; &#13; <p>Scanning Google Scholar for citations of the paper in which this was proposed, Samworth finds that his algorithm has been used in numerous research projects. One looks at how to improve fundraising for disaster zones, another examines potential biomarkers for breast cancer survival, and a third identifies risk factors connected with childhood malnutrition.</p>&#13; &#13; <p>How does he feel when he sees his work being applied so far and wide? “It’s funny,” he says. “My training is in mathematics and I still get a kick from proving a theorem, but it’s also rewarding to see people using your work. It’s often said that the good thing about being a statistician is that you get to play in everyone’s back yard. I suppose this demonstrates why that’s true.”</p>&#13; &#13; <p><em>Inset image: left to right, Zoubin Ghahramani and Richard Samworth</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>With more information than ever at our fingertips, statisticians are vital to innumerable fields and industries. Welcome to the world of the datarati, where humans and machines team up to crunch the numbers.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">It fills hard drives, but to extract value from it, we need methods that learn patterns in the data and allow us to make predictions and intelligent decisions</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Zoubin Ghahramani</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.automaticstatistician.com/" target="_blank">Automatic Statistician</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.automaticstatistician.com/">Automatic Statistician</a></div></div></div> Wed, 03 Jun 2015 14:13:27 +0000 tdk25 152612 at