University of Cambridge - intelligence /taxonomy/subjects/intelligence en Living with artificial intelligence: how do we get it right? /research/discussion/living-with-artificial-intelligence-how-do-we-get-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/gic-on-stocksy_0.jpg?itok=JEpbgoWy" alt="" title="Credit: GIC on Stocksy" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news; what’s next?</p>&#13; &#13; <p>True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.</p>&#13; &#13; <p>If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.</p>&#13; &#13; <p>Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? 
Could we ensure a safe and worthwhile coexistence with such machines?</p>&#13; &#13; <p>On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.</p>&#13; &#13; <p><a href="/system/files/issue_35_research_horizons_new.pdf"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a></p>&#13; &#13; <p>So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.</p>&#13; &#13; <p>For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.</p>&#13; &#13; <p>However, there are two big problems with this utopian vision. 
One is how we get the machines started on the journey; the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.</p>&#13; &#13; <p>The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.</p>&#13; &#13; <p>Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.</p>&#13; &#13; <p>As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.</p>&#13; &#13; <p>Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? 
They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?</p>&#13; &#13; <p>These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.</p>&#13; &#13; <p>This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listener to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”</p>&#13; &#13; <p>We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.</p>&#13; &#13; <p>But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. 
But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.</p>&#13; &#13; <p><em>Inset image: read more about our AI research in the University's research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></p>&#13; &#13; <p><em>Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a>, where they work on '<a href="https://www.lcfi.ac.uk/projects/ai-agents-and-persons/">Agents and persons</a>'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">For safety’s sake, we want the machines to be ethically as well as cognitively superhuman. 
We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Huw Price and Karina Vold</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">GIC on Stocksy</a></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Wed, 28 Feb 2018 11:00:51 +0000 Anonymous 195752 at Artificial intelligence is growing up fast: what’s next for thinking machines? 
/research/features/artificial-intelligence-is-growing-up-fast-whats-next-for-thinking-machines <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/p26-27whatsnext.jpg?itok=K-rQlbow" alt="" title="Artificial intelligence, Credit: The District" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>We are well on the way to a world in which many aspects of our daily lives will depend on AI systems.</p> <p>Within a decade, machines might diagnose patients with the learned expertise of not just one doctor but thousands. They might make judicial recommendations based on vast datasets of legal decisions and complex regulations. And they will almost certainly know exactly what’s around the corner in autonomous vehicles.</p> <p>“Machine capabilities are growing,” says Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI). “Machines will perform the tasks that we don’t want to: the mundane jobs, the dangerous jobs. And they’ll do the tasks we aren’t capable of – those involving too much data for a human to process, or where the machine is simply faster, better, cheaper.”</p> <p>Dr Mateja Jamnik, AI expert at the Department of Computer Science and Technology, agrees: “Everything is going in the direction of augmenting human performance – helping humans, cooperating with humans, enabling humans to concentrate on the areas where humans are intrinsically better, such as strategy, creativity and empathy.”</p> <p>Part of the attraction of AI is that future technologies will perform tasks autonomously, without humans needing to monitor activities every step of the way. 
In other words, machines of the future will need to think for themselves. But, although computers today outperform humans on many tasks, including learning from data and making decisions, they can still trip up on things that are really quite trivial for us.</p> <p>Take, for instance, working out the formula for the area of a parallelogram. Humans might use a diagram to visualise how cutting off the corners and reassembling it as a rectangle simplifies the problem. Machines, however, may “use calculus or integrate a function. This works, but it’s like using a sledgehammer to crack a nut,” says Jamnik, who was recently appointed Specialist Adviser to the House of Lords Select Committee on AI.</p> <p><a href="/system/files/issue_35_research_horizons_new.pdf"><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/front-cover_for-web.jpg" style="width: 288px; height: 407px; float: right;" /></a></p> <p>“When I was a child, I was fascinated by the beauty and elegance of mathematical solutions. I wondered how people came up with such intuitive answers. Today, I work with neuroscientists and experimental psychologists to investigate this human ability to reason and think flexibly, and to make computers do the same.”</p> <p>Jamnik believes that AI systems that can choose so-called heuristic approaches – employing practical, often visual, approaches to problem solving – in a similar way to humans will be an essential component of human-like computers. They will be needed, for instance, so that machines can explain their workings to humans – an important part of the transparency of decision-making that we will require of AI.</p> <p>With funding from the Engineering and Physical Sciences Research Council and the Leverhulme Trust, she is building systems that have begun to reason like humans through diagrams. 
Her aim now is to enable them to move flexibly between different “modalities of reasoning”, just as humans have the agility to switch between methods when problem solving. </p> <p> Being able to model one aspect of human intelligence in computers raises the question of what other aspects would be useful. And in fact how ‘human-like’ would we want AI systems to be? This is what interests Professor José Hernandez-Orallo, from the Universitat Politècnica de València in Spain and Visiting Fellow at the CFI.</p> <p>“We typically put humans as the ultimate goal of AI because we have an anthropocentric view of intelligence that places humans at the pinnacle of a monolith,” says Hernandez-Orallo. “But human intelligence is just one of many kinds. Certain human skills, such as reasoning, will be important in future systems. But perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all.</p> <p>“I believe that future machines can be more powerful than humans not just because they are faster but because they can have cognitive functionalities that are inherently not human.” This raises a difficulty, says Hernandez-Orallo: “How do we measure the intelligence of the systems that we build? Any definition of intelligence needs to be linked to a way of measuring it, otherwise it’s like trying to define electricity without a way of showing it.”</p> <p>The intelligence tests we use today – such as psychometric tests or animal cognition tests – are not suitable for measuring intelligence of a new kind, he explains. Perhaps the most famous test for AI is that devised by 1950s Cambridge computer scientist Alan Turing. To pass the Turing Test, a computer must fool a human into believing it is human. 
“Turing never meant it as a test of the sort of AI that is becoming possible – apart from anything else, it’s all or nothing and cannot be used to rank AI,” says Hernandez-Orallo.</p> <p>In his recently published book The Measure of All Minds, he argues for the development of “universal tests of intelligence” – those that measure the same skill or capability independently of the subject, whether it’s a robot, a human or an octopus.</p> <p>His work at the CFI as part of the ‘Kinds of Intelligence’ project, led by Dr Marta Halina, is asking not only what these tests might look like but also how their measurement can be built into the development of AI. Hernandez-Orallo sees a very practical application of such tests: the future job market. “I can imagine a time when universal tests would provide a measure of what’s needed to accomplish a job, whether it’s by a human or a machine.”</p> <p>Cave is also interested in the impact of AI on future jobs, discussing this in a <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69702.pdf">report</a> on the ethics and governance of AI recently submitted to the House of Lords Select Committee on AI on behalf of researchers at Cambridge, Oxford, Imperial College and the University of California at Berkeley. “AI systems currently remain narrow in their range of abilities by comparison with a human. But the breadth of their capacities is increasing rapidly in ways that will pose new ethical and governance challenges – as well as create new opportunities,” says Cave. “Many of these risks and benefits will be related to the impact these new capacities will have on the economy, and the labour market in particular.”</p> <p>Hernandez-Orallo adds: “Much has been written about the jobs that will be at risk in the future. This happens every time there is a major shift in the economy. 
But just as some machines will do tasks that humans currently carry out, other machines will help humans do what they currently cannot – providing enhanced cognitive assistance or replacing lost functions such as memory, hearing or sight.”</p> <p>Jamnik also sees opportunities in the age of intelligent machines: “As with any revolution, there is change. Yes, some jobs will become obsolete. But history tells us that there will be jobs appearing. These will capitalise on inherently human qualities. Others will be jobs that we can’t even conceive of – memory augmentation practitioners, data creators, data bias correctors, and so on. That’s one reason I think this is perhaps the most exciting time in the history of humanity.”</p> <p><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/MK31E4mSbXw" width="560"></iframe></p> <p><em>Inset image: read more about our AI research in the University's research magazine; download a <a href="/system/files/issue_35_research_horizons_new.pdf">pdf</a>; view on <a href="https://issuu.com/uni_cambridge/docs/issue_35_research_horizons">Issuu</a>.</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask: how human-like do we want their intelligence to be, and what impact will machines have on our jobs? 
</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">José Hernandez-Orallo</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">The District</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Artificial intelligence</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width: 0px;" /></a><br />The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 06 Feb 2018 09:11:12 +0000 cjb250 195052 at New brain mapping technique highlights relationship between connectivity and IQ /research/news/new-brain-mapping-technique-highlights-relationship-between-connectivity-and-iq <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/811439966730e7770e7dk.jpg?itok=exjDdqJm" alt="&quot;Mini Stack&quot; Interchange of Interstate 10, Loop 202, and State Route 51 at Night (2)" title="&amp;quot;Mini Stack&amp;quot; Interchange of Interstate 10, Loop 202, and State Route 51 at Night (2), Credit: Alan Stark" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In recent years, there has been a concerted effort among scientists to map the connections in the brain – the so-called ‘connectome’ – and to understand how this relates to human behaviours, such as intelligence and mental health disorders.</p>&#13; &#13; <p>Now, in research published in the journal <em>Neuron</em>, an international team led by scientists at the University of Cambridge and the National Institutes of Health (NIH), USA, has shown that it is possible to build up a map of the connectome by analysing conventional brain scans taken using a magnetic resonance imaging (MRI) scanner.</p>&#13; &#13; <p>The team compared the brains of 296 typically-developing adolescent volunteers. Their results were then validated in a cohort of a further 124 volunteers. 
The team used a conventional 3T MRI scanner, where 3T represents the strength of the magnetic field; however, Cambridge has recently installed a much more powerful Siemens 7T Terra MRI scanner, which should allow this technique to give an even more precise mapping of the human brain.</p>&#13; &#13; <p>A typical MRI scan will provide a single image of the brain, from which it is possible to calculate multiple structural features of the brain. This means that every region of the brain can be described using as many as ten different characteristics. The researchers showed that if two regions have similar profiles, then they are described as having ‘morphometric similarity’ and it can be assumed that they are a connected network. They verified this assumption using publicly available MRI data on a cohort of 31 juvenile rhesus macaque monkeys to compare to ‘gold-standard’ connectivity estimates in that species.</p>&#13; &#13; <p>Using these morphometric similarity networks (MSNs), the researchers were able to build up a map showing how well connected the ‘hubs’ – the major connection points between different regions of the brain network – were. They found a link between the connectivity in the MSNs in brain regions linked to higher-order functions – such as problem solving and language – and intelligence.</p>&#13; &#13; <p>“We saw a clear link between the ‘hubbiness’ of higher-order brain regions – in other words, how densely connected they were to the rest of the network – and an individual’s IQ,” explains PhD candidate Jakob Seidlitz at the University of Cambridge and NIH. 
“This makes sense if you think of the hubs as enabling the flow of information around the brain – the stronger the connections, the better the brain is at processing information.”</p>&#13; &#13; <p>While IQ varied across the participants, the MSNs accounted for around 40% of this variation – it is possible that higher-resolution multi-modal data provided by a 7T scanner may be able to account for an even greater proportion of the individual variation, say the researchers.</p>&#13; &#13; <p>“What this doesn’t tell us, though, is where exactly this variation comes from,” adds Seidlitz. “What makes some brains more connected than others – is it down to their genetics or their educational upbringing, for example? And how do these connections strengthen or weaken across development?”</p>&#13; &#13; <p>“This could take us closer to being able to get an idea of intelligence from brain scans, rather than having to rely on IQ tests,” says Professor Ed Bullmore, Head of Psychiatry at Cambridge. “Our new mapping technique could also help us understand how the symptoms of mental health disorders such as anxiety and depression or even schizophrenia arise from differences in connectivity within the brain.”</p>&#13; &#13; <p>The research was funded by the Wellcome Trust and the National Institutes of Health.</p>&#13; &#13; <p><em><strong>Reference</strong><br />&#13; Seidlitz, J et al. <a href="https://www.cell.com/neuron/abstract/S0896-6273(17)31092-9">Morphometric Similarity Networks Detect Microscale Cortical Organisation and Predict Inter-Individual Cognitive Variation</a>. 
Neuron; 21 Dec 2017; DOI: 10.1016/j.neuron.2017.11.039</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A new and relatively simple technique for mapping the wiring of the brain has shown a correlation between how well connected an individual’s brain regions are and their intelligence, say researchers at the University of Cambridge.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">This could take us closer to being able to get an idea of intelligence from brain scans, rather than having to rely on IQ tests</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Ed Bullmore</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/squeaks2569/8114399667/" target="_blank">Alan Stark</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">&quot;Mini Stack&quot; Interchange of Interstate 10, Loop 202, and State Route 51 at Night (2)</div></div></div><div class="field field-name-field-panel-title field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Researcher profile: Jakob Seidlitz</div></div></div><div class="field field-name-field-panel-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><img alt="" src="/sites/www.cam.ac.uk/files/inner-images/jakob_seidlitzmed.jpg" style="width: 200px; height: 300px; float: right; margin: 5px;" />Jakob Seidlitz is a PhD student on the NIH 
Oxford-Cambridge Scholars Programme. A graduate of the University of Rochester, USA, he spends half of his time in Cambridge and half at the National Institutes of Health in the USA.</p>&#13; &#13; <p>Jakob’s research aims to better understand the origins of psychiatric disease, using techniques such as MRI to study child and adolescent brain development and map patterns of brain connectivity.</p>&#13; &#13; <p>“A typical day consists of performing MRI data analysis, statistical testing, reading scientific literature, and preparing and editing manuscripts. It’s great being able to work on such amazing large-scale neuroimaging datasets that allow for answering longstanding questions in psychiatry,” he says.</p>&#13; &#13; <p>“Cambridge is a great place for my work. Ed [Bullmore], my supervisor, is extremely inclusive and collaborative, which meant developing relationships within and outside the department. Socially, the college post-grad community is amazingly diverse and welcoming, and the collegiate atmosphere of Cambridge can be truly inspiring.”</p>&#13; &#13; <p>Jakob is a member of Wolfson College. Outside of his research, he plays football for the ‘Blues’ (the Cambridge University Association Football Club).</p>&#13; </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div> Tue, 02 Jan 2018 14:21:01 +0000 cjb250 194252 at Elephants’ ‘body awareness’ adds to increasing evidence of their intelligence /research/news/elephants-body-awareness-adds-to-increasing-evidence-of-their-intelligence <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/daleplotnik1.jpg?itok=v0WtLOTM" alt="Elephant body awareness task" title="Elephant body awareness task, Credit: Josh Plotnik/Rachel Dale" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Self-awareness in both animals and young children is usually tested using the ‘mirror self-recognition test’ to see if they understand that the reflection in front of them is actually their own. Only a few species have so far shown themselves capable of self-recognition – great apes, dolphins, magpies and elephants. 
It is thought to be linked to more complex forms of perspective taking and empathy.</p>&#13; &#13; <p>Critics, however, have argued that this test is limited in its ability to investigate complex thoughts and understanding, and that it may be less useful in testing animals that rely less on vision than other species.</p>&#13; &#13; <p>One potential complement to the mirror test as a measure of self-understanding may be a test of ‘body-awareness’. This test looks at how individuals may recognise their bodies as obstacles to success in a problem-solving task. Such a task could demonstrate an individual’s understanding of its body in relation to its physical environment, which may be easier to define than the distinction between oneself and another demonstrated through success at the mirror test.</p>&#13; &#13; <p>To test for body-awareness in Asian elephants, Dr Josh Plotnik, visiting researcher at the University of Cambridge, visiting assistant professor of psychology at Hunter College, City University of New York and founder of conservation charity <a href="http://thinkelephants.org/">Think Elephants International</a>, devised a new test of self-awareness together with his colleague Rachel Dale (now a PhD student at the University of Veterinary Medicine in Vienna). The new test was adapted from one in which children were asked to push a shopping trolley, but the trolley was attached to a mat on which they were standing.</p>&#13; &#13; <p>In the elephant version of the test, Plotnik and Dale attached a stick to a rubber mat using a rope; the elephants were then required to walk onto the mat, pick up the stick and pass it to an experimenter standing in front of them. The researchers wanted to investigate whether elephants understood the role of their bodies as potential obstacles to success in the task by observing how and when the animals removed themselves from the mat in order to exchange the stick.
In one control arm of the test, the stick was unattached to the mat, meaning the elephant could pass the stick while standing on the mat.</p>&#13; &#13; <p>The results of the study, which was largely funded by a Newton International Fellowship from the Royal Society awarded to Dr Plotnik, are published today in the journal <em>Scientific Reports</em>.</p>&#13; &#13; <p>“Elephants are well regarded as one of the most intelligent animals on the planet, but we still need more empirical, scientific evidence to support this belief,” says Dale. “We know, for example, that they are capable of thoughtful cooperation and empathy, and are able to recognise themselves in a mirror. These abilities are highly unusual in animals and very rare indeed in non-primates. We wanted to see if they also show ‘body-awareness’.”</p>&#13; &#13; <p>Plotnik and Dale found that the elephants stepped off the mat to pass the stick to the experimenter significantly more often during the test than during the control arm. Elephants stepped off the mat an average (mean) of around 42 out of 48 times during the test compared to just three times on average during the control.</p>&#13; &#13; <p>“This is a deceptively simple test, but its implications are quite profound,” says Dr Plotnik. “The elephants understood that their bodies were getting in the way, so they stepped aside to enable themselves to complete the task. In a similar test, this is something that young children are unable to understand until they are about two years old.</p>&#13; &#13; <p>“This implies that elephants may be capable of recognising themselves as separate from objects or their environment.
This means that they may have a level of self-understanding which, coupled with their passing of the mirror test, is quite rare in the animal kingdom.”</p>&#13; &#13; <p>Species that have demonstrated a capacity for self-recognition in the mirror test all show varying levels of cooperative problem-solving, perspective taking and empathy, suggesting that ‘self-awareness’ may relate to effective cooperative-living in socially intelligent animals. A more developed self-understanding of how an individual relates to those around it may underlie more complex forms of empathic perspective taking. It may also underlie how an individual targets help towards others in need. Both aspects are seen in studies of human children.</p>&#13; &#13; <p>Both self-awareness as demonstrated by the mirror test and body-awareness as demonstrated by the current study help scientists better understand how an animal’s understanding of self and of its place in the environment may impact social decision-making in the wild.</p>&#13; &#13; <p>Plotnik argues that studies such as this are important for helping increase our understanding of and appreciation for the behaviour and intelligence of animals. He also says that understanding elephant behaviour has important implications for the development of human/elephant conflict mitigation strategies in places like Thailand and India, where humans and elephants are competing for land. Only through careful consideration of both human and elephant needs can long-term solutions be sustainable.</p>&#13; &#13; <p>“The more we can understand about elephants’ behaviour, the more we can understand what their needs are, how they think and the strains they face in their social relationships,” he says.
“This will help us if we are going to try to come up with viable long-term solutions to the problems that these animals face in the wild, especially those that bring them into regular conflict with humans.”</p>&#13; &#13; <p><em><strong>Reference</strong><br />&#13; Dale, R, and Plotnik, JM. <a href="https://dx.doi.org/10.1038/srep46309">Elephants know when their bodies are obstacles to success in a novel transfer task.</a> Scientific Reports; 12 April 2017; DOI: 10.1038/srep46309</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Asian elephants are able to recognise their bodies as obstacles to success in problem-solving, further strengthening evidence of their intelligence and self-awareness, according to a new study from the University of Cambridge.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">The more we can understand about elephants’ behaviour, the more we can understand what their needs are, how they think and the strains they face in their social relationships</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Josh Plotnik</div></div></div><div class="field field-name-field-media field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><div id="file-124102" class="file file-video file-video-youtube"> <h2 class="element-invisible"><a href="/file/124102">Elephants demonstrate awareness of own bodies</a></h2> <div class="content"> <div class="cam-video-container media-youtube-video media-youtube-1 "> <iframe class="media-youtube-player" src="https://www.youtube-nocookie.com/embed/akjDRRgeUoI?wmode=opaque&controls=1&rel=0&autohide=0" frameborder="0"
allowfullscreen></iframe> </div> </div> </div> </div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="http://thinkelephants.org/" target="_blank">Josh Plotnik/Rachel Dale</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Elephant body awareness task</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 12 Apr 2017 09:00:06 +0000 cjb250 187362 at Artificial intelligence: computer says YES (but is it right?)
/research/features/artificial-intelligence-computer-says-yes-but-is-it-right <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/features/1610202019-by-experienssthierry-ehrmann.jpg?itok=Qk9V5cgv" alt="2019 by ExperiensS" title="2019 by ExperiensS, Credit: Thierry Ehrmann" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.</p>&#13; &#13; <p>Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.</p>&#13; &#13; <p>Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?</p>&#13; &#13; <p>We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately.
It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.</p>&#13; &#13; <p>Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.</p>&#13; &#13; <p>Faced with a life or death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust its assessment of the volatility of the stock markets is?</p>&#13; &#13; <p>“Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data,” says Ghahramani. “But what is going on inside the ‘black box’? If the processes by which decisions were being made were more transparent, then trust would be less of an issue.”</p>&#13; &#13; <p>His team builds the algorithms that lie at the heart of these technologies (the “invisible bit” as he refers to it). Trust and transparency are important themes in their work: “We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty.
Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.</p>&#13; &#13; <p>“When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us.”</p>&#13; &#13; <p>One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.</p>&#13; &#13; <p>Two years ago, Ghahramani’s group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.</p>&#13; &#13; <p>“The difficulty with machine learning systems is you don’t really know what’s going on inside – and the answers they provide are not contextualised, like a human would do.
The Automatic Statistician explains what it’s doing, in a human-understandable form.”</p>&#13; &#13; <p>Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of a decision is necessary to trust it.</p>&#13; &#13; <p>Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: “A particular issue with new artificial intelligence (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand.” His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.</p>&#13; &#13; <p>“We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode.” A driverless car, for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.</p>&#13; &#13; <p>Weller’s theme of trust and transparency forms just one of the projects at the newly launched £10 million <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (CFI). Ghahramani, who is Deputy Director of the Centre, explains: “It’s important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society.”</p>&#13; &#13; <p>CFI brings together four of the world’s leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation.
Together, an interdisciplinary community of researchers will work closely with policy-makers and industry, investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.</p>&#13; &#13; <p>Ghahramani describes the excitement felt across the machine learning field: “It’s exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.</p>&#13; &#13; <p>“We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us.”</p>&#13; &#13; <p><em>Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a <a href="https://www.youtube.com/watch?v=_5XvDCjrdXs">speech</a> delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Computers that learn for themselves are with us now.
As they become more common in ‘high-stakes’ applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">As we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Zoubin Ghahramani</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/home_of_chaos/4166229638/in/photolist-7ma1Vu-9jXRQ7-3FjPcz-bx8BcX-cs65bN-dPTAqE-48Dezu-nurxVW-mC75rT-dXxh8b-jR9gc-3KwLDC-5akwi9-75MGSi-fEbbTT-f1ab86-6avjFJ-p7gc1-ofut47-rpxmKL-jbSp7-bmUQLy-q131sg-2QnpAH-bxmfEd-PweVq-qbFyNT-4L32qY-pZVBB9-2uinMh-6L3BZn-re23rM-jfvWFG-dXrAKP-9jXM4U-9jXQoh-qa8G7T-rvMSwj-qdMd23-HXVdh-2Q1fQU-8f9zmW-iAqVac-oy72re-9mi7oc-cs5QkS-oMRA8h-C4Lzp4-paUvZM-6i89ys" target="_blank">Thierry Ehrmann</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">2019 by ExperiensS</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/"
rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div><div class="field field-name-field-related-links field-type-link-field field-label-above"><div class="field-label">Related Links:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a></div></div></div> Thu, 20 Oct 2016 14:17:17 +0000 lw355 180122 at Opinion: Can genes really predict how well you’ll do academically? /research/discussion/opinion-can-genes-really-predict-how-well-youll-do-academically <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/160726graduates.jpg?itok=c0rnEV4z" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Researchers at King’s College London say they are able <a href="https://doi.org/10.1038/mp.2016.107">to predict educational achievement</a> from DNA alone. 
Using a new type of analysis called a “genome-wide polygenic score”, or GPS, they <a href="https://theconversation.com/your-genes-can-help-predict-how-well-youll-do-in-school-heres-how-we-cracked-it-62848">analysed DNA samples from 3,497 people</a> in the ongoing <a href="https://www.teds.ac.uk/">Twins Early Development Study</a>. They found that people whose DNA had the highest GPS score performed substantially better at school. In fact, by age 16, there was a whole school-grade difference between those with the highest GPS scores and the lowest. The researchers herald their findings as a “tipping point” in the ability to use DNA – and DNA alone – in predicting educational achievement.</p>&#13; &#13; <p>These findings will certainly generate debate, particularly about nature versus nurture. It’s a debate that forces us – often uncomfortably – to think about what makes us who we are. Are our careers, hobbies, food preferences, income levels, emotional dispositions, or even general success in life rooted in our genes (nature)? Or are we shaped more by our environment (nurture)? If it’s all down to our genes, what happens to the idea of determining our own destiny?</p>&#13; &#13; <p>When it comes to the subject of intelligence, which today includes behavioural genetics research into “<a href="https://doi.org/10.2307/1412107">g</a>” (a measure of intelligence commonly used as a variable in research in this area) and <a href="https://doi.org/10.1177/0956797612457952">cognitive ability</a>, the nature-nurture debate becomes that much more heated.</p>&#13; &#13; <p>There is a growing body of research that suggests intelligence is a <a href="https://doi.org/10.1038/mp.2012.184">highly heritable and polygenic trait</a>, meaning that there are many genes that predict intelligence, each with a small effect size.
While the connection between genetics research on educational achievement and findings on intelligence might not seem direct, studies like the one out of King’s establish a biological connection between “g” and educational achievement. The findings mark the strongest genetic prediction for educational achievement so far, estimating up to 9% of variance in educational achievement at age 16.</p>&#13; &#13; <figure><iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/174804851" webkitallowfullscreen="" width="500"></iframe></figure><p>But <a href="https://doi.org/10.1002/hast.497">despite claims</a> that this research moves “us closer to the possibility of early intervention and personalised learning”, there are important ethical concerns to take into account. For example, who would early intervention and personalised learning reach first? Is it possible parents with money, means, awareness and access would be first to place their children in <a href="https://www.wiley.com/en-gb/G+is+for+Genes%3A+The+Impact+of+Genetics+on+Education+and+Achievement-p-9781118482780">“genetically sensitive schools”</a> in the hope of getting an extra advantage?</p>&#13; &#13; <h2>Dark past</h2>&#13; &#13; <p>It is not a secret that the history of intelligence research, and by extension genetics research on cognitive ability or educational achievement, is <a href="https://onlinelibrary.wiley.com/doi/10.1002/hast.492/abstract">rooted in eugenics and racism</a>, and has been used to validate the existence of racial and class differences. So how does this shameful past impact the field of behavioural genetics research today?</p>&#13; &#13; <p>Many behavioural geneticists, like Robert Plomin, the senior author on the King’s study, believe the field has moved past this dark history and that the science is objective, neutral (as neutral as any research can be) and clear.
The controversies that surround this research, at least in the eyes of Plomin and others, are fuelled by <a href="https://www.wiley.com/en-gb/G+is+for+Genes%3A+The+Impact+of+Genetics+on+Education+and+Achievement-p-9781118482780">media sensationalism</a>.</p>&#13; &#13; <p>But many bioethicists and social scientists disagree with him. They argue that society values intelligence too much for this research to remain in neutral territory. Previously, the field was largely used to marginalise certain groups, particularly low-income or ethnic minority groups.</p>&#13; &#13; <p>For some, attributing intelligence to genetics justifies the adverse circumstances many low-income and ethnic minority groups find themselves in; it wasn’t nurture that led to the under-performance of <a href="http://www.mitpressjournals.org/doi/abs/10.1162/003465304323031049#.V5WyIDm7iko">low-income or ethnic minority students</a> in the classroom, it was nature, and nature cannot be changed. For bioethicists today, the question hanging over this branch of behavioural genetics is: who’s to say new research in this area won’t perpetuate the same social inequalities that similar work has done before?</p>&#13; &#13; <p>Genetic research in an area once used to oppress people should openly acknowledge this past and explicitly state what its findings can and cannot prove (what many bioethicists call <a href="https://onlinelibrary.wiley.com/doi/10.1002/hast.501/abstract">“trustworthy research”</a>).</p>&#13; &#13; <p>Stark <a href="https://equalitytrust.org.uk/scale-economic-inequality-uk/">class</a> and <a href="https://www.independent.co.uk/news/uk/home-news/britains-hidden-racism-workplace-inequality-has-grown-in-the-last-decade-9898930.html">race</a> divides still persist in the UK and US, two countries where this branch of research is rapidly growing.
While the study mentions the association between a person’s place in society and educational achievement, it links this status back to genetics, highlighting the genetic overlap between educational achievement, g and family socioeconomic status.</p>&#13; &#13; <p>The possibility that this kind of research may influence attitudes towards certain ethnic minorities and the less well off is real, as is the risk that this work might be used to justify social inequality. These concerns should be admitted and addressed by behavioural geneticists. The alternative could be a <a href="https://www.tandfonline.com/doi/full/10.1080/02680939.2016.1139189">new form of eugenics</a>.</p>&#13; &#13; <p><em><strong><span><a href="https://theconversation.com/profiles/daphne-martschenko-238687">Daphne Martschenko</a>, PhD Candidate, <a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a></span></strong></em></p>&#13; &#13; <p><em><strong>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>.
Read the <a href="https://theconversation.com/can-genes-really-predict-how-well-youll-do-academically-62844">original article</a>.</strong></em></p>&#13; &#13; <p><em>The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.</em></p>&#13; &#13; <p><img alt="The Conversation" height="1" src="https://counter.theconversation.edu.au/content/62844/count.gif" width="1" /></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Daphne Martschenko (Faculty of Education) discusses whether DNA can predict our educational achievement.</p>&#13; </p></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>.
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 26 Jul 2016 10:26:13 +0000 Anonymous 177122 at Opinion: Genetics: what it is that makes you clever – and why it’s shrouded in controversy /research/discussion/opinion-genetics-what-it-is-that-makes-you-clever-and-why-its-shrouded-in-controversy <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/160421reading.jpg?itok=V3W4ExFT" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>For nearly 150 years, the concept of intelligence and its study have offered scientific ways of classifying people in terms of their “ability”. The drive to identify and quantify exceptional mental capacity may have a chequered <a href="https://onlinelibrary.wiley.com/doi/10.1002/hast.499/abstract;jsessionid=1C167A1612F22CDFE6340960AC893439.f04t03?userIsAuthenticated=false&amp;amp;deniedAccessCustomisedMessage=">history</a>, but it is still being pursued by some researchers today.</p>&#13; &#13; <p>Francis Galton, who was Charles Darwin’s cousin, is considered the father of eugenics and was one of the first to formally study intelligence. His 1869 work <a href="https://books.google.co.uk/books/about/Hereditary_Genius.html?id=1h0Ztc1q-RoC&amp;source=kp_cover&amp;redir_esc=y">Hereditary Genius</a> argued that superior mental capabilities were passed down via natural selection – confined to Europe’s most eminent men, a “lineage of genius”.
Barring a few exceptions, women, ethnic minorities and lower socioeconomic communities were labelled as inferior in intelligence.</p>&#13; &#13; <p>Galton’s controversial theories on race, socioeconomics and intelligence have been highly influential and shaped the ideologies of numerous researchers and theorists around the world.</p>&#13; &#13; <p>In the UK, proponents of a Galtonian view on intelligence included educational psychologist <a href="https://www.timeshighereducation.com/books/a-true-pro-and-his-cons/161397.article">Cyril Burt</a>, who helped formulate the 11-plus examination, and <a href="https://www.britannica.com:443/biography/Charles-E-Spearman">psychologist Charles Spearman</a>, who is best known for his creation of the concept “g” – the innate general factor of human mental ability. Spearman’s background as an engineer in the British army gave him a statistical sophistication that proved instrumental in shifting the direction of the field of intelligence study.</p>&#13; &#13; <figure class="align-right "><img alt="" src="https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/115800/width237/image-20160321-30917-1i9hs6m.jpg" style="width: 100%;" /><figcaption><span class="caption">Spearman: statistician who delved into human intelligence.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File%3AExposition_universelle_de_1900_-_portraits_des_commissaires_g%C3%A9n%C3%A9raux-Charles_Spearman.jpg">Eugène Pirou via Wikimedia Commons</a></span></figcaption></figure><p> </p>&#13; &#13; <p>Spearman hypothesised that intelligence comprises “<a href="https://www.jstor.org/stable/1412107?origin=crossref&amp;amp;seq=1">g</a>”, or “general intelligence”, and two other specific factors: verbal ability and fluency.
Spearman’s extensive work on the use of “g” within the field of statistics meant that some used the “hard” sciences and maths as instruments to argue that there were biological differences between races and social classes. “G” as a representation of the biological basis of intelligence <a href="https://onlinelibrary.wiley.com/doi/10.1002/hast.494/abstract">is still being used today in research</a> within the current field of behavioural genetics.</p>&#13; &#13; <h2>Political currency</h2>&#13; &#13; <p>The concept of inheritance, and specifically the inheritance of intelligence, has carried over into political and educational spheres. A more recent advocate of Galtonian-inspired ideas is Dominic Cummings, who served as a special advisor to the former secretary of state for education, Michael Gove. Cummings wrote the following in a <a href="http://s3.documentcloud.org/documents/804396/some-thoughts-on-education-and-political.pdf">237-page document</a> titled “Some thoughts on education and political priorities”:</p>&#13; &#13; <blockquote>&#13; <p>Raising school performance of poorer children … would not necessarily lower parent-offspring correlations (nor change heritability estimates).
When people look at the gaps between rich and poor children that already exist at a young age (3-5), they almost universally assume that these differences are because of environmental reasons (“privileges of wealth”) and ignore genetics.</p>&#13; </blockquote>&#13; &#13; <h2>The birth of twin studies</h2>&#13; &#13; <p>From the 1920s, when <a href="https://www.tandfonline.com/doi/abs/10.1080/08856559.1932.10533098?journalCode=vzpg20">twin and adoption studies</a> set out to determine the genetic and environmental origins of intelligence differences, the study of intelligence began to converge with the early stages of human behavioural genetics.</p>&#13; &#13; <p>Under the presumption that twins experience similar environmental aspects, <a href="https://link.springer.com/article/10.1023/A:1001959306025">twin studies enable researchers</a> to evaluate the variance of a given outcome – such as cognitive ability – in a large group. They can then attempt to estimate how much of this variance is due to the heritability of genes, the shared environment the twins live in, or a non-shared environment.</p>&#13; &#13; <p>The 1980s and 1990s saw another rise in twin and adoption studies on intelligence, many of which were more systematic in nature due to advances in technology. Most supported earlier research and showed intelligence to be highly heritable and polygenic, meaning that it is influenced by many different genetic markers.</p>&#13; &#13; <p>The researchers <a href="https://kclpure.kcl.ac.uk/portal/en/publications/multivariate-behavioral-genetics-and-development-twin-studies%28f51376fe-96e6-4288-811f-9b44cead12c9%29.html">Robert Plomin</a>, <a href="https://www.annualreviews.org/content/journals/10.1146/annurev.ps.29.020178.002353">JC Defries</a>, and <a href="https://link.springer.com/article/10.1023/A:1010257512183">Nele Jacobs</a> were at the forefront of this new wave of studies.
But this research was still unable to identify the specific genetic markers within the human genome that are connected to intelligence.</p>&#13; &#13; <h2>Genome – a new frontier</h2>&#13; &#13; <p>Genome sequencing technologies have taken the search for the genetic components of inheritance another step forward. Despite the seemingly endless possibilities brought forth by the <a href="https://www.genome.gov/12011239">Human Genome Project in 2001</a>, actually using DNA-based techniques to locate which genetic differences contribute to observed differences in intelligence <a href="https://onlinelibrary.wiley.com/doi/10.1002/hast.496/abstract">has been markedly more difficult</a> than anticipated.</p>&#13; &#13; <p>Genome-wide association studies (GWAS) began to take hold as a powerful tool for investigating the human genetic architecture. These studies assess connections between a trait and a multitude of DNA markers. Most commonly, they look for single-nucleotide polymorphisms, or SNPs. These are variations between genes at specific locations throughout a DNA sequence that might determine an individual’s likelihood to develop a particular disease or trait.</p>&#13; &#13; <p>Originally intended to identify genetic risk factors associated with <a href="https://doi.org/10.1126/science.1109557">susceptibility to disease</a>, GWAS have become a means through which to try and pinpoint the genetic factors responsible <a href="https://doi.org/10.1038/mp.2012.184">for cognitive ability</a>. 
But researchers have <a href="https://www.science.org/doi/10.1126/science.1235488">shown</a> that intelligence is a trait influenced by many different genes: they have so far been unable to locate enough SNPs to predict the IQ of an individual.</p>&#13; &#13; <h2>Ethical questions</h2>&#13; &#13; <p>There’s a long way still to go, but this field is receiving <a href="https://www.telegraph.co.uk/education/educationnews/11680895/Children-should-be-genetically-screened-at-the-age-of-4-to-aid-their-education-expert-claims.html">a great deal of publicity</a>. This raises several ethical questions. We must ask ourselves if this research can ever be socially neutral given the eugenic-Galtonian history underpinning it.</p>&#13; &#13; <p>This kind of research could have an impact on <a href="https://nautil.us/super_intelligent-humans-are-coming-235110/">human genetic engineering</a> and the choices parents make when deciding to have children. It could give parents with the money and desire to do so the option to make their offspring “smarter”. Though genetically engineering intelligence may appear to be in the realm of science fiction, if the genes associated with intelligence are identified, it could become a reality.</p>&#13; &#13; <p>Some <a href="https://www.wiley.com/en-gb/G+is+for+Genes%3A+The+Impact+of+Genetics+on+Education+and+Achievement-p-9781118482780">researchers</a> have suggested that schools which have a child’s genetic information could tailor the curriculum and teaching to create a system of “personalised learning”. 
But this could lead schools to expect certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups – and would raise questions of whether richer families would benefit most.</p>&#13; &#13; <p>Whether calling it “intelligence”, “cognitive ability”, or “IQ”, behavioural genetics research is still trying to identify the genetic markers for a trait that can predict, in essence, a person’s success in life. Given the history of this field of research, it’s vital it is conducted with an awareness of its possible ethical impact on all parts of society.</p>&#13; &#13; <p><em><strong><span><a href="https://theconversation.com/profiles/daphne-martschenko-238687">Daphne Martschenko</a>, PhD Candidate, <a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a></span></strong></em></p>&#13; &#13; <p><em><strong>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>.
Read the <a href="https://theconversation.com/genetics-what-it-is-that-makes-you-clever-and-why-its-shrouded-in-controversy-56115">original article</a>.</strong></em></p>&#13; &#13; <p><em>The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Daphne Martschenko (Faculty of Education) discusses the concept of intelligence and the drive to identify and quantify it.</p>&#13; </p></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>.
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Thu, 21 Apr 2016 11:00:00 +0000 Anonymous 171822 at Opinion: Governments should turn to academics for advice on radicalisation, religion and security /research/discussion/opinion-governments-should-turn-to-academics-for-advice-on-radicalisation-religion-and-security <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/151203paris.jpg?itok=wr4RnY9M" alt="Bataclan Paris attacks memorial" title="Bataclan Paris attacks memorial, Credit: Takver" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In August 1939, the operational head of Britain’s Government Code and Cypher School, Alastair Denniston, wrote to the Foreign Office about the need <a href="https://bletchleypark.org.uk/news/v.rhtm/Wartime_Office_where_US_Special_Relationship_was_Born-740078.html">to recruit “men of the professor type”</a> into the wartime code-breaking hub at Bletchley Park in order to help combat the Nazi threat.</p>&#13; &#13; <p>Following the horror of <a href="https://theconversation.com/uk/topics/paris-attacks-2015-22621">marauding attacks in Paris</a>, the British prime minister has announced he will be <a href="http://www.bbc.co.uk/news/uk-34836925">recruiting</a> a further 1,900 personnel to the Security and Intelligence Agencies. “Professors” may also be able to add value to these organisations and wider society.
The government should not forget the wealth of talent available within our universities to offer insight and depth to the judgments of decision-makers.</p>&#13; &#13; <p>In my capacity as champion to the Partnership for Conflict, Crime &amp; Security Research, I organised a <a href="https://www.paccsresearch.org.uk/news/policy-workshop-role-religion-contemporary-security-challenges/">workshop recently</a> where four leading academics discussed how best to get research on religion and contemporary security challenges in front of politicians, policymakers and the press, to help them deliver better service to the public. The academics were historian of <a href="http://www.islamicreformulations.net/">Muslim thought</a> Robert Gleave; Kim Knott, <a href="http://www.lancaster.ac.uk/fass/projects/ideology-and-uncertainty/">who researches</a> ideologies, beliefs and decision-making; Peter Morey, who <a href="https://muslimstrustdialogue.org/">explores trust</a> between Muslims and non-Muslims; and John Wolffe, <a href="https://mail.google.com/_/scs/mail-static/_/js/k=gmail.main.en.5EQ-zVMXp3w.O/m=m_i,t/am=PiPeQMD8v_cHcY1xQLP0lQp77z-_-0jxkYPH_ydMAJF1BfB_s_8H8G_QXrSFAg/rt=h/d=1/t=zcms/rs=AHGWq9BFrQJNqfwGF2QGWVl1cfW9-DCDTw">who works on</a> the interface between religion and security.
Religion is poorly understood, and while academic focus on definition can be dismissed as pedantry, there is a need for clarity when talking about religion and security – to avoid millions of devout people around the world being swept into a bucket labelled “terrorist”.</p>&#13; &#13; <h2>Improve religious literacy</h2>&#13; &#13; <p>For instance, <a href="https://www.open.ac.uk/arts/research/religion-martyrdom-global-uncertainties/sites/www.open.ac.uk.arts.research.religion-martyrdom-global-uncertainties/files/files/ecms/arts-rmgu-pr/web-content/Religion-Security-Global-Uncertainties.pdf">research</a> helps us to draw a distinction between religion and faith. Religion is defined by creed, doctrine, framework and practice; whereas faith is more personal, abstract, emotional and often at some distance from the teachings of established religious institutions.</p>&#13; &#13; <p>We must improve religious literacy among politicians, policymakers, the press and the general public. In a security context, this should include a more nuanced understanding of the variants of institutionalised religion, while comprehending the universe occupied by men and women of faith.</p>&#13; &#13; <p>A single office of responsibility in the government could act as a conduit for informing and shaping policy and legislation relating to religion and religious issues, including those linked to security and violence. An immediate priority for the office should be to inform efforts to address radicalisation, <a href="https://theconversation.com/after-paris-europe-must-lead-the-fight-against-islamophobia-50808">Islamophobia</a> and other forms of prejudice. 
This wouldn’t carry any extra cost if one of the government’s chief scientific advisors was asked to undertake this work, tapping into the wealth of expertise addressing these issues inside the nation’s universities.</p>&#13; &#13; <p>Opinion-formers, including those in the press, <a href="https://www.iengage.org.uk/live-casinos/">must also resist</a> the simplistic temptation to describe religion as the motive for acts of violence. In the same way, “Third World” insurgents during the Cold War, such as those in North Vietnam, were too easily defined by the Communist ideology they embraced.</p>&#13; &#13; <h2>How to dispel alienation</h2>&#13; &#13; <p>But closer attention needs to be paid to the relationship between faith and alienation. There is a wealth of research – including historian <a href="https://research.manchester.ac.uk/en/persons/8d006dc4-285f-41b1-a422-d2d2af123d0f">Kate Cooper’s work</a> into the radicalisation of early Christian martyrs over 1,500 years ago – that can help us understand how alienation, especially of young people, leads to a sense of hopelessness that translates all too readily into violent resolve.</p>&#13; &#13; <p>We must galvanise support for the public sector, faith groups and charities to promote engagement between polarised communities. But this is not a simple matter of issuing a commandment from on high that “thou shalt engage in mutually informative dialogue and develop trustful relationships”.</p>&#13; &#13; <p>Evidence and experience, for instance <a href="https://www.open.ac.uk/arts/research/religion-martyrdom-global-uncertainties/sites/www.open.ac.uk.arts.research.religion-martyrdom-global-uncertainties/files/files/ecms/arts-rmgu-pr/web-content/Religion-Security-Global-Uncertainties.pdf">from Northern Ireland</a>, show how different the certainties of macro-political strategies can be from micro-realities, leading to communities being filled with mistrust and disillusionment.
Interventions tailored to dispel alienation and build trust must reflect local circumstances, with a strong emphasis on “bottom-up” rather than “top-down” solutions.</p>&#13; &#13; <p>There are some powerful examples of how the arts can operate to communicate religious difference in our complex, multicultural society, but common artistic endeavour can also help heal divisions. For example, the UK-based <a href="http://theberakahproject.org/project/the-berakah-multi-faith-choir/">Berakah Choir</a> works to transcend barriers of faith and culture through collaborative activities, allowing the individual voice to be heard working in harmony with others to build a common humanity. There is much that could be achieved at a low cost to harness the arts to counter alienation.</p>&#13; &#13; <h2>Draw on academics as an asset</h2>&#13; &#13; <p>Western governments are deploying a range of strategies and tactics to deal with the threat posed by the so-called Islamic State. David Cameron is recruiting more spies, and parliament is <a href="https://theconversation.com/investigatory-powers-bill-will-remove-isps-right-to-protect-your-privacy-50178">discussing profound changes</a> to the way in which digital intelligence is collected.</p>&#13; &#13; <p> </p>&#13; &#13; <figure class="align-center "><img alt="" src="https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/104142/width668/image-20151202-22476-1lwz2fi.jpg" /><figcaption><span class="caption">Great minds were brought together at Bletchley.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/mwichary/2189535149/sizes/l">Marcin Wichary/flickr.com</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p> </p>&#13; &#13; <p>But we must not ignore the invaluable supply of knowledge and insight available from our men and women in academia.
Research can provide evidence-based context to contemporary challenges, including an enlightened understanding of the place of religion and faith in a security context.</p>&#13; &#13; <p>We can stop mistakes being made through misguided policies and knee-jerk reactions. And researchers can help the design and deployment of interventions that make a real difference, focusing limited resources effectively.</p>&#13; &#13; <p>It has been said that the scholars working in Bletchley Park saved countless lives and took one or more years off the duration of World War II. Let us hope that politicians, policy-makers and the press are enlightened enough to make full use of the contribution that university researchers can make to today’s security challenges.</p>&#13; &#13; <p><em><strong><span><a href="https://theconversation.com/profiles/tristram-riley-smith-210207">Tristram Riley-Smith</a>, Associate Fellow, Centre for Science and Policy; Director of Research, Department of Politics &amp; International Studies, <a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a></span></strong></em></p>&#13; &#13; <p><em><strong>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>.
Read the <a href="https://theconversation.com/governments-should-turn-to-academics-for-advice-on-radicalisation-religion-and-security-51641">original article</a>.</strong></em></p>&#13; &#13; <p><em>The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Tristram Riley-Smith (Department of Politics and International Studies) discusses how universities and academics can add insight and depth to national security decisions.</p>&#13; </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/takver/22769318493/in/photolist-AG3H7M-BtTgsb-AG3GB8-be1mPx-6meTaM-73sbgA-9vDMWb-9vQtu5-7EJKB3-7EJKjy-7EJJmf-Ag3X2w-eVo8ur-eVo8m4-eVzxbj-eVzx7J-eVzwXf-eVo7Vx-eVo7MT-eVo7Ee-8xEtHj-8xEtFu-8xEtDs-8xBrGX-7vytHS-9P15Ga-9w6qJv-9w9ePQ-9w9dqu-9w9c9A-9w98W7-kPwc4V-9vMo7H-5npUNz-5nubrw-5nubqG-6jrnr8-9w6KJR-9w9FCN-9w6DdX-9w6BCP-9yyLDL-9w48dL-9w9KAS-9w6H8k-9w9HXJ-9w9HB7-9w6FiP-9w9GZG-9w6EeR" target="_blank">Takver</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Bataclan Paris attacks memorial</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative
Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div> Thu, 03 Dec 2015 15:30:10 +0000 Anonymous 163642 at