University of Cambridge - Christopher Markou Opinion: Neuralink wants to wire your brain to the internet – what could possibly go wrong? <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/8384110298b0bc7d6435o.jpg?itok=vA1t-E-w" alt="Maintaining Brain Health" title="Maintaining Brain Health, Credit: A Health Blog" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="https://theconversation.com/elon-musk-wants-to-merge-man-and-machine-heres-what-hell-need-to-work-out-75321">Neuralink</a> – which is “developing ultra high bandwidth brain-machine interfaces to <a href="https://neuralink.com">connect humans and computers</a>” – is probably a bad idea. If you understand the science behind it, and that’s all you wanted to hear, you can stop reading. <img alt="The Conversation" height="1" src="https://counter.theconversation.edu.au/content/76180/count.gif?distributor=republish-lightbox-basic" width="1" /></p>&#13; &#13; <p>But this is an absurdly simple narrative to spin about Neuralink, and an unhelpful attitude to have when it comes to understanding the role of technology in the world around us and what we might do about it. 
It’s easy to be cynical about everything Silicon Valley does, but sometimes it comes up with something so compelling, fascinating and confounding that it can neither be dismissed nor embraced uncritically.</p>&#13; &#13; <p>Putting aside the hyperbole and <a href="https://freethoughtblogs.com/pharyngula/2017/04/24/more-criticism-of-neuralink/">hand-wringing</a> that usually follow announcements like this, Neuralink is a massive idea. It may fundamentally alter how we conceive of what it means to be human and how we communicate and interact with our fellow humans (and non-humans). It might even represent the next step in human evolution.</p>&#13; &#13; <h2>Neurawhat?</h2>&#13; &#13; <p>But what exactly is Neuralink? If you have time to read a brilliant 36,400-word explainer by genius Tim Urban, then you can do so <a href="https://waitbutwhy.com/2017/04/neuralink.html">here</a>. If you don’t, Davide Valeriani has done an excellent summary on <a href="https://theconversation.com/elon-musk-wants-to-merge-man-and-machine-heres-what-hell-need-to-work-out-75321">The Conversation</a>. However, to borrow a few of Urban’s words, Neuralink is a “wizard hat for your brain”.</p>&#13; &#13; <p>Essentially, Neuralink is a company purchased by Elon Musk, the visionary-in-chief behind Tesla, SpaceX and Hyperloop. But it’s the company’s product that really matters. Neuralink is developing a “<a href="https://computer.howstuffworks.com/brain-computer-interface.htm">whole brain interface</a>”, essentially a network of tiny electrodes linked to your brain that the company envisions will allow us to communicate wirelessly with the world. It would enable us to share our thoughts, fears, hopes and anxieties without demeaning ourselves with written or spoken language.</p>&#13; &#13; <p>One consequence of this is that it would allow us to be connected at the biological level to the internet. 
But it’s who would be connecting back with us, how, where, why and when that are the real questions.</p>&#13; &#13; <p>Through his <a href="https://www.tesla.com/en_GB/">Tesla</a> and <a href="https://www.spacex.com/">SpaceX</a> ventures, Musk has already ruffled the feathers of some formidable players; namely, the auto, oil and gas industries, not to mention the <a href="https://www.youtube.com/watch?v=Gg-jvHynP9Y">military-industrial complex</a>. These are feathers that mere mortals dare not ruffle; but Musk has demonstrated a brilliance, stubborn persistence and a knack for revenue generation (<a href="https://fortune.com/2016/09/09/tesla-profits-musk/">if not always profitability</a>) that emboldens resolve.</p>&#13; &#13; <p>However, unlike Tesla and SpaceX, Neuralink operates in a field where there aren’t any other major players – for now, at least. But Musk has now fired the starting gun for competitors and, <a href="https://waitbutwhy.com/2017/04/neuralink.html">as Urban observes</a>, “an eventual neuro-revolution would disrupt almost every industry”.</p>&#13; &#13; <h2>Part of the human story</h2>&#13; &#13; <p>There are a number of technological hurdles between Neuralink and its ultimate goal. There is reason to think they can surmount these, and reason to think they won’t.</p>&#13; &#13; <p>While Neuralink may ostensibly be lumped in with other AI/big data companies in its branding and general desire to bring humanity kicking and screaming into a brave new world of their making, what it’s really doing isn’t altogether new. Instead, it’s how it’s going about it that makes Neuralink special – and a potentially major player in the next chapter of the human story.</p>&#13; &#13; <p>Depending on who you ask, the human story generally goes like this. First, we discovered fire and developed oral language. We turned oral language into writing, and eventually we found a way to turn it into mechanised printing. 
After a few centuries, we happened upon this thing called electricity, which gave rise to telephones, radios, TVs and eventually personal computers, smartphones – and ultimately the <a href="https://www.juicero.com">Juicero</a>.</p>&#13; &#13; <p>Over time, phones lost their cords, computers shrank in size and we figured out ways to make them exponentially more powerful, and portable enough to fit in pockets. Eventually, we created virtual realities, and melded our sensate reality with an augmented one.</p>&#13; &#13; <p>But if Neuralink were to achieve its goal, it’s hard to predict how this story plays out. The result would be a “whole-brain interface” so complete, frictionless, bio-compatible and powerful that it would feel to users like just another part of their cerebral cortex, limbic and central nervous systems.</p>&#13; &#13; <p>A whole-brain interface would give your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone who has a similar interface in their head. This flow of information between your brain and the outside world would be so easy it would feel the same as your thoughts do right now.</p>&#13; &#13; <p>But if that sounds extraordinary, so are the potential problems. First, Neuralink is not like putting an <a href="https://www.epilepsy.com/treatment/devices/vagus-nerve-stimulation-therapy">implant in your head designed to manage epileptic seizures</a>, or a pacemaker in your heart. This would be elective surgery on (presumably) healthy people for non-medical purposes. Right there, we’re in a completely different ballpark, both legally and ethically.</p>&#13; &#13; <p>There seems to be only one person who has done such a thing, and that was a bonkers publicity stunt conducted by a Central American scientist <a href="https://www.technologyreview.com/2015/11/09/247535/to-study-the-brain-a-doctor-puts-himself-under-the-knife/">using himself as a research subject</a>. 
He’s since suffered life-threatening complications. Not a ringing endorsement, but not exactly a condemnation of the premise either.</p>&#13; &#13; <p>Second, because Neuralink is essentially a communications system, there is the small matter of regulation and control. Regardless of where you stand on the whole privacy and surveillance issue (remember <a href="https://edwardsnowden.com">Edward Snowden</a>?), I cannot imagine a scenario in which there would not be an endless number of governments, advertisers, insurers and marketing folks looking to tap into the very biological core of our cognition to use it as a means of thwarting evildoers and selling you stuff. And what’s not to look forward to with that?</p>&#13; &#13; <p>And what if the tech normalises to such a point that it becomes mandatory for future generations to have a whole-brain implant at birth to combat illegal or immoral behaviour (however defined)? This obviously opens up a massive set of questions that go far beyond the technical hurdles that might never be cleared. It nonetheless matters that we think about them now.</p>&#13; &#13; <h2>Brain security</h2>&#13; &#13; <p>There’s also the issue of security. If we’ve learned one thing from this era of “smart” everything, it’s that “smart” means exploitable. 
Whether it’s your <a href="https://www.networkworld.com/article/2976270/internet-of-things/smart-refrigerator-hack-exposes-gmail-login-credentials.html">fridge</a>, your <a href="https://www.forbes.com/forbes/welcome/?toURL=https://www.forbes.com/sites/thomasbrewster/2017/03/07/cia-wikileaks-samsung-smart-tv-hack-security/&amp;refURL=https://www.google.com/&amp;referrer=https://www.google.com/">TV</a>, <a href="https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/">your car</a>, or your <a href="http://www.bbc.com/news/business-37551633">insulin pump</a>, once you connect something to something else you’ve just opened up a means for it to be compromised.</p>&#13; &#13; <p>What it really all comes down to is this: across a number of fields at the intersection of law, philosophy, technology and society, we are going to need answers to questions no one has yet thought of asking (at least not often enough, and for the right reasons). We have faced, are facing, and will face incredibly complex and overwhelming problems that we may well not like the answers to. But it matters that we ask good questions early and often. If we don’t, they’ll be answered for us.</p>&#13; &#13; <p>And so Neuralink is probably a bad idea, but to the first person who fell into a firepit, so was fire. On a long enough timeline even the worst ideas need to be reckoned with early on. Now who wants a Juicero?</p>&#13; &#13; <p><em>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>. 
Read the <a href="https://theconversation.com/neuralink-wants-to-wire-your-brain-to-the-internet-what-could-possibly-go-wrong-76180">original article</a>.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>A company in Silicon Valley claims to be developing a “whole brain interface” for communicating wirelessly with the world. <br />&#13; Christopher Markou from the Faculty of Law isn't overly keen...</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Across a number of fields at the intersection of law, philosophy, technology and society we are going to need answers to questions no one has yet thought of asking </div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Christopher Markou</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/healthblog/8384110298" target="_blank">A Health Blog</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Maintaining Brain Health</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative 
Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-sharealike">Attribution-ShareAlike</a></div></div></div> Wed, 03 May 2017 11:28:18 +0000 Opinion: We could soon face a robot crimewave … the law needs to be ready <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/59512488751236304a7bb.jpg?itok=z9nmStB_" alt="robot hand" title="robot hand, Credit: University of Washington Office of News and Information" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This is where we are at in 2017: sophisticated algorithms are both <a href="https://www.technologyreview.com/2014/11/13/170454/rise-of-the-robot-security-guards/">predicting</a> and <a href="https://www.ft.com/content/55f3daf4-ee1a-11e6-ba01-119a44939bb6">helping to solve crimes committed by humans</a>; <a href="https://theconversation.com/what-ai-can-tell-us-about-the-u-s-supreme-court-55352">predicting the outcome of court cases</a> and <a href="https://www.ucl.ac.uk/news/2016/oct/ai-predicts-outcomes-human-rights-trials">human rights trials</a>; and helping to do the <a 
href="https://www.ft.com/content/5d96dd72-83eb-11e6-8897-2359a58ac7a5">work done by lawyers</a> in those cases. By 2040, there is even a suggestion that sophisticated robots will be <a href="https://www.raconteur.net/fighting-fraud-2016/is-future-cyber-crime-a-nightmare-scenario">committing a good chunk of all the crime in the world</a>. Just ask the toddler who was <a href="https://www.huffpost.com/entry/security-robot-toddler_n_57863670e4b03fc3ee4e8f3a">run over by a security robot</a> at a California mall last year. <img alt="The Conversation" height="1" src="https://counter.theconversation.edu.au/content/75276/count.gif?distributor=republish-lightbox-basic" width="1" /></p>&#13; &#13; <p>How do we make sense of all this? Should we be terrified? Generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.</p>&#13; &#13; <p>Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.</p>&#13; &#13; <p>Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around. 
Whether it’s a military drone with a full payload, a law enforcement robot <a href="https://www.theatlantic.com/news/archive/2016/07/dallas-police-robot/490478/">exploding to kill a dangerous suspect</a> or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.</p>&#13; &#13; <p>There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a <a href="https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-darknet-shopper-ecstasy-deep-web">robot was arrested</a> (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash after <a href="https://www.wired.com/2017/01/probing-teslas-deadly-crash-feds-say-yay-self-driving/">his Tesla was in autopilot</a>.</p>&#13; &#13; <p>While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright Brothers took the <a href="https://airandspace.si.edu/explore/stories/wright-brothers">Kitty Hawk for a joyride</a>. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: <a href="https://www.cbr.cam.ac.uk/fileadmin/user_upload/centre-for-business-research/downloads/working-papers/wp424.pdf">law evolves</a>.</p>&#13; &#13; <h2>Robot guilt</h2>&#13; &#13; <p>The <a href="http://fs2.american.edu/dfagel/www/Class%20Readings/Hart/International%20Law%20Chapter%20From%20Concept%20of%20Law.pdf">role of the law can be defined in many ways</a>, but ultimately it is a system within society for stabilising people’s expectations. 
If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.</p>&#13; &#13; <p>But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “<a href="https://www.law.cornell.edu/wex/legal_person">legal persons</a>”.</p>&#13; &#13; <p>To varying extents, <a href="https://www.cliffordchance.com/content/dam/cliffordchance/PDFs/Corporate_Liability_in_Europe.pdf">companies are endowed with legal personhood</a>, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.</p>&#13; &#13; <p>The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants such as Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?</p>&#13; &#13; <h2>A guilty AI mind?</h2>&#13; &#13; <p>The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.</p>&#13; &#13; <p>Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or <a href="https://www.law.cornell.edu/wex/mens_rea">“<em>mens rea</em>”</a>. 
The idea behind <em>mens rea</em> is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/164855/width754/image-20170411-26736-125n6j2.jpg" /><figcaption><em><span class="caption">Blind justice for an AI.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/gold-lady-justice-statue-on-top-127526276?src=DFZSKGdRwUVTwgeyFSiT0w-1-25">Shutterstock</a></span></em></figcaption></figure><p><br />&#13; So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?</p>&#13; &#13; <p>Take driverless cars. Cars drive on roads and there are <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2016/573902/EPRS_BRI(2016)573902_EN.pdf">regulatory frameworks in place</a> to ensure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.</p>&#13; &#13; <p>As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.</p>&#13; &#13; <p>So can robots commit crime? In short: yes. 
If a robot kills someone, then it has committed a crime (<em>actus reus</em>), but technically only half a crime, as it would be far harder to determine <em>mens rea</em>. How do we know the robot intended to do what it did?</p>&#13; &#13; <p>For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and <a href="https://www.theguardian.com/technology/2016/dec/19/discrimination-by-algorithm-scientists-devise-test-to-detect-ai-bias">discriminatory</a> <a href="https://www.technologyreview.com/2016/10/07/244656/algorithms-probably-caused-a-flash-crash-of-the-british-pound/">algorithmic mischief</a> already abounds.</p>&#13; &#13; <p>Play along with me: imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task then is not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of <em>mens rea</em>.</p>&#13; &#13; <p>But what would we need to prove the existence of <em>mens rea</em>? Could we simply cross-examine the AI like we do a <a href="https://www.youtube.com/watch?v=bJF-IRbTh0Q">human defendant</a>? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.</p>&#13; &#13; <p>And what would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in <a href="https://sites.tufts.edu/cogstud/">killing a human in self-defense</a> or the extent of premeditation?</p>&#13; &#13; <p>Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. 
Two London-based artists <a href="https://www.theguardian.com/technology/2014/dec/05/software-bot-darknet-shopping-spree-random-shopper">created a bot that purchased random items off the dark web</a>. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?</p>&#13; &#13; <p>Maybe. But what if the bot “decided” to make the purchases itself?</p>&#13; &#13; <h2>Robo-jails?</h2>&#13; &#13; <p>Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s pleasure. And what would building “remorse” into machines say about us as their builders?</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/164856/width754/image-20170411-26720-mdvnhs.jpg" /><figcaption><em><span class="caption">Would robot wardens patrol robot jails?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/drone-monitoring-barbed-wire-fence-on-562264267?src=isb7kjpWe7kvF0hy2qq4og-1-0">Shutterstock</a></span></em></figcaption></figure><p><br />&#13; What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.</p>&#13; &#13; <p>AI has already helped with emergent concepts in medicine, and we are learning things about <a href="https://www.popsci.com/how-scientists-will-use-artificial-intelligence-to-find-aliens/">the universe</a> with AI systems that even an 
army of Stephen Hawkings might not reveal.</p>&#13; &#13; <p>The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.</p>&#13; &#13; <p>At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.</p>&#13; &#13; <p><span><a href="https://theconversation.com/profiles/christopher-markou-341005">Christopher Markou</a>, PhD Candidate, Faculty of Law, <em><a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a></em></span></p>&#13; &#13; <p>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>. Read the <a href="https://theconversation.com/we-could-soon-face-a-robot-crimewave-the-law-needs-to-be-ready-75276">original article</a>.</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Are robots capable of committing crime? 
Yes, says Christopher Markou, PhD Candidate at the Faculty of Law, writing for The Conversation – but what should we do if they do?</p>&#13; </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/uwnews/5951248875/" target="_blank">University of Washington Office of News and Information</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">robot hand</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Sun, 16 Apr 2017 15:15:01 +0000 Opinion: Robots and AI could soon have feelings, hopes and rights … we must prepare for the reckoning <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/convo_2.jpg?itok=iNySSXmq" alt="" title="Credit: None" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Get used to hearing a lot more about artificial intelligence. Even if you discount the utopian and dystopian hyperbole, the 21st century will broadly be defined not just by advancements in artificial intelligence, robotics, computing and cognitive neuroscience, but by how we manage them. For some, <a href="https://futureoflife.org/ai/benefits-risks-of-artificial-intelligence/">the question of whether or not the human race will live to see a 22nd century</a> turns upon this latter consideration. While <a href="https://ai100.stanford.edu/sites/default/files/ai100report10032016fnl_singles.pdf">forecasting the imminence of an AI-centric future</a> remains a matter of intense debate, we will need to come to terms with it. 
For now, there are many more questions than answers. <img alt="The Conversation" height="1" src="https://counter.theconversation.edu.au/content/73462/count.gif?distributor=republish-lightbox-basic" width="1" /></p>&#13; &#13; <p>It is clear, however, that the European Parliament is making inroads towards taking an AI-centric future seriously. Last month, in a 17-2 vote, the parliament’s legal affairs committee voted to begin drafting a set of regulations to govern the development and use of artificial intelligence and robotics. Included in this <a href="https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect">draft proposal</a> is preliminary guidance on what it calls “electronic personhood” that would ensure corresponding rights and obligations for the most sophisticated AI. This is a start, but nothing more than that.</p>&#13; &#13; <p>If you caught any of the debate on the issue of “electronic” or “robot” personhood, you probably understand how murky the issues are, and <a href="https://www.theguardian.com/technology/2017/jan/16/giving-rights-to-robots-is-a-dangerous-idea">how visceral reactions to it can be</a>. If you have not caught any of it, now is a good time to start paying attention.</p>&#13; &#13; <p>The idea of robot personhood is similar to the concept of <a href="https://www.theatlantic.com/politics/archive/2015/02/if-corporations-are-people-they-should-act-like-it/385034/">corporate personhood</a> that allows companies to take part in legal cases as both claimant and respondent – that is, to sue and be sued. 
The report identifies a number of areas for potential oversight, such as the formation of a European agency for AI and robotics, a legal definition of “smart autonomous robots”, a registration system for the most advanced ones, and a mandatory insurance scheme for companies to cover damage and harm caused by robots.</p>&#13; &#13; <p>The report also addresses the possibility that both AI and robotics will play a central role in catalysing massive job losses and calls for a “serious” assessment of <a href="https://www.theguardian.com/society/2017/feb/19/basic-income-finland-low-wages-fewer-jobs">the feasibility of universal basic income</a> as a strategy to minimise the economic effects of <a href="https://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work_2.pdf">mass automation of entire economic sectors</a>.</p>&#13; &#13; <h2>We, Robots</h2>&#13; &#13; <p>As daunting as these challenges are – and they are certainly not made any more palatable given the increasingly woeful state of geopolitics – lawmakers, politicians and courts are only beginning to skim the surface of what sort of problems, and indeed opportunities, artificial intelligence and robotics pose. Yes, driverless cars are problematic, but only in a world where traditional cars exist. 
Get them off the road, and a city, state, nation, or continent populated exclusively by driverless cars is essentially a really, really elaborate railway signalling network.</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/158152/width754/image-20170223-32714-1czx7re.jpg" style="height: 377px; width: 565px;" /><figcaption><span class="caption">Artificial minds will need very real rights.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/illustration-thought-processes-brain-340384811">Shutterstock</a></span></figcaption></figure><p><br />&#13; I cannot here critique the feasibility of things such as general artificial intelligence, or even the Pandora’s Box that is <a href="http://shanghailectures.org/sites/default/files/uploads/2013_Sandberg_Brain-Simulation_34.pdf">Whole Brain Emulation</a> – whereby an artificial, software-based copy of a human brain is made that functions and behaves identically to the biological one. So let’s just assume their technical feasibility and imagine a world where both bespoke sentient robots and robotic versions of ourselves imbued with perfect digital copies of our brains go to work and “<a href="https://www.theguardian.com/media/shortcuts/2015/sep/29/how-netflix-and-chill-became-code-for-casual-sex">Netflix and chill</a>” with us.</p>&#13; &#13; <p>It goes without saying that the very notion of making separate, transferable, editable copies of human beings embodied in robotic form poses both conceptual and practical legal challenges. For instance, basic principles of contract law would need to be updated to accommodate contracts where one of the parties existed as a digital copy of a biological human.</p>&#13; &#13; <p>Would a contract in Jane Smith’s name, for example, apply to both the biological Jane Smith and her copy? On what basis should it, or should it not? 
The same question would also need to be asked in regard to marriages, parentage, economic and property rights, and so forth. If a “robot” copy was actually an embodied version of a biological consciousness that had all the same experiences, feelings, hopes, dreams, frailties and fears as its originator, on what basis would we deny rights to that copy under existing human rights regimes? This sounds absurd, but it is nonetheless an absurdity that may soon be reality, and that means we cannot afford to laugh it off or overlook it.</p>&#13; &#13; <p>There is also the question of what fundamental rights a copy of a biological original should have. For example, how should democratic votes be allocated when copying people’s identities into artificial bodies or machines becomes so cheap that an extreme form of “ballot box stuffing” – by making identical copies of the same voter – becomes a real possibility?</p>&#13; &#13; <p>Should each copy be afforded their own vote, or a fractional portion determined by the number of copies that exist of a given person? If a robot is the property of its “owner”, should it have any greater moral claim to a vote than, say, your cat? Would rights be transferable to back-up copies in the event of the biological original’s death? What about when copying becomes so cheap, quick, and efficient that entire voter bases could be created at the whim of deep-pocketed political candidates, each with their own moral claim to a democratic vote?</p>&#13; &#13; <p>How do you feel about a voter base composed of one million robotic copies of <a href="https://www.nytimes.com/2017/02/21/opinion/milo-is-the-mini-donald.html">Milo Yiannopoulos</a>? Remember all that discussion in the US about phantom voter fraud? Well, imagine that on steroids. What sort of democratic interests would non-biological persons have, given that they would likely not be susceptible to ageing, infirmity, or death? 
Good luck sleeping tonight.</p>&#13; &#13; <h2>Deep thoughts</h2>&#13; &#13; <p>These are incredibly fascinating things to speculate on and will certainly lead to major social, legal, political, economic and philosophical changes should they become live issues. But it is because they are increasingly likely to be live issues that we should begin thinking more deeply about AI and robotics than just driverless cars and jobs. If you take any liberal human rights regime at face value and apply a strict interpretation of the conceptual and philosophical foundations on which it rests, you are almost certainly led to the conclusion that, yes, sophisticated AIs should be granted human rights.</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/158154/width754/image-20170223-32718-1v5yh25.jpg" style="height: 352px; width: 565px;" /><figcaption><span class="caption">Who will win the AI vote?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/humanoid-robot-clicking-network-computer-3d-451280680?src=qif1ZsQmAFjCqn44NlViPA-1-5">Shutterstock</a></span></figcaption></figure><p><br />&#13; Why then is it so hard to accept this conclusion? What is it about this conclusion that makes so many feel uneasy, uncomfortable or threatened? Humans have enjoyed an exclusive claim to biological intelligence, and we use ourselves as the benchmark against which all other intelligence should be judged. At one level, people feel uneasy about the idea of robotic personhood because granting rights to non-biological persons means that we as humans would become a whole lot less special.</p>&#13; &#13; <p>Indeed, our most deeply ingrained religious and philosophical traditions revolve around the very idea that we are in fact beautiful and unique snowflakes imbued with the spark of life and abilities that allow us to transcend other species. 
That’s understandable, even if you could find any number of ways to take issue with it.</p>&#13; &#13; <p>At another level, the idea of robot personhood – particularly as it relates to the example of voting – makes us uneasy because it leads us to question the resilience and applicability of our most sacrosanct values. This is particularly true in a time of “fake news”, “alternative facts”, and the gradual erosion of the once proud edifice of the liberal democratic state. With each new advancement in AI and robotics, we are brought closer to a reckoning not just with ourselves, but over whether our laws, legal concepts, and the historical, cultural, social and economic foundations on which they are premised are truly suited to addressing the world as it will be, not as it once was.</p>&#13; &#13; <p>The choices and actions we take today in relation to AI and robotics have path-dependent implications for what we can choose to do tomorrow. It is incumbent upon all of us to engage with what is going on, to understand its implications and to begin to reflect on whether efforts such as the European Parliament’s are nothing more than pouring new wine into old wineskins. There is no science of futurology, but we can better see the future and understand where we might end up in it by focusing more intently on the present and the decisions we have made as a society when it comes to technology.</p>&#13; &#13; <p>When you do that, you realise we as a society have made no real democratic decisions about technology; we have more or less been forced to accept that certain things enter our world and that we must learn to harness their benefits or get left behind and, of course, deal with their fallout. 
Perhaps the first step, then, is not to take laws and policy proposals as the jumping-off point for how to “deal” with AI, but instead start thinking more about correcting the democratic deficit as to whether we as a society, or indeed a planet, really want to inherit the future Silicon Valley and others want for us.</p>&#13; &#13; <p><em>To hear more about the future of AI and whether robots will take our jobs, listen to episode 10 of The Conversation’s monthly podcast, The Anthill – which is all <a href="https://theconversation.com/uk/topics/the-conversation-documentaries-podcast-formerly-the-anthill-27460">about the future</a>.</em></p>&#13; &#13; <p><span><a href="https://theconversation.com/profiles/christopher-markou-341005">Christopher Markou</a>, PhD Candidate, Faculty of Law, <em><a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a></em></span></p>&#13; &#13; <p>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>. Read the <a href="https://theconversation.com/robots-and-ai-could-soon-have-feelings-hopes-and-rights-we-must-prepare-for-the-reckoning-73462">original article</a>.</p>&#13; 
Christopher Markou, a PhD candidate at the Faculty of Law, suggests an urgent need to start considering the answers.</p>&#13; </p></div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Tue, 28 Feb 2017 12:07:33 +0000 ljm67 185542 at