
Are robots capable of committing crime? Yes, says Christopher Markou, PhD Candidate at the Faculty of Law, writing for The Conversation - but what should we do when they do?
This is where we are at in 2017: sophisticated algorithms are taking on an ever-wider range of tasks once reserved for humans, and are even helping to decide the outcome in legal cases. By 2040, there is even a suggestion that sophisticated robots will be committing more crime than humans. Just ask the toddler who was knocked down by a security robot at a California mall last year.
How do we make sense of all this? Should we be terrified? Fear is generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.
Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it's new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment: they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.
Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around us, whether it's a military drone with a full payload, a law enforcement robot, or something altogether more innocent that causes harm through accident, error, oversight, or good ol' fashioned stupidity.
There's a cynical saying in law that "where there's blame, there's a claim". But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let's not forget that a robot was "arrested" (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while the car's Autopilot system was engaged.
While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright Brothers took to the skies. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: the law adapts.
Robot guilt
The law is many things, but ultimately it is a system within society for stabilising people's expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.
But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit 鈥 and so humans are considered by the law to be "legal persons".
To varying extents, the law treats companies as "legal persons", too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.
The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants like Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?
A guilty AI mind?
The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission: the "guilty act", or actus reus.
Second, criminal law requires that an accused is culpable for their actions. This is known as a "guilty mind" or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the "guilty mind" of a non-human? Can this be done by referring to and adapting existing legal principles?
Take driverless cars. Cars drive on roads, and regulations exist to ensure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.
As AI technology evolves, it will eventually reach a state of sophistication that allows it to bypass human control. As this becomes more widespread, the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of "robots gone awry", but the legal realities should not be left to Hollywood.
So can robots commit crime? In short: yes. If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?
For now, we are nowhere near the level of building a fully sentient or "conscious" humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and legal speculation already abounds.
Play along with me; just imagine that a Terminator-calibre AI exists, and that it commits a crime (let's say murder). The task is then not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.
But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine "tick".
And what would "intent" look like in a machine mind? How would we go about proving an autonomous machine was justified in self-defence, or establishing the extent of premeditation?
Let's go even further. After all, we're not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card 鈥 and it decided to buy contraband. This isn't fiction; it has happened. Two London-based artists built a bot to do exactly that. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?
Maybe. But what if the bot "decided" to make the purchases itself?
Robo-jails?
Even if you solve these legal issues, you are still left with the question of punishment. What's a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to "reflect" on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty's leisure. And what would building "remorse" into machines say about us as their builders?

What we are really talking about when we talk about whether or not robots can commit crimes is "emergence" 鈥 where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.
AI has already helped with emergent concepts in medicine, and we are learning things with AI systems that even an army of Stephen Hawkings might not reveal.
The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.
At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can't help but wonder how ready we are for the prospect of robot crime, given that we already struggle mightily to contain the crime committed by humans.
Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge
This article was originally published on The Conversation. Read the original article.
The text in this work is licensed under a Creative Commons Licence. For image use please see separate credits above.