Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it's time for new thinking about new technology.

With penalties including fines of up to €20 million, people are realising that they need to take data protection much more seriously

Jat Singh

Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new initiative, which brings together science, technology and humanities researchers from across the University.

In fact, Singh, a researcher in Cambridge's Department of Computer Science and Technology, has been collaborating with lawyers for several years: "A legal perspective is paramount when you're researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon."

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a 'right to an explanation' regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. "With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously," he says.

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the 'Internet of Things' continues to instrument the physical world, machines will increasingly mediate and influence our lives.

It's a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: "We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they're doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law."

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

"Not long ago, many markets were traded on exchanges by people in pits screaming and yelling," Weller recalls. "Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society."

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars was wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. "The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict," he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the 'black box' of machine learning systems, both as individuals and as a society, is clearly important. "There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge," says Weller. "Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them."

But even if we can lift the lid on these black boxes, how do we interpret what's going on inside? "There are many kinds of transparency," he explains. "A user contesting a decision needs a different kind of transparency from a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car."
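To make that distinction concrete, here is a minimal sketch of one form of user-facing transparency: training an interpretable model and reporting how each feature pushed a particular decision up or down. This is an illustration only, not the researchers' own tooling; the loan-style data, feature names and scikit-learn model are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions, not any specific deployed system):
# a per-decision explanation from an interpretable (linear) model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant data: columns are [income, debt, years_employed]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant          # how each feature pushed this decision
score = contributions.sum() + model.intercept_[0]   # log-odds of a favourable outcome

for name, c in zip(["income", "debt", "years_employed"], contributions):
    print(f"{name:>15}: {c:+.2f}")
print(f"decision score (log-odds): {score:+.2f} -> {'approve' if score > 0 else 'decline'}")
```

A developer debugging the same system would instead look at training data, loss curves and failure cases – a quite different window onto the same model.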

If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it 'thinks' we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that "black-sounding" names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. "It's a worry," he acknowledges. "And people sometimes stop there – they assume it's a case of garbage in, garbage out, end of story. In fact, it's just the beginning, because we're developing techniques that can automatically detect and remove some forms of bias."
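As a rough illustration of what 'detecting bias' can mean in practice – a sketch under simple assumptions, not the specific techniques Weller's group is developing – one common check is whether a model hands out favourable outcomes to different groups at similar rates, the so-called demographic parity gap:

```python
# Minimal sketch: measuring a demographic parity gap (illustrative data only).
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favourable outcome) and group labels
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(f"demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
# A gap near 0 means both groups receive favourable outcomes at similar rates;
# mitigation techniques then adjust the model or its threshold to shrink the gap.
```

Detection is only the first step; which notion of fairness to enforce, and at what cost to accuracy, remains a value judgement rather than a purely technical one.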

Transparency, reliability and trustworthiness are at the core of Weller's work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: "Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes."

And academics must keep talking as well as thinking. "We're grappling with pressing and important issues," he concludes. "As technical experts we need to engage with society and talk about what we're doing so that policy makers can try to work towards policy that's technically and legally sensible."

Inset image: read more about our AI research in the University's research magazine.

Want to hear more?

Join us at the Cambridge Science Festival to hear Adrian Weller discuss how we can ensure AI systems are transparent, reliable and trustworthy.

Thursday 15 March 2018, 7:30pm – 8:30pm

Mill Lane Lecture Rooms, 8 Mill Lane, Cambridge, UK, CB2 1RW


