In the shadow of Silicon Valley’s scandals, tech innovators face the task of developing moral principles to govern the software and algorithms that run our lives.
The past few weeks have seen multiple discussions about the need for responsible technology and a regulatory framework for the use of artificial intelligence (AI), particularly following the Facebook/Cambridge Analytica revelations.
In April, three events approached this from different angles, and they all focused on transparency, accountability and trust.
At TICTeC – the civic technology conference organised by mySociety – keynote speaker Martha Lane Fox discussed research findings from thinktank Doteveryone, underlining that for AI to be generally accepted, tech companies and platforms need to be more transparent about how they collect and use people’s data. She highlighted the need for digital education to help users understand what they are signing up for, and for regulation and industry standards that keep users safe and informed without stifling innovation or the fantastic opportunities that tech has to offer.
This was echoed at the House of Lords, where the Big Innovation Centre hosted a debate on the 74 recommendations of the Select Committee on Artificial Intelligence, whose report, AI in the UK: ready, willing and able?, was published on 16 April. Again, the focus was on responsibility, accountability and transparency; among the panellists was artificial general intelligence (AGI) safety expert Nick Bostrom. Dr Joanna Bryson of the University of Bath observed that if companies are made accountable for the outcomes of AI, transparency will become a desirable asset rather than a compliance cost.
And on 27 April, the Law Society’s AI and Ethics Summit brought the debate to legal services, with a keynote by Richard Susskind followed by three panel discussions covering the key legal dilemmas around AI: how to develop a multidisciplinary approach; the role of global standards and regulation; and no-go and must-go zones for AI.
These high-profile debates challenge the claim of some law firms that once AI goes mainstream it will ‘just be software’.
As Susskind said in his keynote, it is difficult for lawyers, who use case law to back up their intuition, to ask questions about what is right or wrong. In other words, law firms tend to be risk-averse and resist change. However, as Dr Nóra Ní Loideáin of the Institute of Advanced Legal Studies observed, society has dealt with new tech before and will again. And tech has already raised ethical issues, said Dr Adrian Weller of the University of Cambridge, although AI is different in terms of its trade-offs around personalisation (when software uses your data and online behaviour to show you information relevant to you) and its potential to make decisions that directly affect you based on your online profile and history (eg when this information is used for an insurance quote, a job application or by the criminal justice system).
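To make that trade-off concrete: at its simplest, personalisation ranks content by how closely it matches a profile built from a user’s past behaviour. The sketch below is a deliberately minimal, hypothetical illustration – the item names and weighting scheme are invented for this example – and not how any particular platform works.

```python
from collections import Counter

def build_profile(clicked_items):
    """Build a simple interest profile: tag frequencies from past clicks."""
    tags = [tag for item in clicked_items for tag in item["tags"]]
    return Counter(tags)

def score(item, profile):
    """Score an item by how often the user has engaged with its tags."""
    return sum(profile[tag] for tag in item["tags"])

# Hypothetical browsing history and candidate items.
history = [
    {"title": "AI ethics report", "tags": ["ai", "ethics"]},
    {"title": "GDPR explained", "tags": ["privacy", "law"]},
]
candidates = [
    {"title": "Algorithmic accountability", "tags": ["ai", "law"]},
    {"title": "Celebrity news", "tags": ["entertainment"]},
]

profile = build_profile(history)
ranked = sorted(candidates, key=lambda item: score(item, profile), reverse=True)
print([item["title"] for item in ranked])  # the legal/AI story ranks first
```

The same mechanism that surfaces a relevant article can, when fed into an insurance, recruitment or justice decision, produce exactly the consequential profiling the panellists were concerned about.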
This raises legal dilemmas. As Susskind said, while we might feel comfortable with Amazon using our data to recommend a book or film, or Spotify mixing up our musical preferences with those of our friends to create new playlists, we would probably feel less comfortable about an AI judge imposing a life sentence.
Professor Sylvie Delacroix of Birmingham University asked the audience whether they would be comfortable with a robot providing their personal care in their old age or teaching their children. The discussions underline that our level of comfort, which shifts with tech developments and digital awareness, ultimately depends on context and on our level of trust in tech. If we are uncomfortable with tech, we will reject it and its potential benefits to individuals, society and businesses.

Lane Fox is on the board of Twitter, which is working to become a ‘better place’ online, focusing on where it is useful, relevant and positive. At TICTeC, she expressed her hope that ‘responsible tech’ will become the ‘new normal’.
The Law Society quote of the day goes to Patricia Christias of Microsoft, for her comment in the global standards and regulation panel: ‘Computers aren’t unethical: humans are.’
Christias outlined the importance of addressing the human prejudices inherent in the data that machines use, and set out Microsoft’s principles, based on timeless values: privacy and security, safety, reliability, inclusiveness, transparency and accountability. Ben Gardner of Wavelength.law focused on the importance of effective data management: data is the lifeblood of AI, not all correlations are significant, and context is key. Raja Chatila of the Institute of Electrical and Electronics Engineers advocated setting industry standards, and Kay Firth-Butterfield of the World Economic Forum highlighted the necessity of agile governance structures with ethics at their core.
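Gardner’s caution that not all correlations are significant is easy to demonstrate. The snippet below is a generic statistics illustration (not anything from his talk): on a small sample, two genuinely unrelated variables can produce an impressively large correlation coefficient whose p-value is far too weak to support any conclusion.

```python
import random
from scipy import stats

random.seed(42)

# Two genuinely unrelated variables, only five observations each.
x = [random.random() for _ in range(5)]
y = [random.random() for _ in range(5)]

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.2f}")
# A sizeable |r| can easily appear by chance with so few data points;
# with p well above 0.05, the apparent "pattern" is noise, not signal.
```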
Agile governance with ethics at its core is precisely what the House of Lords select committee report recommends, in addition to specific standards and regulations for AI that take account of the opportunities as well as the challenges. Ultimately it is about representatives of different stakeholder communities – and legal is an important constituent – working together; co-creation of responsible tech; and collaboration between tech companies and civic organisations to make AI work for business and society.
IN THE (CANADIAN INNOVATION) ZONE
I had the honour of participating in the Canadian Corporate Counsel Association conference in Toronto. This was refreshingly hands-on for a legal event, with highly interactive panels and workshops bringing together a great mix of skills and insights. AI and ethics was an important focus. During our plenary discussion – Designing your Future: Building Blocks for the Ultimate In-House Counsel – Philippe Coen, vice-president and director, legal, at the Walt Disney Company in France, described the role of the in-house counsel as Jiminy Cricket, or ‘the conscience of the organisation’.
The last afternoon was devoted to the second edition of The Pitch, a start-up competition hosted by Jason Moyse of Law Made and Elevate Services and Martine Boucher, president and founder of Simplex Legal. The winner was Digitory Legal, a cost analytics and management platform founded by Catherine Krow, a former partner at Orrick. Digitory Legal will shortly be joining Mishcon de Reya’s MDR LAB 2018 cohort in London, so it is one to watch.
TAKING RESPONSIBILITY
Following the House of Lords Select Committee on Artificial Intelligence report, the Big Innovation Centre conducted a survey. Respondents felt governments and individuals should take primary responsibility for how personal data is used, but in order to make the UK ‘AI ready’ the government’s first priority should be to make organisations accountable for decisions made by algorithms.
This brings in ethical and legislative issues around AI regulation and responsibility. GDPR will help by tightening the rules around consent, but it does not get around the transparency and accountability issues. The committee’s recognition of the need for an appropriate ethical and legal framework for AI was echoed by the European Commission a week later.
The Law Society AI and Ethics Summit was titled ‘plotting a path to the unanswered questions’. Chancery Lane is setting up a new multidisciplinary Public Policy Technology and Law Commission, which will focus on the use of algorithms in the justice system and associated human rights implications.
Back in Toronto, three of the five finalists in The Pitch – Digitory Legal; Evichat, a mobile app which collects instant messages (texts, WhatsApp etc) as evidence and won the audience favourite award; and Founded, a platform which supports entrepreneurs’ legal needs – were residents of Ryerson University’s Legal Innovation Zone (LIZ), a co-working space that is the world’s first lawtech incubator.
Hersh Perlis, director of LIZ, emphasised that it is a genuine incubator: start-ups can stay for two years, after which they have to move on. In just under three years, LIZ has achieved an impressive record. One of its biggest successes is Diligen, a contract analysis tool which is winning Canadian and international clients. Although LIZ is sponsored by a law firm and attached to a university, it is business-focused, ensuring that its graduate companies are commercially viable when they leave the centre. Ryerson University’s new law school programme, launching in 2020, promises ‘a leading-edge, future-focused legal education’ that includes tech, legal innovation and the business of law.
Canadian AI contract analysis business Kira Systems, which recently joined Allen & Overy’s Fuse programme in London, has a global client base and has grown from 35 people in January 2017 to more than 90 today. Kira is a great example of transitional legal tech that is transforming the way legal services are delivered (by law firms and legal departments) into a new normal that includes intelligent automation. What differentiates Kira from the competition in the crowded contract analysis market? Co-founder Noah Waisberg explained that Kira’s machine-learning capabilities are at the heart of an agile system: firms have trained it to do tasks its designers never envisaged. One example is Freshfields Bruckhaus Deringer training Kira to analyse German lease agreements – out of the box, Kira does not work in German. Another firm has taught Kira to negotiate individual contracts, even though it was originally designed for volume tasks.
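Kira’s internals are proprietary, but the general pattern Waisberg describes – a trainable model that users extend with their own labelled examples – is standard supervised learning. Below is a minimal sketch of that idea using scikit-learn and invented example clauses: the same pipeline that learns English clause types can be retrained on German ones, because it learns from annotated examples rather than hand-coded rules.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled clauses; in practice a firm would supply
# hundreds of annotated examples per clause type.
texts = [
    "The tenant shall pay rent monthly in advance.",       # English lease
    "This agreement is governed by the laws of England.",  # governing law
    "Der Mieter zahlt die Miete monatlich im Voraus.",     # German lease
    "Dieser Vertrag unterliegt deutschem Recht.",          # governing law
]
labels = ["rent", "governing_law", "rent", "governing_law"]

# Character n-grams work across languages without language-specific tooling.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["Die Miete ist am ersten Werktag des Monats fällig."]))
```

Because the model is retrained rather than reprogrammed, users can push it into territory its designers never anticipated – which is exactly the point Waisberg was making.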
At the heart of progress in AI is the concept of co-creation: regulators, law firms and tech developers working together to build an intelligent framework for the future of legal services.
The author would like to thank mySociety, the Big Innovation Centre, the Law Society, the Canadian Corporate Counsel Association and Jason Moyse of Law Made.