Automated decision-making systems in the justice system must be governed by a framework of standards to protect fundamental freedoms, the president of the Law Society has said, giving an early indication of the findings of the Society's ongoing investigation into the topic.
Christina Blacklaws was speaking this week at a panel on artificial intelligence at the International Forum on Online Courts, organised by the Society for Computers and Law and HM Courts and Tribunals Service. She revealed that the initial findings of the Society's Public Policy Technology and Law Commission include the need to supervise automated decision-making.
AI systems used by UK police forces to assess the risk of reoffending, and especially facial recognition systems, have not lived up to expectations in terms of fairness or accuracy, she said. Meanwhile, the new data protection regime requires that automated decisions affecting individuals have a human in the loop to oversee both the decisions and the processes behind them. This, together with the potential impact of algorithmic tools on a justice system built on case law and precedent, highlights the need for a multidisciplinary approach bringing together civil society and government institutions to protect the public interest, she said.
'There is no shared approach to building fairness and ethics into AI systems,' Blacklaws said. 'When it comes to regulating AI, we need a principle-led approach. We need flexibility, but we also need a framework of standards, building in ethics by design and protecting fundamental freedoms.'
Dr Sandra Wachter, a lawyer and research fellow at the Oxford Internet Institute, presented her work on embedding principles of fairness in automated decision-making, making black-box systems more explainable, improving transparency and managing competing interests.
Dr Wachter said that AI's ability to identify patterns, and its consistency, do not mean that it necessarily makes 'correct' decisions. While we expect AI systems to be fair, explainable and transparent, they use inferential analytics that may reflect biases in the data they handle.