Artificial intelligence must not be deployed in justice systems in a way that would contribute to discrimination against individuals or groups, according to a set of ethical principles adopted by a body of the Council of Europe, Europe's human rights watchdog.
The European Ethical Charter on the use of artificial intelligence in judicial systems, adopted by the European Commission for the Efficiency of Justice, is the latest response to growing concerns that the automation of processes through machine-learning technology could damage fairness and accountability. It sets out five core principles intended to guide 'policy makers, legislators and justice professionals when they grapple with the rapid development of AI in national judicial processes'.
Heading the list is a 'principle of respect of fundamental rights'. This is followed by principles of non-discrimination, quality and transparency, the last of which would ensure that data processing methods and algorithms are accessible and understandable. A final principle requires systems to be 'under user control'.
The Parliamentary Assembly of the Council of Europe, the 47-nation body which oversees the European Convention on Human Rights, is currently meeting in Strasbourg.
A Law Society public policy commission investigating the use of algorithms in the justice system will hold an evidence session in Cardiff on 7 February.