The risk that automated decision-making by police forces will replicate racial prejudice and other human biases is among the topics to be probed by a government inquiry into algorithms announced today. The study, by the Centre for Data Ethics and Innovation, a government body within the Department for Digital, Culture, Media and Sport, is likely to echo some of the work done by the Law Society's commission on algorithms and the justice system, which has been taking evidence over the past nine months.

Concerns about the quality and accountability of decisions made by computer algorithms have been mounting steadily over the past two years. Human rights pressure group Liberty last month called for a ban on 'predictive policing' systems, arguing that by learning from past data they can entrench human bias.

The inquiry announced today will consider the potential for bias in algorithms deployed in financial services, recruitment and local government, as well as in crime and justice. The aim is to ensure that those using such technology can understand the potential for bias and have measures in place to address it, the announcement said. It is among the first projects undertaken by the Centre for Data Ethics and Innovation, whose creation was announced in last year's budget.

Roger Taylor, the centre's chair, said: 'We want to work with organisations so they can maximise the benefits of data driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.'

The Cabinet Office's Race Disparity Unit, set up in 2016 to monitor disparities in outcomes across different ethnic groups, will also contribute to the inquiry, which is due to make an interim report later this year and a final report next year.

The Law Society’s technology and the law policy commission on algorithms in the justice system is due to report on 4 June.