A survey of procurement data carried out for this report shows automated decision-making (ADM) in use or being introduced in a wide range of applications across the UK public sector, from military command and control to the supervision of schoolchildren while they use the Internet.
Civil society and academia are playing an important scrutiny role in certain areas – this chapter looks at examples from social care and policing – although with no central register of the use of ADM, there is likely much more going on without oversight.
A parliamentary inquiry dedicated to the topic of ADM took place in 2018. It identified the key issues but was thin on specific policy recommendations.
Existing legislation has limited bearing on ADM, although reviews in some areas, such as law that would affect self-driving cars, are underway.
Several new government institutions are being established which include ADM in their terms of reference. They are aimed variously at support for Artificial Intelligence (AI) as a source of economic growth and the development of rules and oversight mechanisms. The government professes its ambition to be a world leader in the research and development of AI and its regulation.
Political debate on aspects of automation – Government and Parliament
AI Sector Deal
Office for Artificial Intelligence
In April 2018, the government announced that it would establish an Office for Artificial Intelligence as part of the “AI Sector Deal”. The latter is a package of policies including financial support and tax breaks for research and development, aimed at boosting the Artificial Intelligence industry in the UK. The Office will be responsible for overseeing the implementation of the UK’s AI strategy.
Demis Hassabis, co-founder of DeepMind, an Artificial Intelligence company now owned by Google, was appointed as an advisor to the Office for AI in June 2018. The announcement of Hassabis’ appointment attracted some critical press coverage, which raised the issue of whether it is appropriate for an employee of a private company (Google) to be advising the government. [UK 1]
Also announced as part of the AI Sector Deal, the AI Council will be a counterpart to the Office for AI, composed of “leading figures from industry and academia”. [UK 4] In June 2018, entrepreneur Tabitha Goldstaub was announced as its chair and spokesperson. [UK 5] The council is tasked to promote the growth of the AI sector.
Centre for Data Ethics & Innovation (CDEI)
The government is in the process of establishing this new advisory body. It is expected to lead the work of dealing with ethical issues raised by new data technology, agreeing best practice and identifying potential new regulations. The government says it wants to lead the debate on these issues, not just in the UK, but around the world.
In its public consultation document on the role and objectives for the centre, ADM is one of three areas identified by the government as an example of an ethical issue raised by the use of data and AI. [UK 6] While recognising the potential benefits to society of ADM, it mentions discrimination against job applicants and inequities within the criminal justice system as examples of issues that may arise as a result of ADM.
One of the six proposed themes for the centre’s work is transparency, which is described with reference to the ability to interpret or explain automated decisions.
The creation of the centre was announced in autumn 2017. In June 2018, the government named Roger Taylor, co-founder of the healthcare data company Dr. Foster, as its chair [UK 7], and launched a consultation on the Centre’s remit and its priority areas of work. The consultation closed in September 2018, and a response from the government is now pending.
House of Commons
Algorithms in Decision-Making Inquiry
The Algorithms in Decision-Making Inquiry [UK 8] was launched in September 2017 by the Science and Technology Committee. As a Commons select committee inquiry, it consisted of a series of investigative public hearings carried out by a cross-party group of MPs. Its terms of reference included:
- The extent of current and future use of ADM across both the public and private sectors
- The scope for bias and discrimination in ADM, and how this might be overcome
- Whether ADM can be made transparent and accountable
- How ADM might be regulated
The inquiry published a report of its findings in May 2018. It found that the trends for big data and machine learning had led to an increase in the use of ADM across many areas, arguing that algorithms tend to increase in effectiveness and value as more data is used and combined.
The report identified many problem areas, but was rarely specific in advocating solutions. Instead, it mostly called on existing or forthcoming regulatory bodies to carry out further research.
The report recommended:
- Algorithms that affect the public should generally be transparent.
- New tools for algorithm accountability should be considered, perhaps including codes of practice, audits, ethics boards, or certification of algorithm developers.
- Britain’s privacy regulator should be adequately funded.
- The publication of Data Protection Impact Assessments should be encouraged.
- A procurement model for algorithms should be developed.
It suggested that the following areas should be reviewed or evaluated further:
- The scope for people to challenge the results of ADM
- Whether new data protection laws are needed
- Oversight of ADM by regulators in specific sectors
One concrete recommendation was that the government should publish and keep updated a list of where algorithms “with significant impacts” are being used in Central Government, along with projects aimed at introducing public service algorithms.
The report was not covered in the British press1, suggesting there is currently little political momentum behind ADM as a national issue.
All-Party Parliamentary Group on Artificial Intelligence (APPG AI)
This informal group of MPs and peers was established in January 2017. It has attracted sponsorship from a consortium of firms that serve as the group’s secretariat through a private company called the Big Innovation Centre. The firms are Accenture, Barclays, BP, CMS Cameron McKenna Nabarro Olswang, Deloitte, EDF Energy, Ernst and Young, KPMG, Microsoft and PricewaterhouseCoopers. [UK 9]
The APPG published the first annual summary of its findings in December 2017. [UK 10] One of seven ‘focus areas’ was accountability. Here the report suggested organisations should be made accountable for decisions made by the algorithms they use; that the Centre for Data Ethics and Innovation (CDEI) should establish AI auditing mechanisms; that ethics boards inside organisations should be incentivised; and that industry-led international collaborations were needed, such as a forum on AI global governance, which should lead the global debate.
House of Lords
Select Committee on Artificial Intelligence
The Select Committee on Artificial Intelligence was formed in June 2017. It reported in April 2018 and the government published its response in June 2018.
The report was titled AI in the UK: ready, willing and able? [UK 11]
It was relatively lukewarm towards the idea of algorithmic transparency, arguing that achieving full technical transparency is difficult and often not helpful. It accepted that there would be “particular safety-critical scenarios where technical transparency is imperative”, such as healthcare, autonomous vehicles and weapons systems. It said regulators in the relevant sectors must have the power to enforce this.
However, the report drew a distinction between transparency and the explicability of algorithmic decisions. Here the recommendations were more wary of new ADM technology, with the committee stating its belief that “it is not acceptable to deploy any AI system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”. In cases such as deep neural networks, this may mean that the system should not be deployed at all, the report suggests.
The report also expressed concern about the qualifications to the ADM safeguards in what was then the Data Protection Bill (now the Data Protection Act 2018), which mean the rules only apply to decisions ‘solely’ made by machines.
National Data Strategy
In June 2018, the government announced that it would produce a National Data Strategy “to unlock the power of data in the UK economy and government, while building public confidence in its use”. [UK 12] No further details have been announced at the time of writing, although other government initiatives described in this chapter, such as the Centre for Data Ethics & Innovation, are laying some of the groundwork for the strategy.
Political debate on aspects of automation – Civil Society and Academia
Alan Turing Institute
The Alan Turing Institute was established as the UK’s national interdisciplinary research institute for data science in 2015. Thirteen British universities are members. It has started a research project called “Developing an ethical framework for explaining algorithmic decision-making”, in conjunction with the Information Commissioner's Office (ICO). [UK 13]
Big Brother Watch
Big Brother Watch is an independent research and campaigning group, founded in 2009. Its mission is to expose and challenge threats to privacy, freedoms and civil liberties amid technological change in the UK. One of its main campaigns is FaceOff, concerning the use of automated facial recognition in policing. It has also responded to government consultations concerning ADM and AI. [UK 14]
British Computer Society Specialist Group on Artificial Intelligence
The British Computer Society (BCS) is a professional organisation for information technology practitioners and academics, with official status as a chartered institute. It is committed to “making IT good for society”. Within the BCS, the Specialist Group on Artificial Intelligence was founded in 1980. It organises an international conference on AI. [UK 15]
Data Justice Lab
The Data Justice Lab is a research lab at Cardiff University’s School of Journalism, Media and Culture. It seeks to examine the relationship between what it calls ‘datafication’—the collection and processing of massive amounts of data for decision-making and governance across more and more areas of social life—and social justice. Its major research project DATAJUSTICE is looking into this question at a European level. It also investigates citizen scoring, the regulation of data-driven online platforms, and big data, inter alia. [UK 16]
Privacy International

Privacy International is a charity dedicated to challenging overreaching state and corporate surveillance in the interests of security and freedom. It scrutinises UK government policy [UK 17] as part of its global remit. Artificial Intelligence is one of the charity’s topics of interest, and it identifies ADM as a problem area. The organisation lobbies for strong data and privacy regulations. [UK 18]
The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB)
The AISB was founded in 1964 and counts academics and professionals among its members. It organises an annual convention and publishes a quarterly newsletter. [UK 19]
Regulatory and Self-Regulatory Measures
Data Ethics Framework
The Data Ethics Framework [UK 20] is guidance from the Department for Digital, Culture, Media & Sport (DCMS), a central government department. It was published in June 2018 and sets out “clear principles for how data should be used in the public sector”. It replaces the earlier Data Science Ethical Framework, which was published in 2016.
The guidance specifically addresses the question of ADM. As guidance, it does not carry the legal weight of statute. [UK 21]
At the centre of the framework are seven data ethics principles. They cover public benefit, legislation and codes of practice, proportionality, understanding data limitations, robust practices and skill sets, transparency/accountability, and responsible use of insights.
Principle six is of particular relevance to ADM. Under the heading ‘Make your work transparent and be accountable’, it encourages civil servants to publish their data and algorithms.
The framework contains more detailed guidance for each principle and a workbook that civil servants can use to record ethical decisions made about a particular data project.
When it comes to algorithms, the workbook suggests publishing their methodology, metadata about the model and/or the model itself, e.g. on Github, an open software repository.
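The kind of model metadata the workbook points to can be as simple as a structured record published alongside the code. The sketch below is purely illustrative — the field names, values and filename are invented, not taken from the framework itself:

```python
import json

# Hypothetical metadata record of the kind the Data Ethics Framework
# workbook suggests publishing alongside a model (all fields invented).
metadata = {
    "model_name": "personal-budget-estimator",
    "version": "1.0.0",
    "methodology": "linear regression over assessed-needs scores",
    "training_data": "anonymised assessments, 2015-2017",
    "known_limitations": [
        "may under-allocate for rare combinations of needs",
        "trained on historical allocations, which may embed past bias",
    ],
}

# Write the record so it can be published, e.g. in a public repository.
with open("model_card.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Publishing such a record does not make the model itself transparent, but it gives outside scrutineers a fixed, citable description of what the algorithm is supposed to do.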
When procuring a system involving an algorithm from a private sector supplier, a series of questions is set out for civil servants to ask. These cover a range of issues that can lead to bias or lack of explicability and transparency.
Data Protection Act 2018
The Data Protection Act 2018 (DPA) became law in May 2018. It supplements the GDPR and in places goes further than the EU legislation in relation to automated decision-making.
The Act states that decisions that “significantly affect” an individual may not be “based solely on automated processing unless that decision is required or authorised by law”.
When decisions are made solely through automated processing, the act stipulates that data controllers must notify data subjects in writing. [UK 22] It also provides for a right of appeal against such decisions, under which a data subject can request that the decision be reconsidered or made anew without solely automated processing.
Privacy International has argued that the act contains “insufficient safeguards” in relation to ADM. [UK 23]
Review of laws on automated vehicles
In March 2018, the government announced [UK 24] a review of driving laws “to examine any legal obstacles to the widespread introduction of self-driving vehicles”. The review will be conducted by the Law Commission of England and Wales [UK 25] and the Scottish Law Commission [UK 26] and is set to take three years.
The Law Commissions are state-funded bodies charged with ensuring that the law in general is “as fair, modern, simple and as cost-effective as possible”. They can make recommendations that are then considered by parliament.
The commissions said they aim to publish consultation papers by the end of 2018, which will seek to identify key issues to be investigated. The first year of the project will also include an audit of the current law.
The review may lead to increased public debate around self-driving cars and the automated decisions they make. In particular, the Law Commissions say they will highlight decisions to be made around ethical issues. Assuming the commissions identify the need for new legislation, this is likely to push ethical questions around self-driving cars into the political arena.
Information Commissioner’s Office
The Information Commissioner’s Office (ICO) is the UK’s data protection regulator, funded by the government and directly answerable to parliament. It oversees and enforces the proper use of personal data by the private and public sectors.
The ICO website provides guidance for organisations on all aspects of data protection, including requirements deriving from the GDPR and Data Protection Act. Its pages on ADM explain the requirements of GDPR Article 22 and encourage businesses to go beyond this, by telling their customers about the use of ADM to make decisions that affect them. [UK 27]
Under the Data Protection Act 2018, the ICO was given increased powers to issue fines to organisations that break the law. At the time of writing, the Information Commissioner’s Office was still working on updating its advice for data controllers to reflect the content of the DPA 2018 including new provisions relating to ADM. [UK 28]
ADM in Action
Facial recognition used by police forces
The civil society organisation Big Brother Watch has researched the introduction of automatic facial recognition systems by police forces in the UK. [UK 29]
These systems take images from CCTV cameras and scan the faces of passers-by to see if they appear on databases of individuals of interest to the police. In a typical application, when the system detects a match, police officers may apprehend the person for questioning, search or arrest.
Unless the match can be readily discounted, human police officers are likely to follow the decision and act on suspicions raised by the system.
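The matching step can be understood as a nearest-neighbour comparison against a watchlist: each face is reduced to a numeric feature vector, and an alert is raised whenever a passer-by’s vector falls within a distance threshold of someone on the list. The sketch below is a deliberate simplification, with invented embeddings and threshold; it is meant only to show why false positives of the kind Big Brother Watch documented are an inherent property of such systems:

```python
import math

def euclidean(a, b):
    # Distance between two face "embeddings" (illustrative feature vectors).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical watchlist of embeddings for persons of interest.
watchlist = {
    "person_A": [0.1, 0.9, 0.3],
    "person_B": [0.8, 0.2, 0.5],
}

MATCH_THRESHOLD = 0.35  # invented: lower = stricter, fewer false positives

def screen(face_embedding):
    """Return the nearest watchlist identity if it lies within the
    threshold; otherwise None (no alert raised)."""
    name, dist = min(
        ((n, euclidean(face_embedding, e)) for n, e in watchlist.items()),
        key=lambda t: t[1],
    )
    return name if dist <= MATCH_THRESHOLD else None

# A passer-by whose features happen to lie close to person_A's embedding
# triggers an alert even though they are a different person:
print(screen([0.15, 0.85, 0.35]))  # "person_A"
```

Every choice of threshold trades false negatives against false positives, which is why the human review step after an alert matters so much in practice.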
After raising concerns over its discriminatory potential, Big Brother Watch was granted limited access to observe the operation of the system at the Notting Hill Carnival 2017. According to the NGO’s report, the police said that in the course of a day around 35 people had been falsely matched by the system. In these cases, officers took no action because in reviewing the images it was obvious that the wrong person had been identified. However, “around five” people—who later turned out to have been identified by the system in error—were apprehended and asked to prove their identity.
Big Brother Watch reported that there was only one true positive match over the entire weekend—but even then there was a catch:
“The person stopped was no longer wanted for arrest, as the police’s data compiled for the event was outdated.”
In this case a civil society organisation has been able to draw attention to the flawed use of ADM. But the figures that support their case are not routinely published. It was only through concerted lobbying and tenacity that they were able to obtain them. This shows the important role that civil society organisations can play in ADM accountability. However, in the absence of a central public register of ADM, no one can say how many other systems are being implemented without oversight.
Personalised budgets for social care
In the UK, people who need social care—practical support because of illness, disability or age—can approach their local town hall for help.
Town halls in England have started using automated decision-making systems to help determine how much money should be spent on each person, depending on their individual needs. The resulting figure is known as a personal budget.
The impact of these decisions on people’s lives is enormous. There has been no discussion in the media about the specific role of ADM in personal budgets. But this BBC report from 2010 [UK 30] illustrates what is at stake:
Graeme Ellis, who is registered blind and is a wheelchair user, has been on a personal budget for more than a year and was originally assessed as needing £21,000.[...]
But then after being reassessed, he got an email from his social worker telling him that his council would have to cut their contribution by £10,000.
He told the BBC’s You and Yours programme he was frightened he was going to end back in the position he was in four years ago.
“I’m frightened about the effect that being housebound will have on my well-being because being able to get out of the house and do things is one of the things that enables me to carry on.”
Since 2007, governments of both main political parties in the UK have encouraged town halls and the NHS to start using personal budgets. [UK 31] By 2014-15, around 500,000 people receiving social care through a town hall were subject to a personal budget. [UK 32]
It is not known exactly how many people have had their personal budgets decided with the help of ADM. However, one private company, Imosphere, provides systems to help decide personal budgets for many town halls and National Health Service (NHS) regions. [UK 33] Its website says that around forty town halls (local authorities) and fifteen NHS areas (Clinical Commissioning Groups, which also have the power to allocate personal budgets) across England currently use the system, and that it has allocated personal budgets worth a total of over £5.5 bn. [UK 34]
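Systems of this kind are typically resource-allocation formulas: answers from a needs assessment are scored, summed, and converted into a monetary amount. The sketch below is purely illustrative — the questions, weights and per-point rate are all invented, and real systems are far more complex:

```python
# Purely illustrative resource-allocation sketch: assessment answers are
# weighted, summed into points, and converted into an annual budget.
# All questions, weights and the conversion rate are invented.

WEIGHTS = {
    "mobility": 3,          # points per severity level (0-3)
    "personal_care": 4,
    "social_isolation": 2,
}
POUNDS_PER_POINT = 800      # invented conversion rate

def personal_budget(assessment):
    points = sum(WEIGHTS[q] * level for q, level in assessment.items())
    return points * POUNDS_PER_POINT

budget = personal_budget(
    {"mobility": 2, "personal_care": 3, "social_isolation": 1}
)
print(budget)  # 2*3 + 3*4 + 1*2 = 20 points -> 16000
```

Note that a quiet change to `WEIGHTS` or `POUNDS_PER_POINT` alters every allocation at once, invisibly to service users — which is precisely the transparency concern raised by the research discussed below.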
What is the impact of the ADM?
Research in the Journal of Social Welfare & Family Law [UK34] found that automated personal budget decisions did not always correspond to people’s needs; that they could be used as a mechanism for implementing spending cuts; and that the algorithmic nature of the system led to a lack of transparency.
The paper illustrates not only how flawed ADM decisions can adversely affect people’s lives, but also how ADM systems might be scrutinised and the obstacles such scrutiny is likely to face in other domains of ADM accountability research.
In addition, it shows the importance of the social and political climate in which ADM systems are used. In this instance, it can be argued that an ADM system has served as a Trojan horse for spending cuts and the outsourcing of decision-making to the private sector. This might not have been so if the decision-making rules behind ADM were made transparent to the many charities and campaigning organisations that advocate for the rights of social care service users.
Procurement in the UK public sector
For this report chapter, an analysis of procurement notices published in the Official Journal of the European Union (OJEU) was conducted.2 This provides a snapshot of some of the areas of the UK public sector where ADM is in use, planned to be introduced or under research and development.
It is not intended to be a comprehensive or a representative sample, but it does give an indication of the wide scope of ADM in the UK in terms of the sectors and the types of problems to which it is applied. The notices may be solicitations to tender or relate to systems already contracted.
The table shows that automation is being applied to high-stakes decisions in a diverse range of areas including industrial control systems, healthcare and the safeguarding of children.
Table: Selected OJEU procurement notices for ADM systems from UK public bodies, 2018
| Sector | Application | Public body |
|---|---|---|
| Finance | Sanctions list screening [UK 35] | Financial Services Compensation Scheme |
| Health | High Risk Human Papillomavirus testing [UK 36] | |
| Health | Physiotherapy triage [UK 37] | NHS – West Suffolk CCG |
| Health | Ventilator control [UK 38] | NHS Scotland |
| Health | Symptom checker (Babylon) [UK 39] | NHS – Hammersmith & Fulham CCG |
| | Biometric matching [UK 40] | Home Office |
| | Detection of migrants in lorries | |
| Military | Command and control [UK 42] | Ministry of Defence |
| Schools | Alert unsafe computer activity of children [UK 43] | Education Scotland |
| Transport | Rail traffic management [UK 44] | Network Rail |
| Transport | Transport management [UK 45] | Cambridgeshire County Council |
| Utilities | Control of gas network [UK 46] | Northern Gas Networks |
| Utilities | Control of water network [UK 47] | South East Water |
| Utilities | Control of water network [UK 48] | Bristol Water |
Source: OJEU, AlgorithmWatch research
2 The OJEU search used full-text queries including FT=[“artificial intelligence”] OR FT=[“machine learning”].