People analytics in the workplace – how to effectively enforce labor rights

Increasingly, complex computational systems are being used to monitor, score, manage, promote and even fire employees. These systems are often referred to as people analytics. They have the potential to profoundly influence, alter, and redirect the lives of people at work, and therefore impact their life chances in general. In order for people to not suffer from the use of these systems but ideally benefit from it, there needs to be a comprehensive governance framework. Here’s how we will get there.

“How Amazon automatically tracks and fires warehouse workers for ‘productivity’”, was the headline of a 2019 story published by the US magazine The Verge. The magazine had obtained documents from a court filing in a labor dispute between a fired Amazon worker and the company. In the filing, the attorney representing Amazon said that “the company fired ‘hundreds’ of employees at a single facility between August of 2017 and September 2018 for failing to meet productivity quotas.” While this was bad enough, it was not uncommon news, given that the e-commerce giant has regularly been accused of maltreating workers.

What caught the attention of experts in automated decision-making (ADM) systems (whether they use so-called AI or not) was the audacity with which the company tried to deflect the fired worker’s claims. The worker said they were terminated for engaging in legally protected activity. Amazon responded to the National Labor Relations Board that the employee had been fired for failing to reach productivity benchmarks. And the “criteria for receiving a production-related notice”, the company argued, “is entirely objective – the production system generates all production-related warning and termination notices automatically with no input from supervisors.”

Even the most fervent proponents of the usefulness of data-driven analysis would have to admit that this is a ridiculous line of argumentation, as the benchmarks are of course defined by humans – in this case Amazon’s managers – and therefore not only not objective, but entirely political, in that they are used to maximize the company’s profits. (See “The most important things about machine learning” on page 6 of our Guideline for reviewing essential features of AI-based systems for works councils and other staff representatives.)
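
To see why “the system decides automatically” does not make the criteria objective, consider a deliberately simplified, hypothetical sketch of such a notice pipeline; every parameter, name, and number in it is invented for illustration:

```python
# Hypothetical sketch: an "entirely objective" warning/termination pipeline.
# The code runs without human input, but every decision it makes is fixed in
# advance by human-chosen parameters such as the quota itself.

QUOTA_UNITS_PER_HOUR = 300       # set by management, not derived from the data
WARNINGS_BEFORE_TERMINATION = 3  # likewise a managerial policy choice

def review_worker(units_per_hour: float, prior_warnings: int) -> str:
    """Return the notice the system would generate 'automatically'."""
    if units_per_hour >= QUOTA_UNITS_PER_HOUR:
        return "no action"
    if prior_warnings + 1 >= WARNINGS_BEFORE_TERMINATION:
        return "termination notice"
    return "production-related warning"

# The same worker, judged under two different human-set quotas:
print(review_worker(units_per_hour=280, prior_warnings=2))  # -> "termination notice"
QUOTA_UNITS_PER_HOUR = 250
print(review_worker(units_per_hour=280, prior_warnings=2))  # -> "no action"
```

The code runs without any supervisor’s input, yet who receives a termination notice is determined entirely by the quota a manager typed in. The objectivity lies in the mechanical application of the rule; the rule itself remains a human, political choice.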

GAME: HR Puzzle

Be a human resources manager and find out for yourself whether you can make sense of a Machine-Learning-based system’s recommendations and “decisions” – play our interactive HR Puzzle.

This crass example of surveillance and control in the workplace would most likely be illegal in the European Union. But it illustrates some companies’ ambition to automate parts of human resources management. These ADM tools often use Machine Learning methods, also called Artificial Intelligence, and their impact on HR management practices matters: they have the potential to profoundly influence, alter, and redirect the lives of people at work, and to significantly and permanently change the power structures between employers and employees. At the same time, such systems can be a chance to improve both employer and employee satisfaction, and may even be used to enhance human decision-making, which has always been influenced by subconscious or even conscious prejudices.

Operators of automated systems often invoke claims of objectivity, technical neutrality, or scientific rigor to deflect criticism. While it is true that the math involved can be complex, the key aspect of an automated system is not how it works, but what it was programmed to do. At Amazon and elsewhere, this is a political question, and stakeholders, starting with employees, are far more expert in it than computer scientists.

A two-tiered approach: Recommendations for responsible use and an assessment tool to better understand ADM systems

This is why we developed a two-tiered approach to dealing with the fact that these systems exist and are being adopted and refined. Based on an ethical analysis by University of Zurich’s Michele Loi, we devised recommendations that should be followed by companies that develop people analytics systems, companies that use them, employee representatives (like works councils), and lawmakers in order to enable a responsible use of these tools.

People Analytics must benefit the people

We also developed a practical assessment tool that consists of a series of questions about automated HR systems. Works councils or other employee representatives can use it to ask management about the properties of automated decision-making systems used in HR. Ideally, these questions should be asked before new systems are implemented, but they can also be asked about systems already in place. The guide explains what the answers should include, how to classify them, and how to make sense of them.

Reviewing essential features of AI-based systems for works councils and other staff representatives

Questions that works councils should ask management before implementing new systems, or about systems already in place.

Recommendations for the responsible use of ADM systems in the workplace

In short, the recommendations are (a detailed version is part of the study):

Rules for data collection for human resource analytics should go beyond GDPR.

The key ethical questions to be answered before implementing data collection are:

Key implementation steps

Practical recommendations to implement ethical process improvements are:

  1. Build an internal or independent ethics board to carefully assess the purposes of data collection from an ethical point of view. An internal ethics board should include all the key competences in the organization (including, but not limited to, HR, data protection, and compliance). An independent ethics board should include stakeholders able to provide different perspectives.
  2. Engage employees and ask for their opinion. Employees’ views on the purposes for which their data are analyzed should not be ignored. An employee discussion panel, which can also involve trade union representatives as facilitators, can help identify the ways of using data in HR that are regarded as most problematic and those that are regarded as most desirable.

The development of data-driven HR tools needs adequate technical competence to generate knowledge about the algorithm

The key ethical questions that should be answered before this stage of data processing are:

Key implementation steps

  1. Ensure you have adequate competences to build and implement ADM ethically.
  2. Involve civil society organizations and expertise from academia.
  3. Engage in AI and ethics education.
    a. Educate potential end-users (e.g. HR professionals) to ensure that they have the know-how and skills necessary to operate ADM systems in HR correctly.
    b. Educate employees potentially affected by ADM systems, or their representatives (e.g. works councils), to ensure that they have the know-how and skills necessary to correctly understand how ADM tools are used in HR.

The impact of using the tool on employees should be carefully monitored

Key implementation questions

Key implementation steps

  1. Develop an (adequately privacy-protected) mechanism to record high-stakes decisions about employees that are made with the help of algorithmic recommendations or predictions.
  2. Develop an (adequately privacy-protected) mechanism to determine whether particular populations (non-native speakers, pregnant women, minority members, etc.) are negatively or positively affected by algorithmic decisions (see the sketch after this list).
  3. Develop a process by which employees can correct errors in input data or outputs.
  4. Develop a process by which employees can challenge decisions that are fully automated (if any).
  5. If a subgroup of your employees (non-native speakers, parents with kids, members of religious groups, etc.) appears to be systematically disadvantaged by the introduction of algorithmic decision-making, set up a procedure of redress or, even better, improve the outcome for them in the long term.
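
As a minimal sketch of what step 2 could look like in practice, the snippet below compares rates of favorable outcomes across subgroups using the “four-fifths” heuristic known from US employment-discrimination practice. The data layout, subgroup labels, and threshold are our assumptions for illustration, not a prescribed method:

```python
from collections import defaultdict

# Each record: (subgroup label, favorable outcome?) for one algorithm-assisted
# decision. In a real system these would come from the (privacy-protected)
# decision log described in step 1.
decisions = [
    ("native speaker", True), ("native speaker", True),
    ("native speaker", False), ("non-native speaker", True),
    ("non-native speaker", False), ("non-native speaker", False),
]

def favorable_rates(records):
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [favorable, total]
    for group, favorable in records:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    return {group: fav / total for group, (fav, total) in counts.items()}

rates = favorable_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    # Four-fifths heuristic: flag any subgroup whose favorable-outcome rate
    # falls below 80% of the best-off subgroup's rate.
    if rate < 0.8 * best:
        print(f"review needed: {group} at {rate:.0%} vs. best group at {best:.0%}")
```

Whatever the concrete method, the point of steps 1 and 2 is that such checks are only possible if algorithm-assisted decisions are systematically recorded in the first place.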

HR and management should guarantee adequate transparency about the data-driven tools used in HR

Because transparency is a complex matter, an adequate transparency strategy needs to be planned, designed, and achieved with different solutions tailored to different contexts. Different kinds of transparency should be addressed to the right kinds of stakeholders (e.g. workers’ representatives and HR managers in the case of HR), by engaging management in designing a transparency strategy side-by-side with the introduction of ADM systems. Different forms of transparency may also have to be used simultaneously. For example, the documentation of the machine learning process may be sufficient for some purposes and some stakeholders, but useless for others. Detailed knowledge of the features and how they affect the outcome may be fair, useful, and feasible in some contexts (e.g. where the algorithms are not “black boxes” and knowledge of them does not allow for gaming the system in order to produce desired outcomes), whereas a higher-level explanation of the logic of the decisions, rather than full knowledge about the features, may be more appropriate in other cases (e.g. where detailed knowledge would be used to game the system).
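
To make the idea of stakeholder-specific transparency more concrete, here is a purely illustrative sketch in code; the stakeholder groups and artifacts are invented for the example, not a recommended catalogue:

```python
# Illustrative only: one ADM system, several transparency artifacts, each
# addressed to the stakeholders for whom it is useful and safe to share.
transparency_plan = {
    "works council": [
        "purpose statement and list of input features",
        "higher-level explanation of the decision logic",
        "aggregate impact statistics per employee subgroup",
    ],
    "HR managers": [
        "per-recommendation explanation of the main contributing factors",
        "documented limits: what the score does and does not mean",
    ],
    "auditors / data protection officer": [
        "full documentation of the machine learning process",
        "training-data provenance and evaluation results",
    ],
}

for audience, artifacts in transparency_plan.items():
    print(audience, "->", "; ".join(artifacts))
```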

Guidelines for reviewing essential features of ADM systems

How can human resources managers and employee representatives assess whether transparency is adequate? How can they know whether the impact of using the tool on employees is adequately monitored? Whether the explainability and fairness of ADM tools are acceptable – and so many more questions related to complex systems? The idea that these systems are “black boxes” that cannot be meaningfully understood by non-experts in Machine Learning, advanced statistics, or computer science is a useful myth. Sometimes, it is used to argue that the company building these systems, or the company using them, cannot take responsibility for their outcomes. More often, it is used to argue that the people operating the systems, and the people subjected to them, need not try to understand more about their qualities than what is described in flashy marketing brochures.

Videos: Correlation, causation & proxy variables?

What’s a proxy variable, how do you distinguish correlation from causation, and why is it important to understand error rates? Watch our animated tutorials to find out.
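
The core point about proxy variables can also be shown in a few lines of simulation. The example below is entirely constructed (no real HR data): productivity is driven by experience, while commute distance plays no causal role but, in this fictional workforce, happens to track experience, and therefore ends up correlating with productivity:

```python
import random

random.seed(0)

# Constructed example: experience drives productivity; commute distance is
# causally irrelevant but, here, experienced workers happen to live closer.
# Distance thereby becomes a proxy for experience.
workers = []
for _ in range(1000):
    experience = random.uniform(0, 10)                     # true cause
    distance = 30 - 2 * experience + random.gauss(0, 3)    # proxy, no causal role
    productivity = 50 + 5 * experience + random.gauss(0, 5)
    workers.append((experience, distance, productivity))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

exp_, dist, prod = zip(*workers)
print(f"experience vs. productivity: {corr(exp_, prod):+.2f}")  # strong, causal
print(f"distance   vs. productivity: {corr(dist, prod):+.2f}")  # strong, but a proxy
```

A system trained on such data would score workers by commute distance with apparent statistical success; the correlation is real, but penalizing long commutes would not make anyone more productive.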

But how such a process works is only to a small, often insignificant extent a matter for experts. At its core, it is always a procedure that arrives at statements on the basis of data. Why and in what sense these statements are true or useful depends on the justification for the procedure. Substantial parts of these justifications are not mathematical and can be discussed by users and those affected just as well as by developers. In addition, many procedures do not simply make true or false statements, but rather provide hints and recommendations. To use such hints responsibly, one must know how they came about, and such an understanding is possible.

And ultimately, the reasoning behind a statement must be good enough to convince those who bear the consequences and responsibility for it. Therefore, in the context of labor rights, it must be possible to discuss how AI processes are used in HR functions. Our Guideline for Reviewing Essential Features of AI-based Systems for Works Councils and other Staff Representatives contains questions that make this possible by helping to drill down to the justificatory, discussion-worthy properties and contexts of a software system. These questions can be used to better understand a system's properties and qualities, what purposes it is used for, and consequently help decide whether it complies with the above ethical recommendations and the expectations of the employees.

The EU AI regulation as an opportunity for employees to co-determine how ADM systems are used

But what if a system is not in line with the recommendations outlined above? Companies will not always, and perhaps only rarely, be willing to voluntarily forgo the advantages they expect to gain from the use of people analytics systems. Depending on the values of the company or the workplace culture, a mere objection by employees to being subjected to such a system, or their demands for changes, could fall on deaf ears. The EU’s General Data Protection Regulation (GDPR) can be used to some extent to address this legally, but it has major shortcomings regarding automated decision-making processes. In Germany, labor law provides employees and their representatives with tools they can use to demand information about IT systems from companies. These provisions can equally be applied to ADM systems (see our analysis of labor rights and data protection, available in German only). How well these rights can be enforced in practice remains to be tested.

But many countries do not provide the legal basis for such claims in the first place. We therefore welcome the European Commission’s decision to include “AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships” in their list of systems that “should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons” in the recently proposed draft AI regulation. Those high-risk AI systems are only “permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment”, and are subject to legal requirements “in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security.”

We will, with support from the European AI Fund and the Hans Böckler Foundation, evaluate the requirements for high-risk AI systems that are laid out in the draft regulation, e.g. Article 13, “Transparency and provision of information to users”, which demands that “high-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.”
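
What “sufficiently transparent to enable users to interpret the system’s output” could mean at the level of a concrete system is exactly what such an evaluation has to work out. One possible, hedged reading is sketched below, with invented names and fields: a system should never hand its users a bare score. Article 13 does not prescribe any such format:

```python
from dataclasses import dataclass

# Hypothetical sketch of an output format that gives users something to
# interpret, rather than a bare number. All field names and the example use
# case are invented for illustration.
@dataclass
class InterpretableOutput:
    score: float              # the raw model output
    scale: str                # what the number ranges over and means
    main_factors: list[str]   # inputs that contributed most to this output
    known_limits: list[str]   # documented failure modes and caveats
    intended_use: str         # which decisions this output may inform

result = InterpretableOutput(
    score=0.37,
    scale="0 (low attrition risk) to 1 (high attrition risk), uncalibrated",
    main_factors=["time since last promotion", "commute distance"],
    known_limits=["unreliable for employees with < 6 months of data"],
    intended_use="prompt a conversation, not trigger any automatic action",
)
print(result)
```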

Based on this analysis, we will assess whether it is realistic that the AI Regulation, if adopted, will in practice provide employees with enforceable rights to transparency. We will also analyze whether, and if so how, the Regulation could be used to guarantee that employees get a say in how and for what purposes AI-based systems can be deployed in the workplace. Consequently, we will develop actionable policy recommendations and advocate for them vis-à-vis the European institutions, namely the Parliament and the Council, to ensure that these rights are guaranteed.
