Explainer: AI in the workplace
Managed by the algorithm: how AI is changing the way we work
Automated decision-making systems control our work, whether within companies or via platforms that allocate jobs to independent contractors. Companies can use them to increase efficiency, but such systems have a downside: they can also be used to surveil employees, and they often conceal the exploitation of workers and of the environment.
Algorithmic systems have long been part of everyday working life. Many companies use HR management software to process employee data. Such systems screen and discard CVs during application procedures, but they also draw up shift schedules and help decide whether employees have a future in the company.
Another area of application is the “gig economy.” In this part of the labor market, self-employed people and people in marginal employment (“mini-jobs”) receive small orders at short notice via an online platform. The platform mediates between customers and contractors and defines the framework conditions under which the jobs are carried out. The platform operators keep a commission for each placement. The gig economy is a relatively new phenomenon: the term gained currency with the emergence of platforms such as Amazon Mechanical Turk (launched in 2005) and later Uber. In the gig economy, platform workers are managed algorithmically, without ever dealing with a human boss.
These two fields of application show that algorithmic systems are generally used to make processes more efficient and productive. Employees stand to benefit when a system takes over dull routine work. But a system designed to maximize productivity can also increase the intensity of work and even lead to employees being monitored or excessively screened.
Algorithmic management
Automated decision-making systems in HR management can profoundly change people’s lives in the workplace and permanently widen the power gap between employers and employees. One example of their use is employee retention: HR managers want to find out who might quit in the near future. Employers can also use such evaluations to receive suggestions on who should be dismissed and who should be kept in the event of a mass layoff. The algorithmic systems analyze data, find patterns, and make decisions or recommendations based on them. As technology advances, ever more data can be collected. A well-known example: many companies use Microsoft 365, which includes the option of collecting data on employee productivity via the Microsoft Viva application. Employers can evaluate this data if they choose to do so.
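To make this more concrete, here is a minimal sketch (in Python, with entirely invented, synthetic data) of the basic pattern behind such attrition predictions: a model is fitted to historical employee records and then scores current employees. The feature names, numbers, and thresholds are hypothetical illustrations, not a description of Microsoft Viva or any other real product.

```python
# Hypothetical sketch of a "people analytics" attrition model.
# All data and feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic employee records: columns are tenure in years, overtime
# hours per month, and days absent last year.
X = rng.normal(loc=[4.0, 10.0, 5.0], scale=[2.0, 5.0, 3.0], size=(200, 3))

# Synthetic labels: 1 = quit, 0 = stayed. Here, quitting correlates
# with heavy overtime and short tenure -- by construction, not by fact.
p_quit = 1 / (1 + np.exp(-(0.3 * X[:, 1] - 0.5 * X[:, 0] - 1.0)))
y = (rng.random(200) < p_quit).astype(int)

model = LogisticRegression().fit(X, y)

# The system would flag employees with a high predicted quit probability.
new_employee = np.array([[1.5, 25.0, 2.0]])  # short tenure, heavy overtime
print(f"Predicted attrition risk: {model.predict_proba(new_employee)[0, 1]:.0%}")
```

Whether such a score is used to support employees or to sort them out is a management decision, not a property of the model.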
As “people analytics” procedures are about making decisions about employees, clear rules need to be established. Various laws restrict how employers may use employee data; the best known is the General Data Protection Regulation (GDPR). In many cases, employers must obtain their employees’ consent before using their data, and surveillance applications are not permitted even if consent has been obtained.
Current laws stipulate accountability for the use of algorithmic systems in the workplace. Employers must generally ensure that such systems are used transparently: all affected employees must know which of their data is used in which system, for which purpose, and what impact the system has on them. Recommendations and decisions based on analyses from algorithmic systems must be explainable and comprehensible. Employees must be informed accordingly, and the information must be presented in an understandable way and made easily accessible.
Companies are rarely willing to voluntarily forgo the benefits they expect from using people analytics systems, and data protection law barely covers automated decision-making processes. In Germany, the Works Constitution Act provides employees and their representatives with instruments they can use to request information from companies about IT systems. In principle, this provision can also be applied to people analytics systems, even though there are few precedents of employees making use of this option.
Potential for discrimination
The use of algorithmic systems in the workplace, and especially in recruitment, must not have discriminatory effects on employees. The world of work in particular is marked by many historical inequalities, the best known of which is probably the gender pay gap. Such inequalities can be reproduced when historical data is used to train an algorithmic system.
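A deliberately simplified sketch can show how this reproduction works. In the invented scenario below, a salary model is trained on historical data in which women were systematically paid less for the same experience; the trained model then passes the gap on in its recommendations. All coefficients and figures are made up for illustration.

```python
# Hypothetical illustration of bias reproduction: a salary model trained
# on historical data that encodes a gender pay gap. All numbers invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=1)
n = 500

experience = rng.uniform(0, 20, n)      # years of professional experience
is_female = rng.integers(0, 2, n)       # 1 = female, 0 = male

# Historical salaries encode a pay gap: equal experience, lower pay.
salary = 40_000 + 2_000 * experience - 6_000 * is_female + rng.normal(0, 2_000, n)

features = np.column_stack([experience, is_female])
model = LinearRegression().fit(features, salary)

# The trained model reproduces the gap in its recommendations:
for gender_flag, label in [(0, "male"), (1, "female")]:
    rec = model.predict([[10.0, gender_flag]])[0]
    print(f"Recommended salary ({label}, 10 yrs experience): {rec:,.0f}")
```

Note that the model was never instructed to discriminate; it simply learned the pattern present in the historical data.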
Algorithmic systems can also give rise to new types of discrimination. Employers, whether private companies or public authorities, need to address this. One necessary precaution is to systematically carry out impact assessments in order to anticipate a system’s possible effects on those affected and their rights, to assess the risks, and to mitigate them.
Co-determination
Users of algorithmic systems in the workplace (i.e. companies and their HR managers) and employee representatives need to develop skills in order to understand how the systems work. This is unknown territory for many people.
Trade unions and works councils should provide employees with practical advice on how they can protect their own interests when algorithmic systems are introduced. To do this, they need to develop a basic understanding of how such systems work.
The benefits of using algorithmic systems in the workplace should not accrue only to company owners and managers; all employees should share in them equally. One prerequisite for this is to involve employees in decisions on the use of algorithmic systems in HR management, operational processes, and other relevant business areas. In case of doubt, employees should have a right of objection.
“AI will take care of it”? Platform work in the shadow of technology
Platform work in the gig economy generally weakens the position of workers, who are often comprehensively monitored and controlled by algorithmic management. Platform workers should have the same rights and receive the same protection as employees in other dependent employment relationships. They must also be protected from the specific risks that arise from the lack of transparency on platforms. For example, the systems used make it impossible to verify whether the principle of equal pay for equal work is being observed. Platforms also lack basic co-determination options, and the reasons for sanctions are not comprehensible. Those affected can therefore hardly defend themselves against automated decisions.
Exploitation along the supply chain
Workers involved in producing the platforms’ technical infrastructure are exposed to precarious conditions. The supply chain of hardware production requires minerals that are used in batteries and microprocessors. The people who mine them work under terrible conditions, which is why these raw materials are also called “blood minerals.” The workers who process data suffer from poor working conditions, too. The data sets needed to train AI systems usually first have to be labeled, i.e. classified and annotated, by so-called crowd or click workers. These people often perform micro tasks on the computer (by clicking), under precarious conditions and without permanent employment. When developing and purchasing AI, measures should be taken to ensure fair working conditions throughout an AI system’s entire life cycle. This includes adequate remuneration, good working conditions, and opportunities for promotion and further training, including for clickworkers.
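As a hypothetical illustration of what such a labeling micro task produces, the snippet below shows a single annotated data point as it might be recorded. All field names and values are invented; real platforms use their own formats.

```python
# Invented example of a single labeled data point produced by a
# crowd worker's micro task. Field names and values are hypothetical.
import json

annotation = {
    "item_id": "img_000123",
    "task": "Does the image show a pedestrian? (yes/no)",
    "label": "yes",
    "annotator_id": "crowd_worker_4711",
    "seconds_spent": 4.2,  # micro tasks are often paid per item, in cents
}

print(json.dumps(annotation, indent=2))
```

Millions of such records, each produced in seconds, make up the training data behind a single AI system.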
The regulation of platform work is only just beginning. The first laws are in place to help these workers secure their rights. We have summarized what these are and what they are intended to achieve in another explainer.
Read more on our policy & advocacy work on ADM in the workplace.