Explainer: AI & labor law
What works and what doesn’t: AI systems and labor law
In many companies, employees are managed and monitored by automated systems, especially in platform work. The use of such systems in the (virtual) workplace is not yet comprehensively regulated by law. New laws, some still in draft form, are intended to help workers protect their rights.
The Platform Work Directive: for labor rights, against false self-employment
Platform work is fundamentally changing labor relations. If the poor working conditions in the gig economy are not addressed soon, they could gain a foothold in traditional employment relationships as well and undermine hard-won achievements in labor law. The regulation of platform work is still in its infancy, but the first laws are in place to help workers defend their interests.
The European Union adopted the Platform Work Directive in spring 2024, and the EU member states must implement it by 2026. The directive is designed to make the use of algorithms in human resources management more transparent, to ensure that qualified personnel monitor automated systems, and to safeguard employees’ rights to challenge automated decisions.
The directive is also intended to better regulate the use of algorithms by digital work platforms and to serve as a safeguard against false self-employment in the gig economy. Determining their employment status enables people who work for platforms to claim all the labor rights to which they are entitled. The burden of proof is reversed: it is the digital platforms that must prove that no employment relationship exists. People who work for platforms, their representatives, or national authorities can invoke this reversed burden of proof when challenging the misclassification of workers.
Whether the directive will actually be effective against false self-employment remains to be seen. Critics fear that the law has been watered down too much: platform companies tried to influence the negotiations and weaken the requirements for combating false self-employment.
The AI Act: prohibited emotion recognition
The use of AI systems in the workplace is one of the fields that the EU's AI Act covers. How strictly specific AI systems are regulated depends on the risks they pose to health, safety, and fundamental rights. The following emotion recognition systems are prohibited in the workplace under Article 5 because their risks have been classified as unacceptably high:
- AI systems that monitor employees’ performance and emotional state while they are, for example, attending meetings or performing tasks. Such systems analyze facial expressions, tone of voice, and other biometric data to detect emotional states such as stress, satisfaction, or frustration.
- AI-based tools used in the recruitment process to assess the emotional state or the sincerity of applicants by analyzing their facial expressions, voice modulation, or body language during an interview.
- Tools that infer workers’ emotional states to predict stress or potential safety risks, unless the tools are specifically designed for medical or safety purposes.
- Mental health assessment tools that infer emotional states or mental illness from biometric data, unless the tools are used exclusively for medical purposes.
The AI Act also prohibits biometric categorization systems that individually categorize natural persons based on their biometric data in order to determine whether they belong to a trade union.
The prohibitions laid down by the AI Act do not cover all possible AI applications for emotion recognition. Emotion recognition systems are prohibited in the workplace and in education but not in other high-risk areas such as law enforcement or migration.
If an existing application is deemed prohibited, it must be redesigned or withdrawn from the European market within six months of the AI Act's entry into force. Violations can be fined with up to 7 percent of a company's total worldwide annual turnover.
In addition to prohibited practices, the AI Act defines areas in which the use of AI systems is associated with an increased risk to safety, health, and fundamental rights. As the workplace is such a high-risk area, certain AI applications must comply with stricter regulations:
- AI systems that are to be used in the recruitment of natural persons, in particular to place targeted job advertisements, analyze and filter applications, and evaluate applicants.
- AI systems designed to make work-related decisions: decisions on working conditions, promotions, or dismissals; AI systems that allocate work tasks based on individual behavior or personal characteristics; and AI systems that monitor and evaluate people’s performance and behavior in employment relationships.
The AI Act also stipulates that, for the sake of transparency, employers must inform their employees and their representatives if they use high-risk AI systems that have an impact on the work or employment relationship, for example when hiring, promoting, or dismissing employees. When providing such information, employers must follow the rules and procedures prescribed by collective agreements and national legislation, where applicable. However, employers are not obliged to consult employees: trade unions and employee representatives are generally not granted any right of co-determination in decisions on the introduction of AI systems.
Employers must comply with further obligations if they use high-risk AI systems:
- They must monitor their AI systems throughout their entire life cycle.
- They may only use AI systems as specified in the instructions for use and must follow the oversight measures set out therein.
- They must ensure that input data is relevant to the system’s intended purpose and that automatically generated system logs are retained.
- Before deploying the system, they must assess the extent to which its use and operation affect fundamental rights, in particular the fundamental rights of the employees concerned.
The current legal situation is replete with ambiguities and loopholes. Employees, for example, only have to be informed about the introduction of high-risk AI systems. An AI system is considered high-risk if it poses a significant risk to fundamental rights, a wording that leaves a lot of room for interpretation. The definition of "significant risk" in Article 3 of the AI Act is not very specific, and it is difficult to foresee possible risk scenarios before a system is introduced.
The AI Liability Directive: who pays for the damage?
The AI Act does not address all the problems that can arise for employees when AI is implemented. For instance, it does not specify what sanctions apply if AI systems make incorrect or discriminatory decisions. Nor does it clarify who is liable if AI systems cause damage.
In order to answer such questions, the EU is working on an AI Liability Directive to regulate claims for damages arising from the use of AI systems. The European Commission published its proposal in September 2022. In its current version, the draft directive does not apply if liability is already regulated contractually.
According to the draft, employers are to be held liable if AI systems they use or operate produce errors that lead to damage. To prove that they have fully complied with their obligations, employers must disclose all records and documentation that the AI Act requires them to keep.
It will take some time before the AI Liability Directive comes into force, and significant changes to the draft are likely to be made in the meantime.
Read more on our policy & advocacy work on ADM in the workplace.