#work (25 results)

Explainer: AI in the workplace

Managed by the algorithm: how AI is changing the way we work

Automated decision-making systems control our work, whether in companies or via platforms that allocate jobs to independent contractors. Companies can use them to increase their efficiency, but such systems have a downside: They can also be used to surveil employees and often conceal the exploitation of workers and the environment.

Explainer: AI & labor law

What works and what doesn’t: AI systems and labor law

In many companies, employees are controlled by automated systems, especially in platform work. The use of such systems in the (virtual) workplace is not yet comprehensively regulated by law. New (draft) laws are intended to help workers protect their rights.

Join the ride side

Do you ride for Lieferando while its app leaves you in the dark about how decisions on trips, pay, and bonuses are made? Do you want more transparency, better working conditions, and more money? Then take part in our survey!

Yet to be delivered: labor rights in the gig economy

Digitally controlled platform work is fundamentally changing working conditions while current legislation is lagging behind this development. Our joint campaign "Liefern am Limit" is advocating for the rights of Lieferando drivers.

The 5 Best Podcasts on Algorithms and Work

Interested in how algorithmic systems affect us at work? Here are some well-researched podcast episodes to dive into.

Food delivery service Glovo: tracking riders’ private location and other infringements

A recent investigation by Tracking Exposed shows that Glovo’s subsidiary in Italy, Foodinho, registers couriers’ off-shift location and shares it with unauthorized parties. The delivery app provider has also been found to have created a “hidden” credit score for their riders.

Country Analyses

New Study: Data Practices and Surveillance in the World of Work

Workers are increasingly being digitally surveilled, datafied and algorithmically managed in Italy, Poland, Sweden and the United Kingdom, a qualitative analysis by AlgorithmWatch shows.

Help us fight injustice in hiring!

Donate your CV to fight together against automated discrimination in job application procedures!

Press release

New study on AI in the workplace: Workers need control options to ensure co-determination

Employees must be included in the implementation process if so-called Artificial Intelligence (AI) systems are introduced in their workplace. Many companies already use such systems, often based on Machine Learning (ML) algorithms, for automated decision-making (ADM). The European Union’s draft Artificial Intelligence Act (AI Act) is designed to safeguard workers’ rights, but such legislative measures won’t be enough. An AlgorithmWatch study funded by the Hans Böckler Foundation explains how workers’ co-determination can be achieved in practice.

A dollar for your face: Meet the people behind Machine Learning models

To train machine learning models, tech companies are hiring a Germany-based service provider to buy selfies and pictures of ID cards from underpaid gig workers, whose rights are often disregarded.

New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work

Our report shows that trade unions are now called upon to provide practical advice and guidance that empowers union representatives and negotiators to deal with the challenges automation poses to workers.

Wolt: Couriers’ feelings don’t always match the transparency report

In August, the Finnish delivery service Wolt published its first “algorithmic transparency report”. We asked three couriers about their experiences, which don’t always match the report’s contents.

Digital Bouncers: AI in Recruiting

Companies increasingly use automated decision-making systems to decide who is best for a job. Applicants worry about being rejected by a machine on the basis of programmed prejudices. In Switzerland, employers are especially reluctant to speak about the hiring algorithms they use.

LinkedIn automatically rates “out-of-country” candidates as “not fit” in job applications

A feature on LinkedIn automatically rates candidates applying from another EU country as “not a fit”, which may be illegal. I asked six national and European agencies about the issue. None seemed interested in enforcing the law.

Correlation, causation & proxy variables?

Short explanatory videos introduce terms and concepts relevant to automation in HR.
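The notion of a proxy variable can be made concrete with a small sketch. The data and the filter rule below are entirely invented for illustration: a screening rule that never looks at gender, but keys on a feature that correlates with it (here, a hypothetical "career gap" field) and thereby reproduces a gendered outcome.

```python
# Invented toy data illustrating a proxy variable in automated hiring.
# The filter never uses gender directly, yet a correlated feature
# (career gap in years) lets the gendered pattern back in.
applicants = [
    # (gender, career_gap_years) -- fabricated values, not from any study
    ("f", 2), ("f", 3), ("f", 0), ("f", 2),
    ("m", 0), ("m", 1), ("m", 0), ("m", 0),
]

def passes_filter(gap_years):
    """A seemingly neutral rule: reject anyone with a gap over 1 year."""
    return gap_years <= 1

selected = [gender for gender, gap in applicants if passes_filter(gap)]
rate_f = selected.count("f") / 4  # share of women who pass
rate_m = selected.count("m") / 4  # share of men who pass
print(rate_f, rate_m)  # 0.25 1.0 -- the "neutral" rule selects 4x more men
```

This is the core of why removing a protected attribute from the input data does not by itself make an ADM system non-discriminatory: correlated features can stand in for it.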

Reviewing essential features of AI-based systems for works councils and other staff representatives

by Prof. Dr. Sebastian Stiller, Jule Jäger & Sebastian Gießler

People analytics in the workplace – how to effectively enforce labor rights

Introduction and recommendations

Finland: How to unionize when your boss is an algorithm and you’re self-employed

A group of Finnish couriers launched the Justice4Couriers campaign in 2018. Although they are technically self-employed, they must obey the whims of their platform’s algorithm. They are fighting back.

People Analytics must benefit the people

An ethical analysis of data-driven algorithmic systems in human resources management, by Michele Loi

The year the wrong Amazon burnt: 2019 in review

As the hype around “artificial intelligence” leveled off, the impact of automated decision-making made itself seen. Regulators and civil society fought hard to rein in Big Tech, but much remains to be done to achieve a good balance.

Controversial service that ranked job seekers based on personal emails folds following AlgorithmWatch investigation

A Finnish company that automatically parsed the personal emails of job applicants to assess their corporate “fit” discontinued its service after reports by AlgorithmWatch and others raised questions about its legality.

Austria’s employment agency rolls out discriminatory algorithm, sees no problem

AMS, Austria's employment agency, is about to roll out a sorting algorithm that gives lower scores to women and to the disabled. It is very likely illegal under current anti-discrimination law.

Defective computing: How algorithms use speech analysis to profile job candidates

Some companies and scientists present Affective Computing, the algorithmic analysis of personality traits also known as “artificial emotional intelligence”, as an important new development. But the methods that are used are often dubious and present serious risks for discrimination.

Personal Scoring in the EU: Not quite Black Mirror yet, at least if you’re rich

A centralized, permanent, and public personal scoring system is very unlikely to appear in EU countries, but a large share of the European population is, or will be, subject to invasive scoring mechanisms nonetheless.

Mind The Algorithm

The question of whether automation benefits or harms us as citizens is primarily a political one. No one should let themselves be told that only those who have studied mathematics or computer science can take part in the discussion.