For trans people, online transitioning can be a nightmare
Just a formality? Updating one’s online identity can be like running the gauntlet, and a flawed data architecture only makes it worse for the transgender community.

What is algorithmic discrimination?
Discrimination by algorithms and Artificial Intelligence (AI): we give an overview of the topic.

How and why algorithms discriminate
Automated decision-making systems can contain hidden discriminatory biases. We’ll explain the causes, the possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.

Not a solution: Meta’s new AI system to contain discriminatory ads
Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it is probably more of a band-aid than an AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify whether the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

France: the new law on the 2024 Olympic and Paralympic Games threatens human rights
France has proposed a new law on the 2024 Olympic and Paralympic Games (projet de loi relatif aux jeux Olympiques et Paralympiques de 2024) which would legitimize the use of invasive algorithm-driven video surveillance under the pretext of “securing big events”. The law would create a legal basis for scanning public spaces to detect specific suspicious events.

AutoCheck workshops on Automated Decision-Making Systems and Discrimination
Understanding causes, recognizing cases, supporting those affected: documents for implementing a workshop.

How to combat algorithmic discrimination? A guidebook by AutoCheck
We encounter automated decision-making systems almost every day, and they might be discriminating against us without our even knowing it. A new guidebook helps to recognize such cases and to support those affected.

Algorithmic Discrimination – How to adjust German anti-discrimination law
In its coalition agreement, the new German government signaled its intention to evaluate the German anti-discrimination law (Allgemeines Gleichbehandlungsgesetz – AGG). We call on the government to account for the special features of algorithmic discrimination, for instance by considering a right to collective redress in order to better protect those affected.

Costly birthplace: a discriminatory insurance practice
Two Rome residents with exactly the same driving history, car, age, profession, and number of years holding a driving license may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.

Fixing Online Forms Shouldn’t Wait Until Retirement
A new Unding survey is investigating discrimination in online forms. But operators are already getting angry emails. Behind some of them: a recently retired IT consultant with one of the most common surnames in the world and 30 years’ experience of not being able to sign up.

LinkedIn automatically rates “out-of-country” candidates as “not fit” in job applications
A feature on LinkedIn automatically rates candidates applying from another EU country as “not a fit”, which may be illegal. I asked six national and European agencies about the issue. None seemed interested in enforcing the law.

“We’re looking for cases of discrimination through algorithms in Germany.”
The AutoCheck project investigates the risks of discrimination inherent in automated decision-making systems (ADMS). In this interview, project manager Jessica Wulf talks about the search for example cases and how the project will support counseling centers and further education on the topic.

Europeans can’t talk about racist AI systems. They lack the words.
In Europe, several automated systems, either planned or operational, actively contribute to entrenching racism. But European civil society literally lacks the words to address the issue.

Health algorithms discriminate against Black patients, also in Switzerland
Algorithms used to assess kidney function or predict heart failure use race as a central criterion. Continue reading the story on the AlgorithmWatch Switzerland website.

Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery
An experiment by AlgorithmWatch shows that online platforms optimize ad delivery in discriminatory ways. Advertisers who use them could be breaking the law.

Female historians and male nurses do not exist, Google Translate tells its European users
An experiment shows that Google Translate systematically changes the gender of translations when they do not fit stereotypes. It is all because of English, Google says.

Algorithmic grading is not an answer to the challenges of the pandemic
Graeme Tiffany is a philosopher of education. He argues that replacing exams with algorithmic grading, as was done in Great Britain, exacerbates inequalities and fails to assess students’ abilities.

Undress or fail: Instagram’s algorithm strong-arms users into showing skin
An exclusive investigation reveals that Instagram prioritizes photos of scantily clad men and women, shaping the behavior of content creators and the worldview of 140 million Europeans in what remains a blind spot of EU regulations.

Automated moderation tool from Google rates People of Color and gays as “toxic”
A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic than others.

Unchecked use of computer vision by police carries high risks of discrimination
At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination run high, but authorities ignore them.

Google apologizes after its Vision AI produced racist results
A Google service that automatically labels images produced starkly different results depending on the skin tone shown in a given image. The company fixed the issue, but the problem is likely much broader.