Stories

Medical devices using AI/ML are poorly regulated: study

article

A review of 338 AI-powered medical devices approved in Europe and in the United States reveals holes in the European review process.

China’s social credit system was due by 2020 but is far from ready

Story

By Qian Sun
Six years after the government announced plans for a national social credit score, Chinese citizens face dozens of systems that are largely incompatible with each other. The central govern…

In Poland, a law made loan algorithms transparent. Implementation is nonexistent.

Story

By Konrad Szczygieł
Since May 2019, and as a first in the EU, Polish consumers have the right to know in detail why a bank decided to grant or refuse them a loan, even for small amounts. But in practi…

New report highlights the risks of AI on fundamental rights

Positions

The European watchdog for fundamental rights published a report on Artificial Intelligence. AlgorithmWatch welcomes some of the recommendations, and encourages a bolder approach. The European Union Ag…

Despite transparency, the Nutri-Score algorithm faces strong resistance

op-ed

The Nutri-Score summarizes basic nutritional information on a 5-letter scale. Despite its many qualities, it faces a strong backlash that could hold a lesson for operators of automated systems.

Health algorithms discriminate against Black patients, also in Switzerland

Story

Algorithms used to assess kidney function or predict heart failure use race as a central criterion. Continue reading the story at the AlgorithmWatch Switzerland website…

Dutch city uses algorithm to assess home value, but has no idea how it works

Story

In a seemingly routine case at the Amsterdam court of appeal, a judge ruled that it was acceptable for a municipality to use a black-box algorithm, as long as the results were unsurprising.

French tax authority pushes for automated controls despite mixed results

Story

Since 2014, a team of data scientists has supported local tax offices in identifying complex fraud. But a baser motive could be at play: making tax collectors redundant.

Spanish police plan to extend the use of their lie detector while its efficacy remains unclear

Story

Veripol is a software tool that assesses the veracity of complaints filed with the Spanish national police. It was introduced in 2018, but it is unclear whether it works as intended.

Spam filters are efficient and uncontroversial. Until you look at them.

Story

An experiment reveals that Microsoft Outlook marks messages as spam on the basis of a single word, such as “Nigeria”. Spam filters are largely unaudited and could discriminate unfairly.

Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery

Story

An experiment by AlgorithmWatch shows that online platforms optimize ad delivery in discriminatory ways. Advertisers who use them could be breaking the law.

In French daycare, algorithms attempt to fight cronyism

Story

In many cities, it is unclear whose children can hope for a place in a public daycare facility. Algorithms could make the allocation of places more transparent, but not all politicians are happy.

Female historians and male nurses do not exist, Google Translate tells its European users

An experiment shows that Google Translate systematically changes the gender of translations when they do not fit with stereotypes. It is all because of English, Google says…

In Italy, an appetite for face recognition in football stadiums

Story

Right before the pandemic, the government and top sports authorities were planning a massive deployment of face recognition and sound surveillance technologies in all Italian football stadiums. The re…

Suzhou introduced a new social scoring system, but it was too Orwellian, even for China

Story

By Qian Sun
A city of 10 million in eastern China upgraded its Covid-tracking app to introduce a new “civility” score. It had to backtrack after a public outcry. Suzhou is a city with a population of…

For researchers, accessing data is one thing. Assessing its quality another.

Story

By Nicolas Kayser-Bril
Online platforms often provide data that is riddled with errors. Rather than launching quixotic attempts at fixing them, researchers i…

GPT-3 is a lot of fun, but no game-changer

op-ed

We usually do not write about newly released software, especially when there is no way to audit it. But the hype over GPT-3, a natural language generator, was such that several readers asked for a rev…

Pre-crime at the tax office: How Poland automated the fight against VAT fraud.

Story

By Konrad Szczygieł
This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the report is…

Under the Twitter streetlight: How data scarcity distorts research

Story

By Nicolas Kayser-Bril
As part of our #LeftOnRead campaign, several researchers testified to the reluctance of online platforms to provide useful data. Many…

Algorithmic grading is not an answer to the challenges of the pandemic

op-ed

By Graeme Tiffany
Graeme Tiffany is a philosopher of education. He argues that replacing exams with algorithmic grading, as was done in Great Britain, exacerbates inequalities and fails to assess stud…

Spain’s largest bus terminal deployed live face recognition four years ago, but few noticed

Story

By Naiara Bellio López-Molina
Madrid South Station’s face recognition system automatically matches every visitor’s face against a database of suspects, and shares information with the Spanish police.

In a quest to optimize welfare management, Denmark built a surveillance behemoth

Story

By Nicolas Kayser-Bril
This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to o…

Broken Horizon: In Greece, research in automation fails to find applications

Story

By Nikolas Leontopoulos
This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the repor…

Swiss police automated crime predictions but have little to show for it

Story

A review of three automated systems in use by the Swiss police and judiciary reveals serious issues. Real-world effects are impossible to assess due to a lack of transparency.
