#eu (25 results)


“Risks come not just from technology” – Input to the EU on systemic risks and the DSA

AlgorithmWatch has submitted expert input to the EU on systemic risks stemming from online platforms and search engines. We argued that risks come not just from technology, but also from (1) the attitudes of companies and (2) a lack of transparency around enforcement. This input was submitted at the invitation of the European Board for Digital Services and the EU Commission, to help prepare their first report on systemic risks under the Digital Services Act. Read our full response here:

AI Action Summit in Paris – a missed opportunity?

Our Executive Directors, Angela Müller and Matthias Spielkamp, represented us last week at the international AI Action Summit in Paris, hosted by the French Government. So, what to make of the summit, the billion-dollar promises made there, and the Big Tech party beats that never stop pounding? These are their main observations.

As of February 2025: Harmful AI applications prohibited in the EU

The first bans under the EU AI Act are now applicable. Certain risky AI systems that have already been trialed or used in everyday life are from now on – at least partially – prohibited.

A Dual Track Approach to Systemic Risks ensures flexibility and transparency in the Digital Services Act

Joint Statement

Upcoming Commission Guidelines on the AI Act Implementation: Human Rights and Justice Must Be at Their Heart

The Artificial Intelligence Act establishes rules for the development and use of AI in the EU. Now that the law is being implemented, civil society calls on the EU Commission to put human rights and justice at the forefront when interpreting it.

A Year of Challenging Choices – 2024 in Review

2024 was a "super election" year and it marked the rise of generative Artificial Intelligence. With the adoption of the AI Act, it seemed poised to be the moment we finally gained control over automated systems. Yet, that certainty still feels out of reach.

Automation on the Move

Systems based on Artificial Intelligence (AI) and automated decision-making (ADM) are increasingly being experimented with and used on migrants, refugees, and travelers. Too often, this is done without adequate democratic discussion or oversight. In addition, their use lacks transparency and justification to those who are subjected to these systems, as a growing body of literature and evidence shows. This needs to change.

Automation on the Move

The Automation of Fortress Europe: Behind the Black Curtain

The European Union poured 5 million euros into the development of a border surveillance system called NESTOR. When we tried to look into it, we were presented with hundreds of redacted, blacked-out pages.

Automation on the Move

Automating EU Borders, Broken Checks and Balances

For over a year, we have been looking into EU-funded border security research projects to assess their methodological approaches and ethical implications. We failed.

Automation on the Move

EMERALD – The One That Fell from Grace

Only one proposed border security research project did not meet the EU’s ethical requirements and was rejected, AlgorithmWatch found. What was so unique about this surveillance system?

Image Generators Are Trying to Hide Their Biases – And They Make Them Worse

In the run-up to the EU elections, AlgorithmWatch investigated which election-related images popular AI systems can generate. Two of the largest providers do not adhere to the safety measures they themselves recently announced.

Campaign: ADM and People on the Move

Borders without AI

29,000 people have died in the Mediterranean over the past ten years while trying to reach the EU. You would think that the EU wanted this tragedy to stop and that scientists across Europe were working feverishly to make that happen with the latest technology. The opposite is the case: with the help of so-called Artificial Intelligence, the walls are being raised, financed with taxpayers' money.

Campaign: ADM and People on the Move

The Automated Fortress Europe: No Place for Human Rights

29,000 people have died in the Mediterranean over the past ten years while trying to reach the EU. You would think that the EU wanted this tragedy to stop and that scientists across Europe were working feverishly to make that happen with the latest technology. The opposite is the case: with the help of so-called Artificial Intelligence, digital border walls are being raised, financed with taxpayers' money.

EU’s AI Act fails to set gold standard for human rights

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. Here’s our round-up of how the final law fares against our collective demands.

Press release

EU Parliament votes on AI Act; member states will have to plug surveillance loopholes

Today, the European rulebook on AI - the so-called EU Artificial Intelligence Act (AI Act) - will be adopted in the European Parliament. It aims to regulate AI development and usage in the European Union. Although the Act takes basic steps in safeguarding fundamental rights, it includes glaring loopholes that threaten to undermine its very purpose.

Press release

AI Act about to finally become law?

Press release

Europe’s Approach to AI regulation: Embracing Big Tech and Security Hardliners

Europe is about to adopt two major regulations on Artificial Intelligence: the EU’s AI Act and the Council of Europe’s Convention on AI. Yet, while both rulebooks were initially meant to turn the tables on Big Tech and to effectively protect people against governments' abuse of AI technology, the interests of tech companies and governments' security hardliners may win out.

Press release

AI Act deal: Key safeguards and dangerous loopholes

After a negotiation marathon in the global spotlight, EU lawmakers closed a deal late Friday night on the AI Act, the EU’s law on regulating the development and use of Artificial Intelligence systems. The EU Parliament managed to introduce a number of key safeguards to improve the original draft text, but Member States still decided to leave people without protection in situations where they would need it most.

Press release

AI Act drama: Illegitimate deals, irresponsible negotiation hours, and unacceptable pressure games

After a negotiation marathon on its Artificial Intelligence Act, EU Member States put the screws on the EU Parliament to put national security and industry interests before the protection of people’s rights. After negotiating for over 20 hours, sleep-deprived EU lawmakers were pressuring each other today to close an unacceptable deal – on some of the most fundamental implications of AI for people and society.

Op-Ed

Generative AI must be neither the stowaway nor the gravedigger of the AI Act

Apparently, adoption of the AI Act as a whole is at risk because the EU Council and Parliament are unable to reach a common position on generative AI, with some Member States wanting to exempt generative AI from any kind of regulation. This is highly irresponsible, as it threatens effective prevention of harms caused by AI-driven systems in general.

Civil society calls on the EU to draw limits on surveillance technology

Police and migration authorities must respect fundamental rights when using AI

As AI systems are increasingly used by law enforcement, migration control and national security authorities, the EU Artificial Intelligence Act (AI Act) is an urgent opportunity to prevent harm, protect people from rights violations and provide legal boundaries for authorities to use AI within the confines of the rule of law.

Statement with 118 organizations

EU legislators must close dangerous loophole in AI Act

The European Union is entering the final stage of negotiations on its Artificial Intelligence Act (AI Act), but Big Tech and other industry players have lobbied to introduce a major loophole to the high-risk classification process, undermining the entire legislation. We call on EU legislators to remove this loophole and maintain a high level of protection in the AI Act.


Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Final EU negotiations: we need an AI Act that puts people first

As the final stage of negotiations on the Artificial Intelligence Act (AI Act) begins, AlgorithmWatch, together with 149 civil society organisations, calls on EU institutions to improve the regulation. In our civil society statement, we spell out clear suggestions for how the regulation should be changed in order to protect people and our human rights effectively.

Political Ads: EU Lawmakers must uphold human rights to privacy and free expression

In light of a leaked “non-paper” from the European Commission, AlgorithmWatch and 26 other civil society organizations have called on EU co-legislators to address our serious concerns about the proposed regulation on Targeting and Transparency of Political Advertising.
