Positions

Researching Systemic Risks under the Digital Services Act

AlgorithmWatch has been investigating the concept of "systemic risks" under the Digital Services Act (DSA), the new EU regulation that aims to minimize risks and increase accountability for online platforms and search engines.

Meta’s elections dashboard: A very disappointing sign

On 3 June, Meta released an EU election monitoring dashboard, responding to investigations by the EU Commission under the Digital Services Act. It is riddled with basic errors, raising severe concerns about Meta’s engagement with risks of electoral interference.

Recommendations for the EU Elections 2024

Tech governance has become a key focus for the European Union. New laws have been introduced to reshape how technology and the internet are regulated. Success in EU tech governance hinges on effectively implementing and evolving these new laws to bridge gaps and adapt to technological advances.

Stance

If the UN wants to help humanity, it should not fall for AI hype

What should the international governance of AI look like? This is the thorny question the UN Secretary General’s AI Advisory Body tries to address in its first interim report. We have highlighted some concerning aspects of the report in a recent consultation process.

EU’s AI Act fails to set gold standard for human rights

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. Here’s our round-up of how the final law fares against our collective demands.

AlgorithmWatch proposals on mitigating election risks for online platforms

Despite hopes that the Digital Services Act could protect against online risks during upcoming elections, this looks increasingly unlikely due to delays and issues in implementation. The EU Commission has sought input on how to mitigate election risks, and AlgorithmWatch has responded.

Press release

EU Parliament votes on AI Act; member states will have to plug surveillance loopholes

Today, the European rulebook on AI, the EU Artificial Intelligence Act (AI Act), will be adopted in the European Parliament. It aims to regulate AI development and usage in the European Union. Although the Act takes basic steps toward safeguarding fundamental rights, it includes glaring loopholes that threaten to undermine its very purpose.

Press release

The Council of Europe’s Convention on AI: No free ride for tech companies and security authorities!

The Convention on AI is intended to be the first legally binding international agreement on AI. The final round of negotiations will take place in Strasbourg starting 11 March 2024. The members of the Council of Europe (including the EU member states) as well as non-members such as the US, Japan, and Canada will be sitting at the negotiating table. AlgorithmWatch, over 90 civil society organizations, and prominent academics are calling on the negotiating states to regulate companies’ and national security authorities’ use of AI.

Press release

Europe’s Approach to AI Regulation: Embracing Big Tech and Security Hardliners

Europe is about to adopt two major regulations on Artificial Intelligence: the EU’s AI Act and the Council of Europe’s Convention on AI. Yet while both rulebooks were initially meant to turn the tables on Big Tech and to effectively protect people against governments' abuse of AI technology, the interests of tech companies and of governments' security hardliners may win out.

Op-Ed

Generative AI must be neither the stowaway nor the gravedigger of the AI Act

Adoption of the AI Act as a whole is apparently at risk because the EU Council and Parliament are unable to reach a common position on generative AI, with some Member States wanting to exempt generative AI from any kind of regulation. This is highly irresponsible, as it threatens effective prevention of the harms caused by AI-driven systems in general.

Civil society calls on the EU to draw limits on surveillance technology

Police and migration authorities must respect fundamental rights when using AI

As AI systems are increasingly used by law enforcement, migration control and national security authorities, the EU Artificial Intelligence Act (AI Act) is an urgent opportunity to prevent harm, protect people from rights violations and provide legal boundaries for authorities to use AI within the confines of the rule of law.

Expert Policy Proposal

The AI Act and General Purpose AI

Key recommendations to inform the EU's AI Act negotiations regarding General Purpose AI

Statement with 118 organizations

EU legislators must close dangerous loophole in AI Act

The European Union is entering the final stage of negotiations on its Artificial Intelligence Act (AI Act), but Big Tech and other industry players have lobbied to introduce a major loophole into the high-risk classification process, undermining the entire legislation. We call on EU legislators to remove this loophole and maintain a high level of protection in the AI Act.


Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Final EU negotiations: we need an AI Act that puts people first

As negotiations on the Artificial Intelligence Act (AI Act) enter their final stage, AlgorithmWatch, together with 149 civil society organisations, calls on EU institutions to improve the regulation. In our civil society statement, we spell out clear suggestions for how the regulation should be changed in order to protect people and our human rights effectively.

Battle in Strasbourg: Civil society fights for safeguards against AI harms

With negotiations on a Convention on Artificial Intelligence (AI) within the Council of Europe entering a crucial stage, a joint statement by AlgorithmWatch and ten other civil society organizations reminds negotiating states of their mandate: to protect human rights, democracy, and the rule of law. To adhere to this mandate and to counter both narrow state interests and companies’ lobbying, the voice of civil society must be heard.

Political Ads: EU Lawmakers must uphold human rights to privacy and free expression

In light of a leaked “non-paper” from the European Commission, AlgorithmWatch and 26 other civil society organizations have called on EU co-legislators to address our serious concerns about the proposed regulation on Targeting and Transparency of Political Advertising.

Press release

EU Parliament vote on AI Act: Lawmakers chose to protect people against harms of AI systems

After long months of intense negotiations, members of the European Parliament voted on the EU’s Artificial Intelligence Act (AI Act). AlgorithmWatch applauds the Parliament for strengthening fundamental rights protection against the negative impacts of AI, such as face recognition in public spaces. Yet the Parliament missed the opportunity to enhance protection for some of the people who need it most.

Joint statement

A diverse auditing ecosystem is needed to uncover algorithmic risks

The Digital Services Act (DSA) will force the largest platforms and search engines to pay for independent audits to help check their compliance with the law. But who will audit the auditors? Read AlgorithmWatch and AI Forensics' joint feedback to the European Commission on strengthening the DSA’s independent auditing rules via a Delegated Act.

Open letter

DSA must empower public interest research with public data access

Access to “public data” is key for researchers and watchdogs working to uncover societal risks stemming from social media, but major platforms like Facebook and Twitter are cutting off access to the data analytics tools needed to study them. The EU must now step in to ensure that researchers aren’t left in the dark.

Call for Evidence: new rules must empower researchers where platforms won’t

The ink may have dried on the Digital Services Act (DSA), but key data access provisions are still being written with input from researchers and civil society experts. Read AlgorithmWatch’s submission to the European Commission.

The EU now has the means to rein in large platforms. It should start with Twitter.

The European Commission today announced the platforms that will have to comply with the strictest rules the Digital Services Act imposes on companies. Twitter should be at the top of its list when it comes to enforcing these rules.

Civil society statement: we call on members of the EU Parliament to ensure the AI Act protects people and our rights

AlgorithmWatch demands the regulation of General Purpose AI in the AI Act

Recently, there has been much speculation about large language models spelling doom for humanity. This is a red herring that distracts from the real harm people already suffer from such models. To prevent an escalation of harmful effects, AlgorithmWatch calls on the German Government and the EU to implement an array of effective precautions.

Risky business: How do we get a grip on social media algorithms?

Public scrutiny is essential to understand the risks that personalized recommender systems pose to society. The DSA’s new transparency regime is a promising step forward, but we still need external, adversarial audits to hold platforms accountable.