#publicsphere (138 results)

Page 1 of 14
Clarote & AI4Media / Better Images of AI / Power/Profit / Licensed by CC-BY 4.0

Blog, 14 February 2024

DSA Day and platform risks

Got Complaints? Want Data? Digital Services Coordinators will have your back – or will they?

The Digital Services Act (DSA) is the EU’s new regulation addressing risks posed by online platforms and search engines. It has been in effect since 2023, but 17 February 2024 marks “DSA Day,” when many of the regulation’s most impactful provisions come into force.

Read more
Photo by Michael Dziedzic on Unsplash

Publication, 14 February 2024

Ensuring Legitimacy in Stakeholder Engagement: The ‘5 Es’ Framework

The DSA foresees that external stakeholders – such as independent experts, civil society groups, and industry representatives – engage in its rollout and enforcement. To help ensure the legitimacy of these processes, we have developed the “5 Es” Framework, encompassing five guiding principles: Equity, Expertise, Effectiveness, Empowering, and Expanding Competencies.

Read more

Publication, 15 December 2023

New study: Research on Microsoft Bing Chat

AI Chatbot produces misinformation about elections

Bing Chat, the AI-driven chatbot on Microsoft’s search engine Bing, makes up false scandals about real politicians and invents polling numbers. Microsoft seems unable or unwilling to fix the problem. These findings are based on a joint investigation by AlgorithmWatch and AI Forensics, the final report of which has been published today. We tested whether the chatbot would provide factual answers when prompted about the Bavarian, Hessian, and Swiss elections that took place in October 2023.

Read more

15 December 2023

Press release

Microsoft’s Bing Chat: A source of misinformation on elections

Microsoft’s AI-driven chatbot Copilot, formerly known as Bing Chat, generates factually inaccurate and fabricated information about elections in Switzerland and Germany. This raises concerns about potential damage to the reputation of candidates and news sources. By making search engine results less reliable, generative AI undermines one of the cornerstones of democracy: access to reliable and transparent public information on the internet.

Read more
Khari Slaughter for AlgorithmWatch

Project, 5 October 2023

New research

ChatGPT and Co: Are AI-driven search engines a threat to democratic elections?

A new study by AlgorithmWatch and AI Forensics shows that using Large Language Models like Bing Chat as a source of information on how to vote is a very bad idea. Their answers to important questions are in part completely wrong and in part misleading, so the likes of ChatGPT can endanger the formation of public opinion in a democracy.

Read more
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed by CC-BY 4.0

Publication, 1 August 2023

Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are being conducted at this very moment.

Read more
Illustration: Julia Schwarz

Story, 19 July 2023

Algorithmic Accountability Reporting

Peeking into the Black Box

Welfare fraud scoring, predictive policing, or ChatGPT: Lawmakers and government officials around the world are increasingly relying on algorithms, and most of them are completely opaque. Algorithmic Accountability Reporting takes a closer look at how they work and the effects they have. But only very few media outlets conduct such reporting. Why?

Read more
Photo by Christian Lue on Unsplash

Position, 4 July 2023

Battle in Strasbourg: Civil society fights for safeguards against AI harms

With negotiations on a Convention on Artificial Intelligence (AI) within the Council of Europe entering a crucial stage, a joint statement by AlgorithmWatch and ten other civil society organizations reminds the negotiating states of their mandate: to protect human rights, democracy, and the rule of law. To adhere to this mandate and to counter both narrow state interests and companies’ lobbying, civil society’s voice must be heard.

Read more
Photo by Ricardo Arce on Unsplash

Position, 21 June 2023

Political Ads: EU Lawmakers must uphold human rights to privacy and free expression

In light of a leaked “non-paper” from the European Commission, AlgorithmWatch and 26 other civil society organizations have called on EU co-legislators to address our serious concerns about the proposed regulation on Targeting and Transparency of Political Advertising.

Read more
Photo by wim hoppenbrouwers on Flickr

Position, 5 June 2023

Joint statement

A diverse auditing ecosystem is needed to uncover algorithmic risks

The Digital Services Act (DSA) will force the largest platforms and search engines to pay for independent audits to help check their compliance with the law. But who will audit the auditors? Read AlgorithmWatch and AI Forensics' joint feedback to the European Commission on strengthening the DSA’s independent auditing rules via a Delegated Act.

Read more
If you want to learn more about our policy & advocacy work on ADM in the public sphere, get in touch with:
Clara Helming
Senior Advocacy & Policy Manager