#publicsphere (143 results)

If you want to learn more about our policy & advocacy work on ADM in the public sphere, get in touch with:

Clara Helming
Senior Advocacy & Policy Manager

Meta’s elections dashboard: A very disappointing sign

On 3 June, Meta released an EU election monitoring dashboard, responding to investigations by the EU Commission under the Digital Services Act. It is riddled with basic errors, raising serious concerns about Meta’s engagement with risks of electoral interference.

Recommendations for the EU Elections 2024

Tech governance has become a key focus for the European Union. New laws have been introduced to reshape how technology and the internet are regulated. Success in EU tech governance hinges on effectively implementing and evolving these new laws to bridge gaps and adapt to technological advances.

10 Questions about AI and elections

2024 is an important election year. While citizens all over the world get ready to cast their ballots, many people worry about AI: Are chatbots and fake images a threat to democracy? Can we still trust what we see online? We explain why the hype around AI and elections is somewhat overblown, and which real risks we need to watch out for instead.

Stance

If the UN wants to help humanity, it should not fall for AI hype

What should the international governance of AI look like? This is the thorny question the UN Secretary General’s AI Advisory Body tries to address in its first interim report. We have highlighted some concerning aspects of the report in a recent consultation process.

AlgorithmWatch proposals on mitigating election risks for online platforms

Despite hopes that the Digital Services Act could protect against online risks during upcoming elections, this looks increasingly unlikely due to delays and issues in implementation. The EU Commission has sought input on how to mitigate election risks, and AlgorithmWatch has responded.

DSA Day and platform risks

Got Complaints? Want Data? Digital Services Coordinators will have your back – or will they?

The Digital Services Act (DSA) is the EU’s new regulation against risks from online platforms and search engines. It has been in effect since 2023, but 17 February 2024 marks “DSA Day,” on which many of the regulation’s most impactful provisions come into force.

Ensuring Legitimacy in Stakeholder Engagement: The ‘5 Es’ Framework

The DSA foresees that external stakeholders – such as independent experts, civil society groups, and industry representatives – engage in its rollout and enforcement. To help ensure the legitimacy of these processes, we have developed the “5 Es” Framework, encompassing five guiding principles: Equity, Expertise, Effectiveness, Empowering, and Expanding Competencies.

New study: Research on Microsoft Bing Chat

AI Chatbot produces misinformation about elections

Bing Chat, the AI-driven chatbot on Microsoft’s search engine Bing, makes up false scandals about real politicians and invents polling numbers. Microsoft seems unable or unwilling to fix the problem. These findings are based on a joint investigation by AlgorithmWatch and AI Forensics, the final report of which has been published today. We tested whether the chatbot would provide factual answers when prompted about the Bavarian, Hessian, and Swiss elections that took place in October 2023.

Press release

Microsoft’s Bing Chat: A source of misinformation on elections

Microsoft’s AI-driven chatbot Copilot, formerly known as Bing Chat, generates factually inaccurate and fabricated information about elections in Switzerland and Germany. This raises concerns about potential damage to the reputation of candidates and news sources. In making search engine results less reliable, generative AI impacts one of the cornerstones of democracy: access to reliable and transparent public information on the internet.

New research

ChatGPT and Co: Are AI-driven search engines a threat to democratic elections?

A new study by AlgorithmWatch and AI Forensics shows that using Large Language Models like Bing Chat as a source of information on how to vote is a very bad idea. Their answers to important questions are in part completely wrong and in part misleading, so the likes of ChatGPT can endanger the formation of public opinion in a democracy.


Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Algorithmic Accountability Reporting

Peeking into the Black Box

Welfare fraud scoring, predictive policing, or ChatGPT: Lawmakers and government officials around the world are increasingly relying on algorithms, and most of them are completely opaque. Algorithmic Accountability Reporting takes a closer look at how they work and the effects they have. But only very few media outlets conduct such reporting. Why?

Battle in Strasbourg: Civil society fights for safeguards against AI harms

With negotiations on a Convention on Artificial Intelligence (AI) within the Council of Europe entering a crucial stage, a joint statement by AlgorithmWatch and ten other civil society organizations reminds negotiating states of their mandate: to protect human rights, democracy, and the rule of law. To adhere to this mandate and to counter both narrow state interests and companies’ lobbying, the voice of civil society must be heard.

Political Ads: EU Lawmakers must uphold human rights to privacy and free expression

In light of a leaked “non-paper” from the European Commission, AlgorithmWatch and 26 other civil society organizations have called on EU co-legislators to address our serious concerns about the proposed regulation on Targeting and Transparency of Political Advertising.

Joint statement

A diverse auditing ecosystem is needed to uncover algorithmic risks

The Digital Services Act (DSA) will force the largest platforms and search engines to pay for independent audits to help check their compliance with the law. But who will audit the auditors? Read AlgorithmWatch and AI Forensics' joint feedback to the European Commission on strengthening the DSA’s independent auditing rules via a Delegated Act.

Open letter

DSA must empower public interest research with public data access

Access to “public data” is key for researchers and watchdogs working to uncover societal risks stemming from social media – but major platforms like Facebook and Twitter are cutting off access to the important data analytics tools used to study them. The EU must now step in to ensure that researchers aren’t left in the dark.

Call for Evidence: new rules must empower researchers where platforms won’t

The ink may have dried on the Digital Services Act (DSA), but key data access provisions are still being written with input from researchers and civil society experts. Read AlgorithmWatch’s submission to the European Commission.

The EU now has the means to rein in large platforms. It should start with Twitter.

The European Commission today announced the platforms that will have to comply with the strictest rules the Digital Services Act imposes on companies. Twitter should be at the top of its list when enforcing these rules.

France: the new law on the 2024 Olympic and Paralympic Games threatens human rights

France has proposed a new law on the 2024 Olympic and Paralympic Games (projet de loi relatif aux jeux Olympiques et Paralympiques de 2024), which would legitimize the use of invasive algorithm-driven video surveillance under the pretext of “securing big events”. This new French law would create a legal basis for scanning public spaces to detect specific suspicious events.

A joint statement on Digital Services Act implementation at the national level

Now that the political process of negotiating the landmark new set of EU rules for a safer and more accountable online environment has concluded, civil society organizations from across Europe have joined forces to offer suggestions on how to harmonize the DSA implementation process across EU member states.

Platforms’ promises to researchers: first reports missing the baseline

An initial analysis shows that platforms have done little to “empower the research community” despite promises made last June under the EU’s revamped Code of Practice on Disinformation.

A guide to the EU’s new rules for researcher access to platform data

Thanks to the Digital Services Act (DSA), public interest researchers in the EU have a new legal framework to access and study internal data held by major tech platforms. What does this framework look like, and how can it be put into practice?

Mastodon could make the public sphere less toxic, but not for all

The open-source social network gained millions of new users following Twitter’s takeover. While some of its features could improve the quality of public discourse, disadvantaged communities might be excluded.

Open Letter: EU must protect fundamental freedoms for online political speech

As EU lawmakers negotiate important new transparency rules for online political ads, AlgorithmWatch and eight other civil society organizations are calling on the German government to address serious risks to democratic pluralism and freedom of expression contained in the Council’s most recent proposal.

Civil society responds to the Council of Europe Treaty on AI

Together with other observer civil society organizations in the Council of Europe’s Committee on AI, AlgorithmWatch stresses the importance of grounding the legal framework on AI currently being elaborated in Strasbourg in human rights, democracy, and the rule of law. We urge the EU not to delay this process in light of the ongoing negotiations on its own AI Act in Brussels. The two frameworks serve different purposes and should complement rather than copy each other.
