Recommendations for the EU Elections 2024

Tech governance has become a key focus for the European Union. New laws have been introduced to reshape how technology and the internet are regulated. Success in EU tech governance hinges on effectively implementing and evolving these new laws to bridge gaps and adapt to technological advances.

Clara Helming
Senior Advocacy & Policy Manager

Challenges persist that require full commitment from the European Parliament, the Commission, and the Member States.

In the upcoming legislative period, the EU must prioritize tech policies that safeguard social justice and a sustainable society. It should continue introducing policies that hold big tech companies accountable, strengthen democracy, and prioritize the rights, well-being, and autonomy of all individuals in society.

The EU must:

1. Protect civic spaces online and offline

Enforce proper platform governance! A key achievement of the Digital Services Act, the right to access research data, must finally take effect. Researchers have not yet been given access to the promised data, and platforms remain reluctant to implement transparency requirements, risk assessments, and risk mitigation measures appropriately. The Commission must actively support researchers so that the powers the DSA promises on paper can actually be exercised in practice.

Strengthen oversight of tech companies through independent research! Protections from algorithmic harms require effective external scrutiny. At present, this is not possible. The European institutions must support auditing of algorithmic and AI systems, including research into best practices for end-to-end auditing. This will require enforcing cooperation with researchers and transparency from providers, independence and collaborative opportunities for researchers, as well as expert and well-resourced bodies (including the European Centre for Algorithmic Transparency and the new AI Office). EU regulations increasingly refer to “systemic risks.” We need clarity on how to recognize these, as well as fast and effective enforcement when systems create such risks.

2. Protect people’s fundamental rights in the digital world

Close loopholes for law enforcement bodies and migration authorities that threaten privacy! While the AI Act introduces important steps towards more transparency on the use of high-risk AI, it allows for concerning exceptions. Law enforcement, migration authorities, and national security agencies could exploit these exceptions to enable widespread biometric identification in public spaces. This directly undermines fundamental human rights, including privacy, freedom of speech, and freedom of assembly. Left unchecked, such practices could pave the way for mass surveillance across Europe.

Fight algorithmic bias and discrimination! AI-based discrimination poses a threat to justice. We need effective guardrails and a comprehensive action plan for just AI that go beyond mere technical approaches. This includes strengthening access to legal redress mechanisms and shifting the burden of proof away from individuals affected. Given the prominent role that AI standardization plays in defining approaches to scrutinize AI systems for bias, the EU should strengthen legal requirements on democratic participation in standard-setting, including standard-setting on gender equality and non-discrimination. We must empower people and hold developers accountable to ensure AI benefits everyone.

3. Hold tech companies accountable

Counter environmental harms in the tech sector! Generative AI and other algorithmic technologies consume huge amounts of energy, water, and rare earth minerals. They enable business models that contribute significantly to exacerbating the climate crisis. Neglecting the environmental cost of this software is unsustainable and burdens society. Transparency is the first essential step towards developing meaningful regulation. Companies should be required to provide information about their technologies’ climate impact, including water use, greenhouse gas emissions, and mineral extraction. This transparency should apply to the entire AI production and deployment process. EU policymakers must incentivize and penalize the AI industry, demand comprehensive reporting, set ambitious sustainability benchmarks, mandate sustainable procurement, and establish independent oversight.

Protect workers’ rights along the AI value chain! The wealth created from technology is unevenly distributed: Many tech workers around the world, such as content moderators and data labelers, face exploitation, while the companies they work for reap large profits. We need regulation that protects these workers in the EU and beyond. In addition, the growing and often top-down introduction of algorithmic management practices risks undermining achievements in labor law. Left unchecked, the poor working conditions we see, for example, in the gig economy may spread across other sectors and erode hard-won labor protections and co-determination.

Curb tech monopolies! Microsoft, Google, Amazon, and other tech giants are spending billions to control the world's tech infrastructure. This market concentration is dangerous for democracy and social cohesion. It must be stopped. A thorough review of current competition laws is necessary. New regulations should promote a more diverse and open AI landscape. This includes breaking up monopolies, fostering a level playing field, and ensuring contractual fairness along the AI value chain.

Read more on our policy & advocacy work on ADM in the public sphere.