Risks and Dangers Coming With AI

Artificial Intelligence is the biggest technological revolution since the internet. However, we must avoid new technological possibilities that lead to people being disadvantaged. Read our overview of how fundamental rights, democracy, economic competition, and the environment might be impaired by AI.

Illustration of the dangers and risks of AI: a person on one side, an AI network on the other, an exclamation mark in the middle.
Yasmin Dwiputri & Data Hazards Project / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/


AI and Democracy: Social Media Algorithms and Elections

Good news first: AI-generated “fake news” and other misleading content have, so far, hardly influenced elections; many factors other than online content shape voting behavior. Generative AI, however, is already being used to deceive voters. So-called deepfakes imitate politicians’ voices so well that one can hardly tell the difference, and deepfake images can look astoundingly authentic. Both can be used to discredit individual politicians with false statements and made-up appearances. In several investigations, AlgorithmWatch has found that AI chatbots provide false and inaccurate information, for example about elections, and even invent political scandals and present them as fact.

Mockup of the printed LLM report brochure.

We have investigated Large Language Models (LLMs) in the context of elections at several points in time. In our third research report, we explain how AI chatbots affect the information available to voters.

The public is exposed to disinformation campaigns designed to exploit social media algorithms in order to manipulate opinions. Content that triggers emotions generally generates more interactions, so polarizing content is displayed more frequently and reaches more users. This can shape the public’s view on certain topics. Social media platforms profit from polarization: the longer users stay active on the platforms, the more advertising they are shown.

AlgorithmWatch takes a clear stance on the spread of misinformation and polarizing content on social media platforms.

Please note: Although the term “fake news” describes deliberately falsified and misleading news, we do not use it since it is often misused for libel, slander, and propaganda.

Mark Zuckerberg walks past the words “Fighting Fake News,” in vintage style.

AI and algorithms themselves are not a threat to democracy. What is dangerous is the social polarization that AI and algorithms reinforce. Users are often only shown information that matches content they previously preferred, while differing content is filtered out of their feeds. This is what the term “filter bubble” refers to.
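The mechanism described above can be sketched with a toy ranking function: posts on topics the user has engaged with before are ranked higher, so the feed narrows toward familiar content over time. This is a deliberately simplified illustration; no real platform’s ranking works this way, and all names and figures below are made up.

```python
# Toy illustration of engagement-driven ranking: items similar to what a user
# previously engaged with float to the top, so the feed narrows over time.
# Entirely hypothetical; real ranking systems are far more complex.

from collections import Counter

def rank_feed(posts: list[dict], liked_topics: Counter) -> list[dict]:
    """Order posts by how often the user engaged with their topic before."""
    return sorted(posts, key=lambda p: liked_topics[p["topic"]], reverse=True)

# A user who almost exclusively interacted with political content
history = Counter({"politics": 9, "sports": 1})
posts = [
    {"id": 1, "topic": "science"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "sports"},
]

# Political posts win; unfamiliar topics sink to the bottom of the feed
print([p["id"] for p in rank_feed(posts, history)])  # [2, 3, 1]
```

Run repeatedly with the user’s clicks fed back into `history`, such a loop only ever amplifies what is already preferred, which is the core of the filter-bubble dynamic.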

To prevent AI from becoming a tool for polarization and manipulation, it must be effectively controlled. One measure to this end is the EU’s AI Act, which bans AI systems that are very likely to cause significant harm, such as systems designed to manipulate or deceive people.

AI and Fundamental Rights: Surveillance and Discrimination

AI systems are prone to error and can even discriminate. The infamous Dutch child benefit scandal led to many innocent people being treated as fraudsters and, as a result, financially ruined. Face recognition systems are another example: they have caused people to be unjustifiably accused of crimes and imprisoned.

However, the problem goes beyond such errors. Using AI to identify and monitor people in public spaces around the clock not only violates their right to privacy. AI surveillance can also deter people from taking part in demonstrations or from visiting places that reveal, for example, their political views or sexual orientation, thereby curtailing fundamental rights such as the right to freedom of expression and the right to freedom of assembly.

Face recognition is supposed to provide more security, but instead such systems put fundamental rights at risk, discriminate, and can even pave the way to mass surveillance.

Repressive governments in particular rely on this deterrent effect, as recently seen in Argentina. Biometric mass surveillance, such as AI face recognition, particularly affects already disadvantaged individuals and groups as well as political activists. In Russia, police arrested people who had attended the funeral of dissident Alexei Navalny. Face recognition software had identified them by analyzing images of the funeral. The images circulated on social media or stemmed from surveillance camera footage.

AI discrimination results from AI being used incorrectly. For example, the technology can illegally exclude people in recruitment or loan procedures because of their ethnicity, gender, or age. AI evaluates data, but this does not necessarily make automated decision-making fact-based, objective, and precise.

We are researching where and how algorithmic discrimination occurs. Do you have evidence for algorithmic discrimination having taken place? Then tell us about it.

AI and the Environment: Resource and Energy Consumption

Computing for AI applications consumes energy. Finding usable patterns in data sets during AI training, and then checking whether the AI’s predictions hold up during operation, is computationally very demanding. The required server hardware in data centers is built with minerals used in batteries and microprocessors, and the conditions under which people have to work to extract these minerals are horrific. The electronic waste the servers eventually turn into is often not recyclable and ends up in landfills in Asia, where people suffer the environmental consequences.

It is commonly known that data centers, as well as the production and operation of hardware, contribute significantly to global carbon dioxide emissions. At the same time, information on the actual energy consumption of AI and the resulting emissions is scarce, which makes it harder to find regulatory and technological solutions to reduce them.
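Because reported figures are scarce, AI’s energy footprint is usually approximated with rough estimates. As an illustration only, here is a minimal sketch of such a back-of-envelope calculation; every number in it (GPU count, power draw, runtime, grid carbon intensity, PUE) is an assumed placeholder, not a measured value.

```python
# Back-of-envelope estimate of training energy and emissions.
# All figures below are hypothetical placeholders, not measurements.

def training_energy_kwh(num_gpus: int, avg_power_watts: float, hours: float) -> float:
    """Total electrical energy drawn by the accelerators, in kWh."""
    return num_gpus * avg_power_watts * hours / 1000.0

def emissions_kg_co2(energy_kwh: float, grid_kg_per_kwh: float, pue: float = 1.2) -> float:
    """CO2 emissions, scaled by the data center's power usage effectiveness (PUE)."""
    return energy_kwh * pue * grid_kg_per_kwh

# Assumed example: 1,000 GPUs at 400 W for 30 days, grid at 0.4 kg CO2/kWh
energy = training_energy_kwh(1000, 400.0, 30 * 24)
print(f"{energy:,.0f} kWh")                             # 288,000 kWh
print(f"{emissions_kg_co2(energy, 0.4):,.0f} kg CO2")   # 138,240 kg CO2
```

Estimates like this cover only the accelerators’ direct draw (scaled by PUE for cooling and overhead); hardware manufacturing, networking, and water use come on top, which is exactly why the lack of disclosed data matters.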

A rock embedded with intricate circuit board patterns, held delicately by pale hands drawn in a ghostly style. The contrast between the rough, metallic mineral and the sleek, artificial circuit board illustrates the relationship between raw natural resources and modern technological development. The hands evoke human involvement in the extraction and manufacturing processes.

In this explainer, we go into more detail about why AI and sustainability do not yet go well together.

When servers operate in data centers, electrical energy is converted into heat. To prevent the servers from overheating, they need to be cooled, often with water. Without millions of liters of fresh water to cool power plants and servers, large AI models could not be trained. The growing demand for AI also leads to an ever-increasing need for water: Google’s water consumption for AI increased by 20 percent between 2021 and 2022, and even doubled in some drought areas. During the same period, Microsoft saw a 34 percent increase in its water consumption for AI training and development.

AI and Economic Competition: Dependency on Big Tech

Google, Amazon, Apple, Meta, and Microsoft: these five US IT companies dominate not only our everyday digital lives but the entire market for digital services. They provide the most widely used operating systems, programs, and online services: the digital infrastructure. Large parts of the economic and state infrastructure are dangerously dependent on these US companies’ products.

Public administration in Germany, for example, has become dependent on Microsoft. Many federal authorities and federal states – with the exception of Schleswig-Holstein – rely on Microsoft products such as Office 365, Azure, or Teams. These services only work reliably as long as Microsoft updates them. Such a situation is prone to a “lock-in effect,” which binds customers to products, services, or providers due to a lack of alternatives or the high costs of switching providers.

“Digital companies have become very powerful in our society. Their interference is not always easy to spot. They form monopolies that create major risks, especially in spheres such as political opinion-forming and critical digital infrastructure. We need more market diversity and real opportunities for providers that are willing to protect users' rights and promote the common good.”

Matthias Spielkamp, AlgorithmWatch’s Managing Director and co-founder

With the five IT giants’ market domination comes great negotiating power: they can dictate prices and conditions for suppliers, which makes it extremely difficult for competitors to develop independent alternatives. At the same time, companies developing AI are successfully presenting their products as a universal solution for everything. The combination of these two developments calls for extra caution: hasty use of AI without verifiable benefits will only deepen the dependency on already overly powerful corporations.


Community Newsletter

Would you like to find out more about the work of AlgorithmWatch and be informed about our campaigns and upcoming events? Then subscribe to our Community Newsletter and stay up to date.
