Where the army does not use AI
Wargames allow military officers to explore scenarios and teach them to make decisions as if they were in combat. Specialists have considered adding AI to these games, but so far they are in no rush to do so.

Kriegsspiel. For over two centuries, military officers have used a method the army is rarely known for: play. Wargaming, however, is nothing like the board game Risk. In a wargame, officers control an army unit in a given scenario. Rules are loose, and an arbiter (or referee) with extensive military knowledge decides the outcome of a player’s actions. (If you want to attempt something similar at home, you can try Neustart, a board game in which you coordinate emergency services during a city-wide power cut. It is used in some military academies.)
Wargames were first developed in Prussia, where they were known as “Kriegsspiele”. The method of simulating war in a game was so effective that outside observers credited the Prussian victories of 1864, 1866 and 1870 to the skills that military officers had acquired while playing. In the 1920s and 1930s, US Navy officers used wargames to imagine what a war in the Pacific would look like. Some historians reckon that when the US entered World War II in late 1941, the military followed plans that had been hatched at the game table. Today, most NATO armies have handbooks and dedicated personnel for wargaming.
Automation. Although it is play in its essence, wargaming serves two specific goals: training cadets in decision-making and delivering insights into which tactics and strategies to adopt in the real world. If AI could make both faster and more efficient, general staffs – and their political authorities – would certainly be interested. However, AI plays a very small role in wargames, as Jan Landsiedel, a wargaming specialist in the Bundeswehr Office for Defence Planning, told me.
There are several reasons. One is that wargames are largely analog. Because a game aims to answer a specific question (for instance, “How would the Japanese go about conquering the Pacific?” for the US Navy of the 1930s), each game is unique and digitization is expensive. More importantly, AI is not good at simulating politics, as a university study from 2020 found. The large language models (LLMs) developed since then have not made software much better at playing open-ended games.
Fun factor. AI is used for small tasks, though, Landsiedel told me – tasks like automating the behavior of specific units on the virtual battlefield or analyzing reports from a game.
When I asked whether cadets or officers turned to ChatGPT during a game in order to cheat, Landsiedel told me that participants do not – and for a good reason: they are having fun, and they take the exercise seriously. In any case, smartphones are forbidden in most settings, as classified information is used.
GPS. Wargames are far from the only area where the military avoids employing AI. Orientation without a routing app, for instance, is a key competency for soldiers and officers alike and part of any military training. Since the full-scale invasion of Ukraine in 2022 and the deployment of GPS spoofing (the manipulation of GPS signals), army units have reintroduced training in navigation without algorithmic help, as an officer in a major NATO military told me.
Wargames and orientation are certainly domain-specific and might not be relevant for all organizations. However, the fact that the military recognizes the limitations of automated tools and, in some cases, forbids them completely, exemplifies how institutions can handle the AI wave. LLMs have some usefulness, but have been shown to make users less skilled and more gullible. Other institutions – in education and health, for instance – should take note.
This is an excerpt from the Automated Society newsletter, a biweekly round-up of news on automated decision-making in Europe. Subscribe here.
