Check out Hedgemony. Marine Corps University has ordered three copies and will be #wargaming defense strategy in their professional military education (PME). The rule book, player guide, and abbreviations & glossary are also free as downloads. [1/6] rand.org/pubs/tools/TL3…
Hedgemony gets to a strategic level while many other games stay at the operational level. This means many operational & tactical details are abstracted out, but the point is to be able to talk about deterrence and bigger picture issues. [2/6]
This game should also be helpful to civilian national security students who may not know much about the Pentagon and what it does. It spells out many key terms and choices that defense planners face. Researching country roles to play the game is also educational. [3/6]
Central to this game is the idea that even the Pentagon cannot spend money on everything it wants. Are you going to modernize your forces, invest in future technology, increase readiness, have forces forward? You don’t get to do it all. You don’t get to go everywhere at once. [4/6]
Hedgemony has thinking adversaries (and allies) with their own goals who are making their own strategic choices. Many an armchair DC strategist can wax loquacious about what the Pentagon should do, but what will the rest of the world do in response? [5/6]
A shout out also to @thegamecrafter for the quality game components in Hedgemony. A good resource for hobby and professional wargamers alike. [6/6] thegamecrafter.com
Today we’ll talk about how to achieve negative learning in wargames. Because who says more #wargaming is always good? [1/8]
Negative learning is “the acquisition of erroneous conceptual and procedural knowledge and understanding from unwarranted information, which leads to faulty mental models and reasoning.” A lot is needed to “unlearn such misconceptions or biases.” [2/8] plato.uni-mainz.de/definition/
One way to achieve negative learning in your wargame is to teach participants how to fight the last war, even though you know future conflicts will be different. E.g., bring in adjudicators who explain to junior officers how they did it in Iraq. [3/8]
An important milestone towards using AI in the force: DARPA’s AI beats a human USAF pilot in a dogfight. There are caveats to be sure, but this is consistent with the unmanned future many have predicted. [1/9] breakingdefense.com/2020/08/ai-sla…
What questions does this already raise for deterrence and escalation? One is the potential deterrent value of simply having such tests and publicizing them. The “Hollywood effect” means many U.S. adversaries overestimate U.S. tech dominance. [2/9]
This means that years or even decades ahead of practically implementing unmanned fighters into U.S. forces, AI could help increase U.S. deterrence by making others more risk averse about confronting U.S. capabilities. This is especially true because you can’t “see” AI. [3/9]
All right everyone, it seems we need to revisit the dangers of launching a large & complex computer modeling effort. Don’t do it without first understanding the phenomena and asking whether a computer model is even appropriate for them. [1/8]
During Iraq & Afghanistan, we built complex computer models on counterinsurgency. Common sequence of events: 1) quickly build prototype as proof of concept without actual experts, 2) use it in a wargame, 3) ask for more money, 4) assume you’ll add real social science later. [2/8]
You see the cascading problems with this. Starting without validated social science in your computer model can create negative learning. Hiring programmers because you’re excited about having a computer model should not come before understanding the problem. [3/8]
War is an extension of politics. But let’s be honest: wargames can also be an extension of politics. So today we’ll talk about how to deliberately mislead with wargames. [1/8]
Let’s say you want to advance funding for a magical widget (MW). One way to mislead is to deliberately conflate the players learning how to play the game with the benefit from said MW. [2/8]
You do this innocuously enough: 1) Make sure the players haven’t seen the game before. 2) Run a “baseline” wargame with current CONOPS and equipment. 3) Give the blue cell the MW and rerun the game. 4) Attribute any blue improvement entirely to the MW. [3/8]
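The confound in steps 1–4 can be shown with a toy simulation (entirely hypothetical — the skill numbers, noise, and scoring are invented for illustration, not drawn from any real wargame): even when the widget contributes nothing, players’ learning between game one and game two produces an apparent “widget benefit.”

```python
# Toy model of the flawed baseline-vs-widget comparison described above.
# All parameters are invented for illustration.
import random

random.seed(0)

LEARNING_GAIN = 0.3   # skill players gain just from one play-through
WIDGET_EFFECT = 0.0   # the "magical widget" does nothing in this model

def play_game(skill, widget_bonus=0.0):
    """Blue cell's score: player skill + widget effect + noise."""
    return skill + widget_bonus + random.gauss(0, 0.05)

# Flawed design: the baseline is also the players' first game ever.
baseline = play_game(skill=0.5)
rematch = play_game(skill=0.5 + LEARNING_GAIN, widget_bonus=WIDGET_EFFECT)
print(f"apparent widget benefit: {rematch - baseline:+.2f}")  # mostly learning

# Sounder design: let players learn the game first, then compare.
trained_no_widget = play_game(skill=0.5 + LEARNING_GAIN)
trained_widget = play_game(skill=0.5 + LEARNING_GAIN, widget_bonus=WIDGET_EFFECT)
print(f"controlled benefit:      {trained_widget - trained_no_widget:+.2f}")
```

The fix is the usual one from experimental design: hold player experience constant across the two runs, so the only difference between them is the widget itself.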
I’m running the working group at this year’s Connections Wargaming Conference on Enhancing Wargaming through AI/ML. (AI = artificial intelligence, ML = machine learning). Here are some thoughts about the topic. [1/9]
Why this topic to begin with? DoD’s focus on introducing AI into operating forces & concepts + DoD’s renewed emphasis on #wargaming makes this a natural subject of interest. Currently there is very little ML in defense wargames. [2/9]
This topic contains two distinct problems: 1) representing AI capabilities within a wargame’s scenario, and 2) using AI as a tool to design, run, or support defense wargames. Entire articles could be written about #1, but our WG will deal with #2. [3/9]
Wonder how they wargamed logistics in the old days? Here are the instructions for Monopologs, a 1957 logistics game that RAND ran for the Air Force. Report written by Jean Renshaw and Annette Heuston. [1/6] rand.org/pubs/research_…
Monopologs was developed by the RAND Logistics Department. The game system is a simple simulation of one depot and five bases. Players practice inventory management and gain insight into inventory control problems. [2/6]
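To give a feel for the kind of inventory-control problem Monopologs put in front of players, here is a minimal sketch of a one-depot, five-base system. The mechanics below (reorder points, order quantities, demand ranges) are invented for illustration — Monopologs’ actual rules are in the RAND report.

```python
# Hypothetical one-depot, five-base inventory sketch (rules invented for
# illustration; not Monopologs' actual mechanics). The player's problem:
# pick a reorder point that balances stockouts against shipping workload.
import random

random.seed(1)

def simulate(reorder_point, order_qty=20, days=30):
    depot = 10_000                 # assume an amply stocked depot
    bases = [10] * 5               # starting stock at each of five bases
    stockouts = shipments = 0
    for _ in range(days):
        for i in range(len(bases)):
            demand = random.randint(0, 3)          # daily demand at a base
            stockouts += max(0, demand - bases[i]) # unmet demand
            bases[i] = max(0, bases[i] - demand)
            if bases[i] <= reorder_point:          # base orders from depot
                bases[i] += order_qty
                depot -= order_qty
                shipments += 1
    return stockouts, shipments

# A low reorder point risks stockouts; a high one ships more often.
print("reorder at 0:", simulate(reorder_point=0))
print("reorder at 8:", simulate(reorder_point=8))
```

Even this crude version surfaces the core tension players practiced in the game: safety stock costs money and depot throughput, but running lean means empty shelves when demand spikes.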
The 1957 game Baselogs was next. It deals with Air Force base interactions between squadron operations, maintenance, and supply. Report written by Leon Gainen, Robert Levine, and William McGlothlin. [3/6] rand.org/pubs/research_…