Tired of pandemic baking? Looking for things to do while you pretend to pay attention in Zoom meetings? Here’s a reminder of #wargaming resources around the web. [1/6]
Visit PAXSims, the de facto front page for the #wargaming community. It’s run by @RexBrynen at McGill University and has the latest and greatest. Today’s post discusses actual research showing that miniatures wargamers are not anti-social misfits. [2/6] paxsims.wordpress.com
Check out the King’s College London Wargaming Network (@kclwargaming) lectures on YouTube. Except for my lecture. Don’t watch that one. It’s embarrassing and my family says it was just too boring to hold their attention. [3/6] youtube.com/channel/UCgHWL…
Surf @BoardGameGeek’s website, which has the authoritative list of commercial boardgames. Absolutely, this counts as research for #wargaming. You’re welcome. [4/6] boardgamegeek.com
Consider registering for Games for Change, which is virtual and free this year. Because you’re not a TOTAL Luddite. You know digital games exist! Why do people insist ALL wargamers are afraid of technology?? [5/6] hopin.to/events/g4c2020
Today we’ll talk about how to achieve negative learning in wargames. Because who says more #wargaming is always good? [1/8]
Negative learning is “the acquisition of erroneous conceptual and procedural knowledge and understanding from unwarranted information, which leads to faulty mental models and reasoning.” Considerable effort is then needed to “unlearn such misconceptions or biases.” [2/8] plato.uni-mainz.de/definition/
One way to achieve negative learning in your wargame is to teach participants how to fight the last war, even though you know future conflicts will be different. E.g., bring in adjudicators who explain to junior officers how they did it in Iraq. [3/8]
Check out Hedgemony. Marine Corps University has ordered three copies and will be #wargaming defense strategy in their professional military education (PME). The rule book, player guide, and abbreviations & glossary are also available as free downloads. [1/6] rand.org/pubs/tools/TL3…
Hedgemony plays at the strategic level, while many other games stay at the operational level. This means many operational & tactical details are abstracted out, but the point is to be able to talk about deterrence and bigger-picture issues. [2/6]
This game should also be helpful to civilian national security students who may not know much about the Pentagon and what it does. It spells out many key terms and choices that defense planners face. Researching country roles to play the game is also educational. [3/6]
An important milestone towards using AI in the force: DARPA’s AI beats a human USAF pilot in a simulated dogfight. There are caveats to be sure, but this is consistent with the unmanned future many have predicted. [1/9] breakingdefense.com/2020/08/ai-sla…
What questions does this already raise for deterrence and escalation? One is the potential deterrent value of simply having such tests and publicizing them. The “Hollywood effect” means many U.S. adversaries overestimate U.S. tech dominance. [2/9]
This means that years or even decades before unmanned fighters are actually fielded in U.S. forces, AI could help increase U.S. deterrence by making others more risk averse about confronting U.S. capabilities. This is especially true because you can’t “see” AI. [3/9]
All right everyone, it seems we need to revisit the dangers of launching a large & complex computer modeling effort. Don’t do it without first understanding the phenomenon and asking whether a computer model is even appropriate for it. [1/8]
During Iraq & Afghanistan, we built complex computer models of counterinsurgency. Common sequence of events: 1) quickly build a prototype as proof of concept without actual experts, 2) use it in a wargame, 3) ask for more money, 4) assume you’ll add real social science later. [2/8]
You see the cascading problems with this. Starting without validated social science in your computer model can create negative learning. Hiring programmers because you’re excited about having a computer model should not come before understanding the problem. [3/8]
War is an extension of politics. But let’s be honest: wargames can also be an extension of politics. So today we’ll talk about how to deliberately mislead with wargames. [1/8]
Let’s say you want to advance funding for a magical widget (MW). One way to mislead is to deliberately conflate the players’ learning how to play the game with the benefit from said MW. [2/8]
You do this innocuously enough: 1) Make sure the players haven’t seen the game before. 2) Run a “baseline” wargame with current CONOPS and equipment. 3) Give the blue cell the MW and rerun the game. 4) Attribute any blue improvement entirely to the MW. [3/8]
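A minimal sketch (mine, not from the thread), using made-up toy numbers, of why this design misleads: if players naturally score better on any second playthrough, a baseline-then-rerun comparison folds that learning gain into the supposed MW benefit, while a design that compares two reruns (with and without the MW) isolates the widget’s actual effect. The function names and gain values below are illustrative assumptions, not anyone’s real wargame data.

```python
# Toy Monte Carlo: a learning effect masquerading as a "magical widget" (MW)
# benefit when the same players replay the game. All numbers are made up.
import random

random.seed(0)

LEARNING_GAIN = 10   # assumed: players score better on any second playthrough
WIDGET_GAIN = 2      # assumed: the MW's true (small) contribution
NOISE = 5            # random game-to-game variation

def play(first_time: bool, has_widget: bool) -> float:
    """Blue cell score for one game under the assumed toy model."""
    score = 50.0
    if not first_time:
        score += LEARNING_GAIN          # familiarity with the rules
    if has_widget:
        score += WIDGET_GAIN            # actual widget effect
    return score + random.gauss(0, NOISE)

trials = 10_000
confounded = []   # misleading design: baseline, then rerun WITH the widget
controlled = []   # fairer design: compare two reruns, with vs. without widget

for _ in range(trials):
    baseline = play(first_time=True, has_widget=False)
    rerun_with_mw = play(first_time=False, has_widget=True)
    rerun_without_mw = play(first_time=False, has_widget=False)
    confounded.append(rerun_with_mw - baseline)          # learning + widget
    controlled.append(rerun_with_mw - rerun_without_mw)  # widget only

print(f"Apparent MW benefit (confounded): {sum(confounded)/trials:.1f}")
print(f"Estimated MW benefit (controlled): {sum(controlled)/trials:.1f}")
```

Under these toy assumptions the confounded design reports roughly the learning gain plus the widget gain (~12 points), while the controlled comparison recovers the widget’s true ~2-point effect.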
I’m running the working group at this year’s Connections Wargaming Conference on Enhancing Wargaming through AI/ML. (AI = artificial intelligence, ML = machine learning). Here are some thoughts about the topic. [1/9]
Why this topic to begin with? DoD’s focus on introducing AI into operating forces & concepts + DoD’s renewed emphasis on #wargaming makes this a natural subject of interest. Currently there is very little ML in defense wargames. [2/9]
Two distinct problems sit within this topic of AI and wargaming: 1) representing AI in wargames, and 2) using AI in defense wargames. Entire articles could be written about #1, but our WG will deal with #2. [3/9]