C-WAM 3

Now, in the article by Michael Peck introducing C-WAM, there was a quote that got our attention:

“We tell everybody: Don’t focus on the various tactical outcomes,” Mahoney says. “We know they are wrong. They are just approximations. But they are good enough to say that at the operational level, ‘This is a good idea. This might work. That is a bad idea. Don’t do that.’”

Source: https://www.govtechworks.com/how-a-board-game-helps-dod-win-real-battles/#gs.ifXPm5M

I am sorry, but this line of argument has always bothered me.

While I understand that no model is perfect, perfection is still the goal modelers should strive for. If the model is a poor representation of combat, or of parts of combat, then what are you teaching the user? If the users are professional military, is this negative training? Are you teaching them an incorrect understanding of combat? Will that understanding only be corrected after real combat and the loss of American lives? This is not being melodramatic… you fight as you train.

We have seen the argument made elsewhere that some models are only being used for training, so…

I would like to again bring your attention to the “base of sand” problem:

https://dupuyinstitute.dreamhosters.com/2017/04/10/wargaming-multi-domain-battle-the-base-of-sand-problem/

As always, it seems that making the models more accurate takes lower precedence than whatever else is going on. Validating models tends never to get done. JICM has never been validated. COSAGE and ATCAL, as used in JICM, have never been validated. I don’t think C-WAM has ever been validated.

Just to be annoyingly preachy, I would like to again bring your attention to the issue of validation:

Military History and Validation of Combat Models

 

 


2 Comments

  1. Chris,

    I understand your post, but prediction is not the reason most Army simulations exist. They exist as a vehicle to enable students/staffs to maintain and improve readiness. They are used to let units improve their staff skills, SOPs, reporting procedures, and planning, specifically in the MDMP. As such, in terms of achieving a learning objective, it doesn’t really matter if you win a battle in a simulation or at the NTC so long as you identify mistakes and strong points and seek to minimize the former and maintain the latter. The US Army did pretty well in Desert Storm, defying most predictions, but many of those same units did not do so well at the National Training Center a year later in spite of their combat experiences, some leadership continuity, and training both in the field and in simulation.

    As the National Simulations Center Training with Simulations Handbook puts it:

    “C2 training simulations should not be employed to analyze plans in specific terms of outcome. They can be used in a training environment, however, to assist in learning about generalizations on maneuver and logistics, but they should not be relied on to provide specifics on how much and what type of ‘widgets’ to use in a particular scenario or operational situation.

    C2 training simulations should not be relied upon to validate war plans and they must not be used for that purpose. C2 training simulations simulate the real world and provide a simulation of the real-world conditions of approximately 80-85%, depending on which simulation is used. However, this 80-85% simulation is not 100% and should not be seen as an exact replication of the real-world; it is not. However, the 80-85% “solution” that C2 training simulations represent means that they can perform their assistance and support role to C2 elements very well if the exercise is designed around legitimate training objectives.

    It is a mistake, repeat mistake, and a misuse of these simulations to attempt to validate war plans. The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of the units) and a myriad of human and environmental impacts are not modeled in sufficient detail to provide the types of plans analysis usually associated with warfighting decisions used to validate war plans.”

    —Chapter 3, page 13. Written in approximately 1997-1998. (This is from an old copy I have. I don’t know if it has been updated yet.)

    Thus, training simulations have a different purpose than analytical simulations.

    As I am sure you know, prediction is hard. If it were easy, the National Hurricane Center would not need to run large ensembles of simulations for hurricane prediction, and it is only as a storm persists and nears land that they are able to narrow the “cone of death” to fairly accurate landfall predictions. But they frequently are unable to predict last-minute turns, prompted by some “burble” in the ocean or air, that cause a storm that looks like it is going to go up the west coast of Florida to instead go up the center and exit on the east coast.

    And the storm is not purposefully trying to thwart the forecast, whereas in a battle prediction we may have even less knowledge of enemy doctrine, command personalities, and even weapon capabilities than the NHC has of the weather.

    As I would say to commanders of units I evaluated, “Winning is more fun, but you learn more when you lose.”

    This does not mean the analytical side of simulations cannot strive for more precision, and maybe one day the two will converge, but I don’t think that day is here yet.

  2. Mike,

    See the blog post imaginatively called “response” for my response. I figured it was too important a discussion to leave buried in the comments. Also, I would recommend that you take a look at Chapter 18 (Modeling Warfare) in my book War by Numbers.
