A Strategy Page Article, Trevor Dupuy, and Validation

An article appeared this week on the Strategy Page which, while a little rambling and unfocused, does hit on a few points of importance to us. The article is here: Murphy’s Law: What is Real on the Battlefield. I am not sure of the author. But let me make a few rambling and unfocused comments of my own on the article.

First they name-checked Trevor Dupuy. As they note: “Some post World War II historians had noted and measured the qualitative differences but their results were not widely recognized. One notable practitioner of this was military historian and World War II artillery officer Trevor Dupuy.”

“Not widely recognized” is something of an understatement. In many cases his work was actively resisted, subjected to considerable criticism (some of it outright false), and often arrogantly dismissed out of hand by people who apparently knew better. This is the reason four chapters of my book War by Numbers focus on measuring human factors.

I have never understood the arguments from combat analysts and modelers who did not want to measure the qualitative differences between military forces. I would welcome anyone who does not think this is useful to make that argument on this blog, or perhaps at our Historical Analysis conference. The fact of the matter is that Trevor Dupuy’s work was underfunded and under-resourced throughout the 33 years he pursued this research. His companies were always on the verge of extinction, kept going only by his force of will.
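To make concrete what "measuring the qualitative differences" can mean at its simplest, here is a minimal sketch in Python of one crude way to express relative effectiveness: compare the casualty exchange ratio to the force ratio. The numbers and function names are hypothetical, and this is emphatically not the QJM/TNDM formulation, which factors in posture, terrain, weather, and weapons effects, among other variables.

```python
# A minimal, illustrative sketch (not the QJM/TNDM formulation) of one way a
# qualitative difference between two forces can be expressed numerically:
# compare the casualty exchange ratio to the force ratio.
# All figures below are hypothetical.

def exchange_ratio(attacker_casualties: float, defender_casualties: float) -> float:
    """Casualties inflicted on the defender per attacker casualty."""
    return defender_casualties / attacker_casualties

def relative_effectiveness(attacker_strength: float, defender_strength: float,
                           attacker_casualties: float, defender_casualties: float) -> float:
    """Exchange ratio normalized by the force ratio.

    A value above 1.0 suggests the attacker performed better than raw
    numbers alone would indicate; below 1.0, worse. This is a crude
    stand-in for the far more elaborate adjustments a model like the
    QJM/TNDM applies.
    """
    force_ratio = attacker_strength / defender_strength
    return exchange_ratio(attacker_casualties, defender_casualties) / force_ratio

if __name__ == "__main__":
    # Hypothetical engagement: 10,000 attackers vs. 8,000 defenders,
    # 400 attacker casualties vs. 900 defender casualties.
    print(round(relative_effectiveness(10_000, 8_000, 400, 900), 2))  # ~1.8
```

Even a toy calculation like this makes the point: once force sizes are accounted for, the residual difference in performance has to come from somewhere, and that somewhere is largely human factors.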

Second, they discussed validation and the failure of the U.S. DOD to take it into account. As they put it: “But, in general, validation was not a high priority and avoided as much as possible during peacetime.” They discuss this as the case in the 1970s, but it was also true in the 1980s, the 1990s, and into the current century. In my first meeting at CAA in early 1987, a group of analysts showed up for the purpose of getting the Ardennes Campaign Simulation Data Base (ACSDB) cancelled. There was open hostility in the analytical community at that time to even assembling the data needed to conduct a validation. We have discussed the need for validation a few times before, in Chapters 18 and 19 of War by Numbers and in these posts on Mystics & Statistics (dupuyinstitute.org): Summation of our Validation Posts, TDI Friday Read: Engaging The Phalanx, TDI Friday Read: Battalion-Level Combat Model Validation, and No Action on Validation In the 2020 National Defense Act Authorization.

Nominally, I am something of a validation expert. I have created four-plus large validation databases: the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Battle of Britain Data Base (primarily done by Richard Anderson), and the expansion of the various DuWar databases. I have also actually conducted three validations. These are the fully documented battalion-level validation done for the TNDM (see International TNDM Newsletters, Volume I, numbers 2 – 6 at http://www.dupuyinstitute.org/tdipub4.htm), the fully documented test of various models done in our report CE-1 Casualty Estimation Methodologies Study (May 2005) at http://www.dupuyinstitute.org/tdipub3.htm, and the fully documented test of division- and corps-level combat at Kursk using the TNDM (see Chapter 19 of War by Numbers and reports FCS-1 and FCS-2 here: http://www.dupuyinstitute.org/tdipub3.htm). That said, no one in DOD has ever invited me to discuss validation. I don’t think they would really agree with what I would have to say. On the other hand, if there have been some solid, documented validations conducted recently by DOD, then I would certainly invite them to post about them on our blog or present them at our Historical Analysis conference. There has been a tendency for certain agencies to claim they have done VV&A and sensitivity tests, but one never seems to find a detailed description of the validation they have actually conducted.
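For readers unfamiliar with what a validation involves at its core, here is a minimal sketch: line the model’s predictions up against the historical record, engagement by engagement, and summarize the accuracy and bias of the result. The engagement names and figures below are invented placeholders, not data from the ACSDB, the Kursk Data Base, or the Battle of Britain Data Base, and the documented validations cited above compare more than a single casualty total.

```python
# A minimal sketch of the core of a model validation run: compare a model's
# predicted outcomes against the historical record, engagement by engagement,
# and summarize the error. All engagement names and values are hypothetical
# placeholders, not figures from any TDI database.

from statistics import mean

# (engagement name, historical casualties, model-predicted casualties)
ENGAGEMENTS = [
    ("Engagement A", 520, 610),
    ("Engagement B", 1340, 1100),
    ("Engagement C", 75, 180),
]

def percentage_error(actual: float, predicted: float) -> float:
    """Signed error of the prediction as a percentage of the historical value."""
    return 100.0 * (predicted - actual) / actual

def summarize(engagements) -> None:
    errors = [percentage_error(actual, predicted) for _, actual, predicted in engagements]
    for (name, actual, predicted), err in zip(engagements, errors):
        print(f"{name}: actual {actual}, predicted {predicted}, error {err:+.1f}%")
    # Mean absolute error indicates accuracy; mean signed error indicates bias
    # (whether the model tends to over- or under-predict).
    print(f"Mean absolute error: {mean(abs(e) for e in errors):.1f}%")
    print(f"Mean signed error (bias): {mean(errors):+.1f}%")

if __name__ == "__main__":
    summarize(ENGAGEMENTS)
```

The hard part, of course, is not the arithmetic but assembling the historical data to compare against, which is exactly why the validation databases listed above took years to build.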

I will not be specifically discussing these databases or validation at the Historical Analysis conference, but my discussion of the subject can be found in War by Numbers and in over 40 posts on this blog.

Christopher A. Lawrence

Christopher A. Lawrence is a professional historian and military analyst. He is the Executive Director and President of The Dupuy Institute, an organization dedicated to scholarly research and objective analysis of historical data related to armed conflict and the resolution of armed conflict. The Dupuy Institute provides independent, historically-based analyses of lessons learned from modern military experience.

Mr. Lawrence was the program manager for the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Modern Insurgency Spread Sheets and for a number of other smaller combat data bases. He has participated in casualty estimation studies (including estimates for Bosnia and Iraq) and studies of air campaign modeling, enemy prisoner of war capture rates, medium weight armor, urban warfare, situational awareness, counterinsurgency and other subjects for the U.S. Army, the Defense Department, the Joint Staff and the U.S. Air Force. He has also directed a number of studies related to the military impact of banning antipersonnel mines for the Joint Staff, Los Alamos National Laboratory and the Vietnam Veterans of America Foundation.

His published works include papers and monographs for the Congressional Office of Technology Assessment and the Vietnam Veterans of America Foundation, in addition to over 40 articles written for limited-distribution newsletters and over 60 analytical reports prepared for the Defense Department. He is the author of Kursk: The Battle of Prokhorovka (Aberdeen Books, Sheridan, CO, 2015), America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Casemate Publishers, Philadelphia & Oxford, 2015), War by Numbers: Understanding Conventional Combat (Potomac Books, Lincoln, NE, 2017) and The Battle of Prokhorovka (Stackpole Books, Guilford, CT, 2019).

Mr. Lawrence lives in northern Virginia, near Washington, D.C., with his wife and son.


3 Comments

  1. Fascinating stuff. This prompts a question that’s been in the back of my mind since your “A Second Independent Effort to use the QJM/TNDM to Analyze the War in Ukraine” post. It seems to me that North American perspectives on Russian threats in Eastern Europe are heavily influenced by the 2015 RAND report “Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics.” (Also by the 2015 DRAFT Karber Report. I’ve never seen a final version, but plenty of references to the draft – rarely with any corroborating references, but regardless, it offers bogeyman anecdotes that have been easy to latch onto, whether they’re actually accurate or not.) Anyway, RAND uses some kind of wargame modelling, which gets to my question: Do you know if RAND’s methods are at all similar to your Tactical Numerical Deterministic Model (TNDM)? If not, have you ever run something similar to RAND concerning the Baltics?

  2. Not long ago, I asked Dave Ochmanek about the wargaming methodology RAND used back in 2016 to game a Russian invasion of the Baltics (https://www.rand.org/content/dam/rand/pubs/research_reports/RR1200/RR1253/RAND_RR1253.pdf). At the time, they said they would provide more detail on it, but never got around to it, I guess. Anyway, Ochmanek said that the game used was “home-brewed,” using the standard board game methodology, with counters, hexes, dice rolls, etc. The CRTs were all developed by in-house SMEs for each type of combat, ground, air, space, etc., using classified sources.

    As far as combat modeling goes, the Army continues to use JICM for campaign analysis, but the Navy, Marines, Air Force, and DOD CAPE use STORM. Both JICM and STORM use COSAGE 2.0 for ground combat attrition calculation.
