Category: Modeling, Simulation & Wargaming

Summation of our Validation Posts

This extended series of posts about the validation of combat models began with Shawn Woodford’s post on future modeling efforts and the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

This post apparently irked some people at TRADOC, who wrote an article in the December issue of the Phalanx referencing his post and criticizing it. This resulted in the following seven responses from me:

Engaging the Phalanx

Validation

Validating Attrition

Physics-based Aspects of Combat

Historical Demonstrations?

SMEs

Engaging the Phalanx (part 7 of 7)

This was probably overkill… but guys who write 1,662-page books sometimes tend to be a little wordy.

While it is very important to identify a problem, it is also helpful to show the way forward. Therefore, I decided to discuss what data bases were available for validation. After all, I would like to see the modeling and simulation efforts move forward (and right now, they seem to be moving backward). This led to the following nine posts:

Validation Data Bases Available (Ardennes)

Validation Data Bases Available (Kursk)

The Use of the Two Campaign Data Bases

The Battle of Britain Data Base

Battles versus Campaigns (for Validation)

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

Other TDI Data Bases

Other Validation Data Bases

There were also a few other validation issues that had come to mind while I was writing these blog posts, so this led to the following series of three posts:

Face Validation

Validation by Use

Do Training Models Need Validation?

Finally, there were a few other related posts scattered through this rather extended diatribe. These include the following six posts:

Paul Davis (RAND) on Bugaboos

Diddlysquat

TDI Friday Read: Engaging The Phalanx

Combat Adjudication

China and Russia Defeat the USA

Building a Wargamer

That kind of ends this discussion on validation. It kept me busy for a while. Not sure if you were entertained or informed by it. It is time for me to move on to another subject, not that I have figured out yet what that will be.

Building a Wargamer

An interesting article by Elizabeth Bartels of RAND from November 2018, on the War on the Rocks website. Worth reading: Building a Pipeline of Wargaming Talent

Let me highlight a few points:

  1. “On issues ranging from potential conflicts with Russia to the future of transportation and logistics, senior leaders have increasingly turned to wargames to imagine potential futures.”
  2. “The path to becoming a gamer today is modeled on the careers of the last generation of gamers — most often members of the military or defense analysts with strong roots in the hobby gaming community of the 1960s and 1970s.”
    1. My question: Should someone at MORS (Military Operations Research Society) nominate Charles S. Roberts and James F. Dunnigan for the Vance R. Wanner or the Clayton J. Thomas awards? (see: https://www.mors.org/Recognition).
  3. One notes that there is no discussion of the “Base of Sand” problem.
  4. One notes there is no discussion of VV&A (Verification, Validation and Accreditation).
  5. The picture heading her article is of a hex board overlaid by acetate.

Do Training Models Need Validation?

Do we need to validate training models? The argument is that as the model is being used for training (vice analysis), it does not require the rigorous validation that an analytical model would require. In practice, I gather this means they are not validated. It is an argument I encountered after 1997. As such, it is not addressed in my letters to TRADOC in 1996: See http://www.dupuyinstitute.org/pdf/v1n4.pdf

Over time, the modeling and simulation industry has shifted from using models for analysis to using models for training. The use of models for training has exploded, and these efforts certainly employ a large number of software coders. The question is, if the core of the analytical models has not been validated, and in some cases is known to have problems, then what are the models teaching people? To date, I am not aware of any training models that have been validated.

Let us consider the case of JICM. The core of the model’s attrition calculation was Situational Force Scoring (SFS). Its attrition calculator for ground combat is based upon a version of the 3-to-1 rule comparing force ratios to exchange ratios. This is discussed in some depth in my book War by Numbers, Chapter 9, Exchange Ratios. To quote from page 76:

If the RAND version of the 3 to 1 rule is correct, then the data should show a 3 to 1 force ratio and a 3 to 1 casualty exchange ratio. However, there is only one data point that comes close to this out of the 243 points we examined.

That was 243 battles from 1600-1900 in our Battles Data Base (BaDB). We also tested it against our Division Level Engagement Data Base (DLEDB), covering 1904-1991, with the same result. To quote from page 78 of my book:

In the case of the RAND version of the 3 to 1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional exchange ratio) that RAND postulates. In fact it almost looks like the data conspire to leave a noticeable hole at that point.

So, does this create negative learning? If ground operations are such that an attacker ends up losing 3 times as many troops as the defender when attacking at 3-to-1 odds, does this mean that the model is training people not to attack below those odds, and in fact to wait until they have much more favorable odds? The model was/is (I haven’t checked recently) being used at the U.S. Army War College. This is the advanced education institution that most promotable colonels attend before advancing to be a general officer. Is such a model teaching them incorrect relationships, force ratios and combat requirements?
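The test of the RAND version of the rule described above boils down to a simple check over an engagement database: compute each engagement’s force ratio and casualty exchange ratio and count how many land near the postulated 3:1 force ratio / 3:1 exchange ratio crossover. Below is a minimal sketch of that kind of check, not the actual analysis from War by Numbers; the engagement records and field names are hypothetical placeholders.

```python
# Minimal sketch of checking the RAND version of the 3-to-1 rule against an
# engagement database. The records and field names below are hypothetical.

engagements = [
    # attacker strength, defender strength, attacker losses, defender losses (notional)
    {"atk_str": 30000, "def_str": 10000, "atk_cas": 1200, "def_cas": 2400},
    {"atk_str": 15000, "def_str": 12000, "atk_cas": 900,  "def_cas": 1100},
    {"atk_str": 45000, "def_str": 15000, "atk_cas": 2000, "def_cas": 1800},
]

def near(value, target, tolerance=0.25):
    """True if value is within +/- tolerance (as a fraction of target) of target."""
    return abs(value - target) <= tolerance * target

# The RAND postulate implies that at roughly a 3-to-1 force ratio the attacker
# also suffers roughly a 3-to-1 casualty exchange ratio. Count the engagements
# that actually sit near that crossover point.
hits = 0
for e in engagements:
    force_ratio = e["atk_str"] / e["def_str"]
    exchange_ratio = e["atk_cas"] / e["def_cas"]
    if near(force_ratio, 3.0) and near(exchange_ratio, 3.0):
        hits += 1

print(f"{hits} of {len(engagements)} engagements fall near the 3:1/3:1 crossover")
```

Run against real engagement data, a count of this kind is what produces findings like the one data point out of 243 (and one out of 628) noted above.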

You fight as you train. If we are using models to help train people, then it is certainly valid to ask what those models are doing. Are they properly training our soldiers and future commanders? How do we know they are doing this? Have they been validated?

Validation by Use

Sacrobosco, Tractatus de Sphaera (1550 AD)

Another argument I have heard over the decades is that models are validated by use. Apparently the argument is that these models have been used for so long, and so many people have worked with their outputs, that they must be fine. I saw this argument made in writing by a senior army official in 1997, in response to a letter addressing validation that we encouraged TRADOC to send out:

See: http://www.dupuyinstitute.org/pdf/v1n4.pdf

I doubt that there is any regulation discussing “validation by use,” and I doubt anyone has ever defended this idea in a published paper. Still, it is an argument that I have heard used far more than once or twice.

Now, part of the problem is that some of these models have been around for a few decades. For example, COSAGE, which forms the core of some of the models used by CAA, first came into existence in 1969. They are using a 50-year-old, repeatedly updated model to model modern warfare. My father worked with this model. RAND’s JICM (Joint Integrated Contingency Model) dates back to the 1980s, so it is at least 30 years old. The irony is that some people argue that one should not use historical warfare examples to validate models of modern warfare, yet these models now have a considerable legacy of their own.

From a practical point of view, it means that the people who originally designed and developed the model have long since retired. In many cases, the people who intimately knew the inner workings of the model have also retired and have not really been replaced. Some of these models have become “black boxes” where the users do not really know the details of how the models calculate their results. So suddenly, validation by use seems like a reasonable argument, because these models pre-date the analysts, who assume that there is some validity to them, as people have been using them for years. They simply inherited the model. Why question it?

Illustration by Bartolomeu Velho, 1568 AD

China and Russia Defeat the USA

A couple of recent articles on the latest wargaming effort done by RAND:

https://www.americanthinker.com/blog/2019/03/rand_corp_wargames_us_loses_to_combined_russiachina_forces.html

The opening line states: “The RAND Corporation’s annual ‘Red on Blue’ wargame simulation found that the United States would be a loser in a conventional confrontation with Russia and China.”

A few other quotes:

  1. “Blue gets its ass handed to it.”
  2. “…the U.S. forces ‘suffer heavy losses in one scenario after another and still can’t stop Russia or China from overrunning U.S. allies in the Baltics or Taiwan.’”

Also see: https://www.asiatimes.com/2019/03/article/did-rand-get-it-right-in-its-war-game-exercise/

A few quotes from that article:

  1. “The US and NATO are unable to stop an attack in the Balkans by the Russians…”
  2. “…and the United States and its allies are unable to prevent the takeover of Taiwan by China.”

The articles do not state what simulations were used to wargame this. The second article references this RAND study (RAND Report), but my quick perusal of it did not identify what simulations were used. A search on the words “model” and “wargame” produced nothing. The words “simulation” and “gaming” lead to the following:

  1.  “It draws on research, analysis, and gaming that the RAND Corporation has done in recent years, incorporating the efforts of strategists, regional specialists, experts in both conventional and irregular military operations, and those skilled in the use of combat simulation tools.”
  2. “Money, time, and talent must therefore be allocated not only to the development and procurement of new equipment and infrastructure, but also to concept development, gaming and analysis, field experimentation, and exploratory joint force exercises.”

Anyhow, I am curious as to what wargames they were using (JICM – Joint Integrated Contingency Model?). I was not able to find out with a cursory search.

Face Validation

The phrase “face validation” shows up in our blog post earlier this week on Combat Adjudication. It is a phrase I have heard many times over the decades, sometimes from very established Operations Researchers (OR). So what does it mean?

Well, it is discussed in the Department of the Army Pamphlet 5-11: Verification, Validation and Accreditation of Army Models and Simulations: Pamphlet 5-11

Their first mention of it is on page 34: “SMEs [Subject Matter Experts] or other recognized individuals in the field of inquiry. The process by which experts compare M&S [Modeling and Simulation] structure and M&S output to their estimation of the real world is called face validation, peer review, or independent review.”

On page 35 they go on to state: “RDA [Research, Development, and Acquisition]….The validation method typically chosen for this category of M&S is face validation.”

And on page 36 under Technical Methods: “Face validation. This is the process of determining whether an M&S, on the surface, seems reasonable to personnel who are knowledgeable about the system or phenomena under study. This method applies the knowledge and understanding of experts in the field and is subject to their biases. It can produce a consensus of the community if the number and breadth of experience of the experts represent the key commands and agencies. Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” [I put the last part in bold]

Page 36: “Functional decomposition (sometimes known as piecewise validation)….When used in conjunction with face validation of the overall M&S results, functional decomposition is extremely useful in reconfirming previous validation of recently modified portions of the M&S.”

I have not done a survey of all Army, Air Force, Navy, Marine Corps, Coast Guard or Department of Defense (DOD) regulations. This one is enough.

So, “face validation” is asking one or more knowledgeable (or more senior) people if the model looks good. I guess it really depends on who the expert is and to what depth they look into it. I have never seen a “face validation” report (validation reports are also pretty rare).

Whose “faces” do they use? Are they outside independent people or people inside the organization (or the model designer himself)? I am kind of an expert, yet I have never been asked. I do happen to be one of the more experienced model validation people out there, having managed or directly created six+ validation databases and having conducted five validation-like exercises. When you consider that most people have not done one, should I be a “face” they contact? Or is this process often just to “sprinkle holy water” on the model and be done?

In the end, I gather that for practical purposes the process of face validation is that if a group of people think it is good, then it is good. In my opinion, “face validation” is often just an argument that allows people to explain away or simply dismiss the need for any rigorous analysis of the model. The pamphlet does note that “Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” How often have we seen the subsequent comprehensive validation effort? Very, very rarely. It appears that “face validation” is the end point.
Is this really part of the scientific method?

Combat Adjudication

As I stated in a previous post, I am not aware of any major validation efforts done in the last 25 years other than what we have done. Still, there is one other effort that needs to be mentioned. It is described in a 2017 report: Using Combat Adjudication to Aid in Training for Campaign Planning.pdf

I gather this was work by J-7 of the Joint Staff to develop Joint Training Tools (JTT) using the Combat Adjudication Service (CAS) model. There are a few lines in the report that warm my heart:

  1. “It [JTT] is based on and expanded from Dupuy’s Quantified Judgement Method of Analysis (QJMA) and Tactical Deterministic Model.”
  2. “The CAS design used Dupuy’s data tables in whole or in part (e.g. terrain, weather, water obstacles, and advance rates).”
  3. “Non-combat power variables describing the combat environment and other situational information are listed in Table 1, and are a subset of variables (Dupuy, 1985).”
  4. “The authors would like to acknowledge COL Trevor N. Dupuy for getting Michael Robel interested in combat modeling in 1979.”

Now, there is a section labeled verification and validation. Let me quote from that:

CAS results have been “Face validated” against the following use cases:

    1. The 3:1 rule. The rule of thumb postulating an attacking force must have at least three times the combat power of the defending force to be successful.
    2. 1st (US) Infantry Division versus 26th (IQ) Infantry Division during Desert Storm
    3. The Battle of 73 Easting: 2nd ACR versus elements of the Iraqi Republican Guards
    4. 3rd (US) Infantry Division’s first five days of combat during Operation Iraqi Freedom (OIF)

Each engagement is conducted with several different terrain and weather conditions, varying strength percentages and progresses from a ground only engagement to multi-service engagements to test the effect of CASP [Close Air Support] and interdiction on the ground campaign. Several shortcomings have been detected, but thus far ground and CASP match historical results. However, modeling of air interdiction could not be validated.
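Mechanically, that description amounts to replaying the same historical engagement across a grid of terrain, weather, strength, and service-mix conditions. Below is a minimal sketch of such a test matrix; the condition values and the placeholder run function are illustrative assumptions, not details taken from the report.

```python
# Minimal sketch of a face-validation test matrix: the same engagement replayed
# across varying terrain, weather, strength, and service-mix conditions.
# All condition values here are illustrative placeholders.
from itertools import product

terrains = ["open", "mixed", "urban"]
weathers = ["clear", "rain", "sandstorm"]
strength_fractions = [1.00, 0.85, 0.70]          # varying unit strength percentages
service_mixes = ["ground only", "ground + CAS", "ground + CAS + interdiction"]

def run_engagement(terrain, weather, strength_fraction, service_mix):
    """Placeholder for one model run; a real harness would call the model and
    return its loss and advance-rate outputs for comparison against history."""
    return {"terrain": terrain, "weather": weather,
            "strength": strength_fraction, "mix": service_mix}

runs = [run_engagement(*case)
        for case in product(terrains, weathers, strength_fractions, service_mixes)]
print(f"{len(runs)} model runs generated for one historical engagement")
```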

So, this is a face validation based upon a rule of thumb and three historical cases. It is still more than I have seen anyone else do in the last 25 years.

Other Validation Data Bases

There have been (only) three other major historical validations done that I am aware of that we were not involved in. They are 1) the validation of the ATLAS model against the France 1940 campaign, done in the 1970s, 2) the validation of the Vector model using the Golan Heights campaign of 1973, and 3) the validation of SIMNET/JANUS using 73 Easting data from the 1991 Gulf War. I am not aware of any other major validation efforts done in the last 25 years other than what we have done (there is one face validation done in 2017 that I will discuss in a later post).

I have never seen a validation report for the ATLAS model and never seen a reference to any of its research or data from the France 1940 campaign. I suspect it does not exist. The validation of Vector was only done for unit movement. They did not validate the attrition or combat functions. These were inserted from the actual battle. The validation was done in-house by Vector, Inc. I have seen the reports from that effort but am not aware of any databases or special research used. See Chapter 18 of War by Numbers for more details and also our newsletter in 1996 on the subject: http://www.dupuyinstitute.org/pdf/v1n4.pdf

So, I know of only one useful validation database out there that was not created by us. This is the Battle of 73 Easting. It was created under contract and used for validation of the JTLS (Joint Theater-Level Simulation).

But the Battle of 73 Easting was a strange, one-sided affair. First, it was fought in a sandstorm, so visibility was severely limited. Our modern systems allowed us to see the Iraqis; the Iraqis could not see us. The result was very lopsided: the U.S. had maybe 6 soldiers killed and 19 wounded, and lost one Bradley fighting vehicle. The Iraqis suffered perhaps 600-1,000 casualties and lost dozens of tanks to combat (and dozens more to aerial bombardment in the days and weeks before the battle). According to Wikipedia they lost 160 tanks and 180 armored personnel carriers. It was a shooting gallery. I did have a phone conversation with some of the people who did the veteran interviews for this effort. They said that the fight devolved to the point that U.S. troops were trying to fire in front of the Iraqi soldiers to encourage them to surrender. Over 1,300 Iraqis were taken prisoner.

This battle is discussed in the Wikipedia article here: https://en.wikipedia.org/wiki/Battle_of_73_Easting

I did get the validation report on this, and it is somewhere in our files (although I have not seen it for years). I do remember one significant aspect of the validation effort: while it indeed got the correct result (all the Iraqis were destroyed), it did so with the Americans using four times as much ammunition as they did historically. Does this mean that the model’s attrition calculation was off by a factor of four?

Anyhow, I gather the database and the validation report are available from the U.S. government. Of course, it is a very odd battle, and doing a validation against just one odd, one-sided battle runs the danger of the “N=1” problem. It is probably best to do validations against multiple battles.

A more recent effort (2017) that included some validation effort is discussed in a report called “Using Combat Adjudication to Aid in Training for Campaign Planning.” I will discuss this in a later blog post.

Now, there are a number of other databases out there addressing warfare, for example the Correlates of War (COW) databases (see: COW), the databases maintained by the Stockholm International Peace Research Institute (SIPRI) (see: SIPRI), and other such efforts. We have never used these, but we do not think that by their nature they are useful for validating combat models at division, battalion or company level.

Other TDI Data Bases

What we have listed in the previous articles is what we consider the six best databases to use for validation. The Ardennes Campaign Simulation Data Base (ACSDB) was used for a validation effort by CAA (Center for Army Analysis). The Kursk Data Base (KDB) was never used for a validation effort but was used, along with Ardennes, to test Lanchester equations (they failed).

The Use of the Two Campaign Data Bases
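As an aside, “testing Lanchester equations” against a campaign database of this kind essentially means fitting attrition coefficients to the daily strength and loss records and then seeing how well the fitted equations reproduce the actual daily losses. Here is a minimal sketch of that idea using the Lanchester square law; the daily figures and field names are made up for illustration and are not taken from the ACSDB or KDB.

```python
# Minimal sketch of fitting Lanchester square-law coefficients to daily campaign
# data and comparing predicted to actual losses. The sample records below are
# notional placeholders, not figures from the ACSDB or KDB.
import numpy as np

days = [
    {"blue_strength": 100000, "red_strength": 150000, "blue_losses": 800, "red_losses": 1500},
    {"blue_strength":  99200, "red_strength": 148500, "blue_losses": 700, "red_losses": 1300},
    {"blue_strength":  98500, "red_strength": 147200, "blue_losses": 900, "red_losses": 1100},
]

blue = np.array([d["blue_strength"] for d in days], dtype=float)
red = np.array([d["red_strength"] for d in days], dtype=float)
blue_losses = np.array([d["blue_losses"] for d in days], dtype=float)
red_losses = np.array([d["red_losses"] for d in days], dtype=float)

# Square law: blue_losses ~ b * red_strength and red_losses ~ a * blue_strength.
# Least-squares fit of the attrition coefficients (regression through the origin).
b = (red @ blue_losses) / (red @ red)
a = (blue @ red_losses) / (blue @ blue)

print("fitted coefficients: a =", a, ", b =", b)
print("actual vs. predicted blue losses:", blue_losses, b * red)
print("actual vs. predicted red losses:", red_losses, a * blue)
```

The point of such a fit is simply to see whether constant coefficients track the historical daily losses; in the tests mentioned above, the equations failed to do so.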

The Battle of Britain Data Base, to date, has not been used for anything that we are aware of. As the program we were supporting was classified, they may have done some work with it that we are not aware of, but I do not think that is the case.

The Battle of Britain Data Base

Our three battles databases, the division-level data base, the battalion-level data base and the company-level data base, have all been used for validating our own TNDM (Tactical Numerical Deterministic Model). These efforts have been written up in our newsletters (here: http://www.dupuyinstitute.org/tdipub4.htm) and briefly discussed in Chapter 19 of War by Numbers. These are very good databases to use for validating a combat model or testing a casualty estimation methodology. We have also used them for a number of other studies (Capture Rate, Urban Warfare, Lighter-Weight Armor, Situational Awareness, Casualty Estimation Methodologies, etc.). They are extremely useful tools for analyzing the nature of conflict and how various factors affect it. They are, of course, unique to The Dupuy Institute, and for obvious business reasons we do keep them close hold.

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

We do have a number of other databases that have not been used as much. There is a list of 793 conflicts from 1898-1998 that we have yet to use for anything (the WACCO – Warfare, Armed Conflict and Contingency Operations database). There is the Campaign Data Base (CaDB) of 196 cases from 1904 to 1991, which was used for the Lighter Weight Armor study. There are three databases mostly made up of cases from the original Land Warfare Data Base (LWDB) that did not fit into our division-level, battalion-level, and company-level data bases. They are the Large Action Data Base (LADB) of 55 cases from 1912-1973, the Small Action Data Base (SADB) of 5 cases, and the Battles Data Base (BaDB) of 243 cases from 1600-1900. We have not used these three databases for any studies, although the BaDB is used for analysis in War by Numbers.

Finally, there are three databases on insurgencies, interventions and peacekeeping operations that we have developed. The first was the Modern Contingency Operations Data Base (MCODB), which we developed for the Bosnia estimate that we did for the Joint Staff in 1995. This is discussed in Appendix II of America’s Modern Wars. It then morphed into the Small Scale Contingency Operations (SSCO) database, which we used for the Lighter Weight Armor study. We then did the Iraq Casualty Estimate in 2004, and a significant part of the SSCO database was used to create the Modern Insurgency Spread Sheets (MISS). This is all discussed in some depth in my book America’s Modern Wars.

None of these, except the Campaign Data Base and the Battles Data Base (1600-1900), are good for use in a model validation effort. The use of the Campaign Data Base should be supplementary to validation by another database, much like we used it in the Lighter Weight Armor study.

Now, there have been three other major historical validation efforts done that we were not involved in. I will discuss their supporting data in my next post on this subject.

Battalion and Company Level Data Bases

Since the collapse of the Soviet Union in 1991, the need and desire to model combat at the division level have declined. The focus has shifted to lower levels of combat. As such, we have created the Battalion-Level Operations Data Base (BLODB) and the Company-Level Actions Data Base (CLADB).

The challenge for both of these databases is to find actions that have good data for both sides. It is the nature of military organizations that divisions have the staff and record keeping that allows one to model them. These records are often (but not always!!!) preserved. So, it is possible to assemble the data for both sides for an engagement at division level. This is true through at least World War II (up through 1945). After that, getting unit records from both sides is difficult. Usually one or both of the opponents are still keeping their records classified or close hold. This is why we ended up posting on this subject:

The Sad Story Of The Captured Iraqi DESERT STORM Documents

And:

So Why Are Iraqi Records Important?


Just to give an example of the difficulty of creating battalion-level engagements: for the southern offensive around Belgorod (Battle of Kursk) from 4-18 July 1943, I was able to create 192 engagements using the unit records for both sides. I have yet to create a single battalion-level engagement from those records. The only detailed description of a battalion-level action offered in the German records is of a mop-up operation done by the 74th Engineer Battalion. We have no idea who they were facing or what their strength was. We do have strengths at times for various German battalions, and we sometimes have strength and losses for some of the Soviet infantry and tank regiments, so it might be possible to work something up with a little estimation, but it certainly cannot be done systematically the way we have done it for division-level engagements. As the U.S. and British armies (and the USMC) tend to have better battalion-level record keeping than most other armies, it is possible to work something up from their records, if you can put together anything on their opponents. So far, our work on battalion-level and company-level combat has been more of a grab-bag, catch-as-catch-can effort that we have done over time.

Our battalion-level data base consists of 127 cases covering 1918 to 1991. It is described here: http://www.dupuyinstitute.org/data/blodb.htm The blurry photo at the start of this post is from that database.

Our company-level data base is more recent. It has not been set up yet as an Access data base. It consists of 98 cases from 1914 to 2000.

The BLODB was used for the battalion-level validation of the TNDM. This is discussed briefly in Chapter 19 of War by Numbers. These engagements are discussed in depth in four issues of our International TNDM Newsletter (see Vol. 1, Numbers 2, 4, 5, 6 here: http://www.dupuyinstitute.org/tdipub4.htm).

The CLADB was used for a study done for Boeing on casualty rates compared to unit sizes in combat. This is discussed in depth in Chapter 12: The Nature of Lower Levels of Combat in War by Numbers.

Both databases are in need of expansion. To date, we have not found anyone willing to fund such an effort.