Validation by Use

Sacrobosco, Tractatus de Sphaera (1550 AD)

Another argument I have heard over the decades is that models are validated by use. The argument, apparently, is that these models have been used for so long, and so many people have worked with their outputs, that they must be fine. I saw this argument made in writing in 1997 by a senior Army official, in response to a letter addressing validation that we encouraged TRADOC to send out:

See: http://www.dupuyinstitute.org/pdf/v1n4.pdf

I doubt that there is any regulation discussing “validation by use,” and I doubt anyone has ever defended this idea in a published paper. Still, it is an argument that I have heard far more than once or twice.

Now, part of the problem is that some of these models have been around for a few decades. For example, the core of some of the models used by CAA, such as COSAGE, first came into existence in 1969. My father worked with this model. So they are using a 50-year-old model, updated over time, to model modern warfare. RAND’s JICM (Joint Integrated Contingency Model) dates back to the 1980s, so it is at least 30 years old. The irony is that some people argue that one should not use historical warfare examples to validate models of modern warfare, even though the models themselves now have a considerable historical legacy.

From a practical point of view, this means that the people who originally designed and developed the model have long since retired. In many cases, the people who intimately knew the inner workings of the model have also retired and have not really been replaced. Some of these models have become “black boxes” whose users do not really know the details of how the models calculate their results. Suddenly, validation by use seems like a reasonable argument: the models pre-date the analysts, who assume there is some validity to them because people have been using them. They simply inherited the model. Why question it?

Illustration by Bartolomeu Velho, 1568 AD
Christopher A. Lawrence

Christopher A. Lawrence is a professional historian and military analyst. He is the Executive Director and President of The Dupuy Institute, an organization dedicated to scholarly research and objective analysis of historical data related to armed conflict and the resolution of armed conflict. The Dupuy Institute provides independent, historically-based analyses of lessons learned from modern military experience.

Mr. Lawrence was the program manager for the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Modern Insurgency Spread Sheets and for a number of other smaller combat databases. He has participated in casualty estimation studies (including estimates for Bosnia and Iraq) and studies of air campaign modeling, enemy prisoner of war capture rates, medium weight armor, urban warfare, situational awareness, counterinsurgency and other subjects for the U.S. Army, the Defense Department, the Joint Staff and the U.S. Air Force. He has also directed a number of studies related to the military impact of banning antipersonnel mines for the Joint Staff, Los Alamos National Laboratory and the Vietnam Veterans of America Foundation.

His published works include papers and monographs for the Congressional Office of Technology Assessment and the Vietnam Veterans of America Foundation, in addition to over 40 articles written for limited-distribution newsletters and over 60 analytical reports prepared for the Defense Department. He is the author of Kursk: The Battle of Prokhorovka (Aberdeen Books, Sheridan, CO, 2015), America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Casemate Publishers, Philadelphia & Oxford, 2015), War by Numbers: Understanding Conventional Combat (Potomac Books, Lincoln, NE, 2017) and The Battle of Prokhorovka (Stackpole Books, Guilford, CT, 2019).

Mr. Lawrence lives in northern Virginia, near Washington, D.C., with his wife and son.


2 Comments

  1. Hi Chris, in my paper which you kindly referenced, I used the term “face validation” to indicate that I had made an attempt to show the model could produce results close to what actually happened in three incidents:

    1st ID vs 26th Iraqi Division
    2nd ACR vs elements of the RGFC
    3rd ID’s initial operations during OIF.

    The model was built to provide a faster means of adjudication for high-level training or discussion, such as seminars.

    No one told me to validate anything. I felt, for myself, that the model should produce results “close” to historical results. The model produced more casualties for the 1st ID than we actually suffered (two dead and a handful of wounded) but was close on times and distances gained.

    I had no official records, so I used popular accounts and my memory of what the 1st ID did.

    I ran multiple iterations, changing some items for leadership, weather, and terrain, but did not change my firepower scores.

    Nor did I try to run my model against the TNDM or any other simulation.

    In the end I was basically satisfied, and so was my government customer. Ultimately, my company lost the recompete, and I do not believe use of the model has continued.

    I believe, as you do, we have to really think about validation.

    • Your face validation effort is the only validation effort, outside of The Dupuy Institute’s work, that I know of in the last 25 years. It should be a fairly standard part of any modeling effort.

      For example, we were doing some work for Boeing on the FCS (Future Combat System). They wanted a demonstration of how the FCS would perform at the Battle of Kursk (I know…this is odd). As part of that effort, we first ran the division- and corps-level engagements at Kursk through a TNDM validation test before we used the TNDM to test future combat. So, we started with validation! The results of those validation runs are published in our TNDM newsletter and in Chapter 19 of War by Numbers.
