U.S. Senate and Model Validation – Comments

This is a follow-up to our blog post:

Have They Been Reading Our Blog?

This rather significant effort came out of the blue for us, and I gather a whole lot of others in the industry. The actual proposed law is here:

U.S. Senate on Model Validation

Some people, we gather, are a little nervous about this effort. On the other hand, Clinton Reilly, an occasional commenter on this blog and the Managing Director of Computer Strategies Pty Ltd, Sydney Australia, nicely responds to these concerns with the following post:

I would not be too concerned by the prospect of more rigorous validation. I am sure it represents a major opportunity to improve modelling practices and obtain the increased funding that will be required to support the effort.

One of the first steps will be the development of a set of methodologies tailored to testing the types of models required. I am sure that there will be no straitjacketing or enforced uniformity, as it is obvious the needs served by the models are many and varied and cannot be met by a “one size fits all” approach.

Provided modellers prepare themselves by developing an approach to validation suited to their user community, they will be in a good position to work with the investigating committee and secure the support and funding needed.

In the end, validation is not a “pass-fail” test to be feared; it is a methodology to improve the model, increase confidence in the model results, and fully understand a model’s strengths and weaknesses. This is essential if you are going to use the model for analysis, and practically essential even if you are using it for training.

So this is an opportunity not a threat. It is a much needed leap forward.

Let us begin work on developing an approach to validation that suits our individual modelling requirements so that we can present them to the review committee when it asks for input.

Now, my experience on this subject, which dates back to managing the Ardennes Campaign Simulation Data Base (ACSDB) in 1987, is that many of the U.S. Military Operations Research community will not see it as “…an opportunity, not a threat.” We shall see.

Christopher A. Lawrence

Christopher A. Lawrence is a professional historian and military analyst. He is the Executive Director and President of The Dupuy Institute, an organization dedicated to scholarly research and objective analysis of historical data related to armed conflict and the resolution of armed conflict. The Dupuy Institute provides independent, historically-based analyses of lessons learned from modern military experience.

Mr. Lawrence was the program manager for the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Modern Insurgency Spread Sheets and a number of other smaller combat data bases. He has participated in casualty estimation studies (including estimates for Bosnia and Iraq) and studies of air campaign modeling, enemy prisoner of war capture rates, medium weight armor, urban warfare, situational awareness, counterinsurgency and other subjects for the U.S. Army, the Defense Department, the Joint Staff and the U.S. Air Force. He has also directed a number of studies related to the military impact of banning antipersonnel mines for the Joint Staff, Los Alamos National Laboratory and the Vietnam Veterans of America Foundation.

His published works include papers and monographs for the Congressional Office of Technology Assessment and the Vietnam Veterans of America Foundation, in addition to over 40 articles written for limited-distribution newsletters and over 60 analytical reports prepared for the Defense Department. He is the author of Kursk: The Battle of Prokhorovka (Aberdeen Books, Sheridan, CO., 2015), America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Casemate Publishers, Philadelphia & Oxford, 2015), War by Numbers: Understanding Conventional Combat (Potomac Books, Lincoln, NE., 2017) and The Battle of Prokhorovka (Stackpole Books, Guilford, CT., 2019).

Mr. Lawrence lives in northern Virginia, near Washington, D.C., with his wife and son.


5 Comments

  1. A threat to those who just want to go through the motions of training and analyzing; SOP for those who expect to achieve something useful from training and analyzing.

  2. In my experience, it is a more radical rejection than that. There are plenty of officers who genuinely think there is literally nothing to be gained from statistically modelling combat. They feel, at best, that OA or any form of statistical analysis might have some use in logistics or engineering.

    • My experience is different. The officers who are going into battle seem happy to have modelling done, as it can help reduce their risk if the red team turns up unexpected problems.

      The people who are concerned seem to be some of the analyst community who feel their “baby” will be torn apart by wolves. Some of them think their reputation depends on their model remaining largely unchanged.

      However, if we adopt a scientific approach, then there is no absolute truth, and every time we find a shortcoming it is an opportunity to improve the model, especially if the problem is large and a rebuild is required. The bigger the flaw, the greater the improvement. Scepticism about models is healthy and should be encouraged if coupled with the will and resources needed to redevelop.

      • BTW – I think the officers who are concerned about military modelling are concerned about an oversimplification of battlefield behaviour. Provided modelling can show through validation that it reflects battlefield behaviour reasonably accurately, then it will probably be seen as beneficial.
