This is a follow-up to our blog post:
This rather significant effort came out of the blue for us and, I gather, for a whole lot of others in the industry. The actual proposed law is here:
Some people, we gather, are a little nervous about this effort. On the other hand, Clinton Reilly, an occasional commenter on this blog and the Managing Director of Computer Strategies Pty Ltd, Sydney Australia, nicely responds to these concerns with the following post:
I would not be too concerned by the prospect of more rigorous validation. I am sure it represents a major opportunity to improve modelling practices and obtain the increased funding that will be required to support the effort.
One of the first steps will be the development of a set of methodologies tailored to testing the types of models required. I am sure that there will be no straitjacketing or enforced uniformity, as it is obvious the needs served by the models are many and varied and cannot be met by a “one size fits all” approach.
Provided modellers prepare themselves by developing the approach to validation that their user community requires, they will be in a good position to work with the investigating committee and secure the support and funding needed.
In the end, validation is not a “pass-fail” test to be feared; it is a methodology for improving the model, building confidence in the model results, and fully understanding the strengths and weaknesses of a model. This is essential if you are going to use the model for analysis, and practically essential even if you are using it for training.
So this is an opportunity, not a threat. It is a much-needed leap forward.
Let us begin work on developing an approach to validation that suits our individual modelling requirements so that we can present them to the review committee when it asks for input.
Now, my experience on this subject, which dates back to managing the Ardennes Campaign Simulation Data Base (ACSDB) in 1987, is that many of the U.S. Military Operations Research community will not see it as “…an opportunity, not a threat.” We shall see.
A threat to those who just want to go through the motions of training and analyzing; SOP for those who expect to achieve something useful from training and analyzing.
In my experience, it is a more radical rejection than that. There are plenty of officers who genuinely think there is literally nothing to be gained from statistically modelling combat. They feel, at best, that OA or any form of statistical analysis might have some use in logistics or engineering.
You guys (the UK) invented Operational Research.
My experience is different. The officers who are going into battle seem happy to have modelling done, as it can help reduce their risk if the red team turns up unexpected problems.
The people who are concerned seem to be some of the analyst community who feel their “baby” will be torn apart by wolves. Some of them think their reputation depends on their model remaining largely unchanged.
However, if we adopt a scientific approach, then there is no absolute truth, and every time we find a shortcoming it is an opportunity to improve the model, especially if the problem is large and a rebuild is required. The bigger the flaw, the greater the improvement. Scepticism about models is healthy and should be encouraged if coupled with the will and resources needed to redevelop.
BTW – I think the officers who are concerned about military modelling are concerned about oversimplification of battlefield behaviour. Provided modelling can show through validation that it reflects battlefield behaviour reasonably accurately, then that will probably be seen as beneficial.