Continuing my comments on the article by Jonathan Alt, Christopher Morey, and Larry Larimer in the December 2018 issue of the Phalanx (this is part 2 of 7; see part 1 here).
On the first page of the article (page 28), at the top of the third column, they make the rather declarative statement that:
The combat simulations used by military operations research and analysis agencies adhere to strict standards established by the DoD regarding verification, validation and accreditation (Department of Defense, 2009).
Now, I have not reviewed what has been done on verification, validation, and accreditation since 2009, but I did do a few fairly exhaustive reviews before then. One such review is written up in depth in The International TNDM Newsletter, Volume 1, No. 4 (February 1997). You can find it here:
http://www.dupuyinstitute.org/tdipub4.htm
The newsletter includes a letter dated 21 January 1997 from the Scientific Advisor to the CG (Commanding General) of TRADOC (Training and Doctrine Command), the same organization the three authors of the Phalanx article work for. The Scientific Advisor sent the letter to multiple commands to try to flag the issue of validation (the letter is on page 6 of the newsletter). My understanding is that he received few responses (I saw only one, from Leavenworth), and that no further action was taken after that. This was a while back, so maybe everything has changed, as I gather they are claiming with that declarative statement. I doubt it.
The issue to me is validation. Verification is often done; actual validations are a lot rarer. In 1997, this was my list of combat models in the industry that had been validated (the list is on page 7 of the newsletter):
1. Atlas (using 1940 Campaign in the West)
2. Vector (using undocumented turning runs)
3. QJM (by HERO using WWII and Middle-East data)
4. CEM (by CAA using Ardennes Data Base)
5. SIMNET/JANUS (by IDA using 73 Easting data)
Now, in 2005 we did a report on Casualty Estimation Methodologies (it is report CE-1, listed here: http://www.dupuyinstitute.org/tdipub3.htm). We reviewed the listing of validation efforts, and from 1997 to 2005… nothing new had been done (except for a battalion-level validation we had done for the TNDM). So am I now to believe that since 2009 they have actively and aggressively pursued validation? I doubt it, especially as most of this period was one of severely declining budgets. One of the arguments against validation made in meetings I attended in 1987 was that they did not have the time or budget to spend on validation, and the budget during the Cold War was luxurious by today's standards.
If there have been meaningful validations done, I would love to see the reports. The proof is in the pudding… send me the validation reports and that will resolve all doubts.
Backing up the claim that not much validation has been done, I was told by one of my ex-battalion commanders that, on the eve of Desert Storm, we had lots of data on the accuracy of tank training ammunition but very little on war shots.
In another incident, I asked a customer who would be validating our ph/pk (probability of hit/probability of kill) data, as well as other data for a product, and was told that validation was not their responsibility. The product was validated, but more in a subjective than an objective manner.
Mike, thanks for the comments. They are along similar lines to what Chris has heard over the years. Validation certainly seems more honored in the breach than in the observance, despite DoD regulations.
Out of curiosity, do you know of any other attempts to test ph/pks against operational combat data? We are aware of multiple attempts to do this with the Lanchester equations (sketched at the end of this post), but have yet to see anything like that done with ph/pks.
Cheers,
–Shawn
Sorry. That’s all I’ve got. 🙁
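(Editorial aside: for readers unfamiliar with the Lanchester reference in the exchange above, a minimal sketch follows. The Lanchester "square law" models each side's loss rate as proportional to the opposing side's strength; "testing" the equations against operational data means estimating the attrition coefficients from historical engagements and checking how well the implied casualty trajectories match what actually happened. The notation below is my own, not anything from the article or the comments.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Lanchester square-law attrition model (background sketch only).
% B(t), R(t): Blue and Red force strengths at time t.
% beta, rho:  attrition coefficients, estimated from historical data.
\begin{align*}
  \frac{dB}{dt} &= -\rho\, R(t),\\
  \frac{dR}{dt} &= -\beta\, B(t).
\end{align*}
% The model implies the invariant beta*B(t)^2 - rho*R(t)^2 = constant;
% validation studies typically fit beta and rho to engagement data and
% check whether the predicted casualty outcomes match observed ones.
\end{document}
```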