Paul Davis (RAND) on Bugaboos

Just scanning the MORS Wargaming Special Meeting (October 2016) Final Report, dated January 31, 2017. The link to the 95-page report is here:

http://www.mors.org/Portals/23/Docs/Events/2016/Wargaming/MORS%20Wargaming%20Workshop%20Report.pdf?ver=2017-03-01-151418-980

There are a few comments from Dr. Paul Davis (RAND) starting on page 13 that are worth quoting:

I was struck throughout the workshop by a schism among attendees. One group believes, intuitively and viscerally, that human gaming (although quite powerful) is just a subset of modeling in general. The other group believes, just as intuitively and viscerally, that human gaming is very different….

The impression had deep roots. Writings in the 1950s about defense modeling and systems analysis emphasized being scientific, rigorous, quantitative, and tied to mathematics. This was to be an antidote for hand-waving subjective assertions. That desire translated into an emphasis on “closed” models with no human interactions, which allowed reproducibility. Most DoD-level models have been at theater or campaign level (e.g., IDAGAM, TACWAR, JICM, Thunder, and Storm). Many represent combat as akin to huge armies grinding each other down, as in the European theaters of World Wars I and II. Such models are quite large, requiring considerable expertise and experience to understand.

Another development was standardized scenarios and data sets, with the term “data” referring to everything from facts to highly uncertain assumptions about scenario, commander decisions, and battle outcomes. Standardization allowed common baselines, which assured that policymakers would receive reports with common assumptions rather than diverse hidden assumptions chosen to favor advocates’ programs. The baselines also promoted joint thinking and assured a level playing field for joint analysis. Such reasons were prominent in DoD’s Analytic Agenda (later called Support for Strategic Analysis). Not surprisingly, however, the tendency was often to be disdainful of such other forms of modeling as the history-based formula models of Trevor Dupuy and the commercial board games of Jim Dunnigan and Mark Herman. These alternative approaches were seen as somehow “lesser,” because they were allegedly less rigorous and scientific. Uncertainty analysis has been seriously inadequate. I have demurred on these matters for many years, as in the “Base of Sand” paper in 1993 and more recent monographs available on the RAND website….

The quantitative/qualitative split is a bugaboo. Many “soft” phenomena can be characterized with meaningful, albeit imprecise, numbers.
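
To make Davis’s last point concrete, here is a minimal sketch of what treating a “soft” phenomenon as a meaningful-but-imprecise number can look like. Everything in it is hypothetical: the morale factor, the 0.5 to 0.8 range, and the effectiveness formula are invented for illustration and are not drawn from the report or from Davis’s work.

```python
import random

# Hypothetical illustration: a "soft" factor (unit morale) expressed as an
# imprecise range rather than a single point value, then carried through a
# simple force-effectiveness estimate by Monte Carlo sampling.
# None of these numbers come from the MORS report or from Davis.

def effectiveness(base_combat_power: float, morale: float) -> float:
    """Toy relationship: morale scales combat power between 60% and 110%."""
    return base_combat_power * (0.6 + 0.5 * morale)

def monte_carlo(runs: int = 10_000) -> tuple[float, float]:
    """Mean and spread of effectiveness when morale is only known as a range."""
    results = []
    for _ in range(runs):
        morale = random.uniform(0.5, 0.8)   # "fair to good" morale, as a range
        results.append(effectiveness(100.0, morale))
    mean = sum(results) / len(results)
    spread = max(results) - min(results)
    return mean, spread

if __name__ == "__main__":
    mean, spread = monte_carlo()
    print(f"effectiveness: mean ~{mean:.1f}, spread ~{spread:.1f}")
```

The point is only that imprecision can be represented and propagated explicitly, rather than forcing a choice between a single hard number and no number at all.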

The Paul Davis “Base of Sand” paper from 1991 is here: https://www.rand.org/pubs/notes/N3148.html

 

Christopher A. Lawrence

Christopher A. Lawrence is a professional historian and military analyst. He is the Executive Director and President of The Dupuy Institute, an organization dedicated to scholarly research and objective analysis of historical data related to armed conflict and the resolution of armed conflict. The Dupuy Institute provides independent, historically-based analyses of lessons learned from modern military experience.

Mr. Lawrence was the program manager for the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Modern Insurgency Spread Sheets and for a number of other smaller combat data bases. He has participated in casualty estimation studies (including estimates for Bosnia and Iraq) and studies of air campaign modeling, enemy prisoner of war capture rates, medium weight armor, urban warfare, situational awareness, counterinsurgency and other subjects for the U.S. Army, the Defense Department, the Joint Staff and the U.S. Air Force. He has also directed a number of studies related to the military impact of banning antipersonnel mines for the Joint Staff, Los Alamos National Laboratory and the Vietnam Veterans of America Foundation.

His published works include papers and monographs for the Congressional Office of Technology Assessment and the Vietnam Veterans of America Foundation, in addition to over 40 articles written for limited-distribution newsletters and over 60 analytical reports prepared for the Defense Department. He is the author of Kursk: The Battle of Prokhorovka (Aberdeen Books, Sheridan, CO., 2015), America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Casemate Publishers, Philadelphia & Oxford, 2015), War by Numbers: Understanding Conventional Combat (Potomac Books, Lincoln, NE., 2017) and The Battle of Prokhorovka (Stackpole Books, Guilford, CT., 2019).

Mr. Lawrence lives in northern Virginia, near Washington, D.C., with his wife and son.


6 Comments

  1. Hi Chris,

    I think there is another schism in the wargaming/modeling/simulation community.

    I think there are 3 or 4 “factions”

    1. Analytical – analyzing particular problems or scenarios with present-day or mid-term equipment, organizations, and enemies.
    2. R&D/Experimental. Directly supporting the development of new equipment and organizations.
    3. Training – Simulations (Live, Virtual, or Constructive) supporting the training of units.
    4. Operational – Simulations supporting active operations.

    Factions 1 and 2 could well be the same; and they are viewed, from my perspective, as the premier members of the overall club.

    Faction 3 is somewhat less favored. While they are important, they don’t pretend to be predictive, and results from their exercises are apparently not placed in the Army (or perhaps Department of Defense) database/repository. (I think that is the correct term, and things may have changed since I learned this.)

    Faction 4 I don’t believe has any real products, but I could be mistaken. For example, the staff wargaming phase of the MDMP is still done by a bunch of guys standing around a map. This faction is also limited because using a simulation of any sort would be very time constrained and would have to run faster than real time in order to evaluate a 3 x 3 matrix of the friendly courses of action versus the enemy’s best COA, Most Likely COA, and Most Dangerous COA.
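
As a rough illustration of the 3 x 3 evaluation just described, here is a minimal sketch. The COA names, the scores, and the maximin decision rule are all invented for illustration; they are not drawn from MDMP doctrine or from any fielded simulation.

```python
# Hypothetical sketch: score three friendly COAs against the enemy's best,
# most likely, and most dangerous COAs, then pick the friendly COA whose
# worst-case outcome is best (a maximin rule). All numbers are invented.

FRIENDLY_COAS = ["Friendly COA 1", "Friendly COA 2", "Friendly COA 3"]
ENEMY_COAS = ["Enemy best", "Enemy most likely", "Enemy most dangerous"]

# scores[i][j] = assessed outcome of friendly COA i against enemy COA j,
# on a 0-1 scale where higher is better for the friendly side.
scores = [
    [0.55, 0.70, 0.40],
    [0.65, 0.60, 0.50],
    [0.80, 0.45, 0.35],
]

def pick_maximin(matrix):
    """Return the index and worst-case score of the friendly COA whose
    minimum outcome across the enemy COAs is highest."""
    best_index, best_worst = 0, float("-inf")
    for i, row in enumerate(matrix):
        worst = min(row)
        if worst > best_worst:
            best_index, best_worst = i, worst
    return best_index, best_worst

if __name__ == "__main__":
    i, worst = pick_maximin(scores)
    print(f"{FRIENDLY_COAS[i]} has the best worst-case outcome ({worst:.2f})")
```

The selection step itself is trivial; the hard part is producing the nine assessed outcomes fast enough, which is the commenter’s point about the simulation having to run faster than real time.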

    Just some more food for thought.

      • Well, it has always bothered me that the people using models for training don’t seem to be very concerned about the accuracy of their models. If you are training people using a model that has errors in it, what are you actually training them to do?

        So yes, I think the various “Base of Sand” problems should very much be an issue for training models also.

        • The programs I have worked on went through a validation stage, but it was not focused on validating the data; it was focused on whether or not the behavior and results were, in the opinion of the tester, doctrinally valid.

          Our hit and damage data did come from agencies tasked with providing appropriate data, but it did not exist for all weapon-munition-target pairs. The end customer did not try to validate that data.

          The customer does caution users not to try to use these programs to predict or ‘validate’ operational plans, and states they are probably no more than 80% accurate.

          I agree validation is an issue that manifests across all domains.

          I placed you in the analytical field because I don’t think your product takes too much time to use, but it does not have enough granularity at brigade level and below to be usable in its current form.

          I of course could be wrong.

          • Thank you for your comments on validation.

            Our primary “product” is studies and analysis, be it Iraq casualty estimates, urban warfare studies, or combat termination studies. Our combat model (the TNDM) is a small part of our work and has never been sold to the U.S. It has been validated to battalion-level combat (see Chapter 19 in War by Numbers).
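
Since validation runs through this whole exchange, here is a minimal sketch of the mechanics of a validation comparison: predicted outcomes set against historical outcomes, with the error summarized. The engagement names and figures below are placeholders invented for illustration; they are not TNDM output and not the data behind Chapter 19 of War by Numbers.

```python
# Hypothetical validation sketch: compare model-predicted attacker casualties
# with historical outcomes for a few battalion-level engagements and summarize
# the error. Names and figures are invented placeholders, not TNDM results.

engagements = [
    # (name, predicted attacker casualties, historical attacker casualties)
    ("Engagement A", 120, 135),
    ("Engagement B", 310, 250),
    ("Engagement C", 45, 60),
    ("Engagement D", 200, 190),
]

def percent_error(predicted: float, actual: float) -> float:
    """Unsigned error as a percentage of the historical value."""
    return abs(predicted - actual) / actual * 100.0

def summarize(cases):
    """Mean absolute percent error and the count of cases within 25% of history."""
    errors = [percent_error(p, a) for _, p, a in cases]
    mean_error = sum(errors) / len(errors)
    within_25 = sum(1 for e in errors if e <= 25.0)
    return mean_error, within_25

if __name__ == "__main__":
    mean_error, within_25 = summarize(engagements)
    print(f"mean absolute error: {mean_error:.1f}%")
    print(f"{within_25} of {len(engagements)} engagements within 25% of history")
```

A real validation effort would of course look at many more engagements and at more dimensions than casualties (advance rates, winner and loser, duration), but the basic comparison has this same shape.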
