Tag: Base of Sand Problem

War By Numbers Published

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017), 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat, he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

Aussie OR

Over the years I have run across a number of Australian Operations Research and Historical Analysis efforts. Overall, I have been impressed with what I have seen. Below is one of their papers, written by Nigel Perry, who is not otherwise known to me. It is dated December 2011: Applications of Historical Analyses in Combat Modeling

It does address the value of Lanchester equations in force-on-force combat models, which in my mind is already a settled argument (see: Lanchester Equations Have Been Weighed). His paper is the latest argument that, as I gather, reinforces this point.

The author of this paper references the work of Robert Helmbold and Dean Hartley (see page 14). He does favorably reference the work of Trevor Dupuy but does not seem to be completely aware of the extent or full nature of it (pages 14, 16, 17, 24 and 53). He does not seem to be aware that the work of Helmbold and Hartley was built from a database that was created by Trevor Dupuy’s companies, HERO & DMSI. Without Dupuy, Helmbold and Hartley would not have had data to work from.

Specifically, Helmbold was using the Chase database, which was programmed by the government from the original paper version provided by Dupuy. I think it consisted of 597-599 battles (working from memory here). It also picked up a number of coding errors when it was programmed, and it did not include the battle narratives. Hartley had Oak Ridge National Laboratory purchase a computerized copy from Dupuy of what was by then called the Land Warfare Data Base (LWDB). It consisted of 603 or 605 engagements (and did not have the coding errors, but still did not include the narratives). As such, they both worked from almost the same database.

Dr. Perry took a copy of Hartley’s database and expanded it to create more engagements. He says he expanded it from 750 battles (although the database we sold to Hartley had 603 or 605 cases) to around 1,600. It was estimated in the 1980s by Curt Johnson (Director and VP of HERO) to take three man-days to create a battle. If this estimate is valid (actually, I think it is low), then to get to 1,600 engagements the Australian researchers either invested something like 10 man-years of research, relied heavily on secondary sources without any systematic research, or only partly developed each engagement (for example, recording only who won and lost). I suspect the latter.
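A rough back-of-the-envelope check of that figure, using Dr. Perry’s own starting point of 750 battles and assuming roughly 250 working days per man-year (my assumption, not his):

$$(1{,}600 - 750) \times 3 \approx 2{,}550 \text{ man-days} \approx 10 \text{ man-years.}$$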

Dr. Perry shows on page 25:

Data-segment Epoch    Start Year    End Year    Number of Battles    Attacker Victories    Defender Victories
Ancient                   -490         1598             63                    36                    27
17th Century              1600         1692             93                    67                    26
18th Century              1700         1798            147                   100                    47
Revolution                1792         1800            238                   168                    70
Empire                    1805         1815            327                   203                   124
ACW                       1861         1865            143                    75                    68
19th Century              1803         1905            126                    81                    45
WWI                       1914         1918            129                    83                    46
WWII                      1920         1945            233                   165                    68
Korea                     1950         1950             20                    20                     0
Post WWII                 1950         2008            118                    86                    32

 

We, of course, did something very similar. We took the Land Warfare Data Base (the 605-engagement version), expanded it considerably with WWII and post-WWII data, proofed and revised a number of engagements using more primary-source data, divided it into levels of combat (army-level, division-level, battalion-level, company-level), and conducted analysis with the 1,280 or so engagements we had. This was a much more powerful and better organized tool. We also looked at winner and loser, but used the 605-engagement version (as we did that analysis in 1996). An example of this, from pages 16 and 17 of my manuscript for War by Numbers, shows:

Attacker Won:

 

Period        Wins at Force Ratio    Wins at Force Ratio    Percent of Attacker Wins at
              1-to-1 or Greater      Less than 1-to-1       Force Ratio 1-to-1 or Greater
1600-1699             16                     18                         47%
1700-1799             25                     16                         61%
1800-1899             47                     17                         73%
1900-1920             69                     13                         84%
1937-1945            104                      8                         93%
1967-1973             17                     17                         50%
Total                278                     89                         76%

 

Defender Won:

 

Period        Wins at Force Ratio    Wins at Force Ratio    Percent of Defender Wins at
              1-to-1 or Greater      Less than 1-to-1       Force Ratio 1-to-1 or Greater
1600-1699              7                      6                         54%
1700-1799             11                     13                         46%
1800-1899             38                     20                         66%
1900-1920             30                     13                         70%
1937-1945             33                     10                         77%
1967-1973             11                      5                         69%
Total                130                     67                         66%
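In both tables, the percentage column is the share of wins in each period that came at a force ratio of one-to-one or greater. For example, for attacker wins in 1600-1699:

$$\frac{16}{16 + 18} \approx 47\%$$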

 

Anyhow, from there (pages 26-59) the report heads into an extended discussion of the analysis done by Helmbold and Hartley (which I am not that enamored with). My book heads in a different direction: War by Numbers III (Table of Contents)

 

 

Osipov

Back in 1915, a Russian named M. Osipov published a paper in a Tsarist military journal that was Lanchester-like: http://www.dtic.mil/dtic/tr/fulltext/u2/a241534.pdf

He actually tested his equations against historical data, which are presented in his paper. He ended up with something similar to the Lanchester equations, except that instead of a square law he obtained a similar effect by raising the terms to the 3/2 power.
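For reference, a minimal sketch of the distinction (the square law is standard Lanchester theory; the 3/2-power state equation is simply my rendering of the description above, not Osipov’s own notation):

$$\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A \;\;\Longrightarrow\;\; \alpha\left(A_0^{2} - A^{2}\right) = \beta\left(B_0^{2} - B^{2}\right) \quad \text{(Lanchester's square law)}$$

$$\alpha\left(A_0^{3/2} - A^{3/2}\right) = \beta\left(B_0^{3/2} - B^{3/2}\right) \quad \text{(the 3/2-power form described above)}$$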

As far as we know, because of the time it was published (June-October 1915), it was not influenced by or done with any awareness of the work that the far more famous Frederick Lanchester had done (and Lanchester was famous for a lot more than just his modeling equations). Lanchester first published his work in the fall of 1914 (after the Great War had already started). It is possible that Osipov was aware of it, but he does not mention Lanchester, and he was probably not aware of Lanchester’s work. It appears to be a case of him independently coming up with the use of differential equations to describe combat attrition. This was also the case with Rear Admiral J. V. Chase, who wrote a classified staff paper for the U.S. Navy in 1902 that was not revealed until 1972.

Osipov, after he had written his paper, may have served in World War I, which was already underway at the time it was published. Between the war, the Russian revolutions, the civil war that followed, and the subsequent repressions by the Cheka and later Stalin, we do not know what happened to M. Osipov. At the time, I was asked by CAA if our Russian research team knew about him. I passed the question to Col. Sverdlov and Col. Vainer, and they were not aware of him. It is probably possible to chase him down, but it would take some effort. Perhaps some industrious researcher will find out more about him.

It does not appear that Osipov had any influence on Soviet operations research or military analysis. It appears that he was ignored or forgotten. His article was re-published in the September 1988 issue of the Soviet Military-Historical Journal with the propaganda-influenced statement that they also had their own “Lanchester.” Of course, this “Soviet Lanchester” published in a Tsarist military journal, hardly a demonstration of the strength of the Soviet system.

 

Should Defense Department Campaign-Level Combat Modeling Be Reinstated?

Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller

In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s support for strategic analysis (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”

According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.

CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).

CAPE took the criticism to heart.

CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.

However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”

As Davis related, there were many who disagreed with CAPE’s decision at the time because of the service-independent perspective it provided.

Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)

Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.

In the current edition of Joint Force Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that in support of “the Third Offset Strategy, the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”

As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.

In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)

So, it appears that DOD can’t quit combat modeling. But that raises a question: if CAPE does resume such activities, will it pick up where it left off in 2011, or will it do things differently? I will explore that in a future post.

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]
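To make the two problems concrete, here is a toy sketch of the kind of logic Scales describes. It is not the actual TRADOC-approved model, just an illustration of the two behaviors he identifies: units coded as “infantry” automatically open fire when an enemy is within range and visibility, and a unit entered with a vulnerable area of zero can never be killed.

```python
# Toy illustration of the model behaviors described above -- not the actual model.

def opens_fire(unit, enemy_range_km, visibility_km):
    """Direct-fire combat arms units engage any enemy within weapon range and visibility."""
    return unit["type"] == "infantry" and enemy_range_km <= min(unit["weapon_range_km"], visibility_km)

def expected_hits(shots, vulnerable_area_m2, target_area_m2):
    """Hits scale with the target's vulnerable area; a vulnerable area of zero means no kills."""
    return shots * (vulnerable_area_m2 / target_area_m2)

# An LRS team coded as "infantry" deep behind enemy lines is compelled to engage
# passing enemy columns -- the "immediate suicidal attack" Scales describes.
lrs_team = {"type": "infantry", "weapon_range_km": 0.5}
print(opens_fire(lrs_team, enemy_range_km=0.4, visibility_km=5.0))                # True

# A Firefinder radar entered with a vulnerable area of 0 can never be killed.
print(expected_hits(shots=10_000, vulnerable_area_m2=0.0, target_area_m2=50.0))   # 0.0
```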

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what these are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge backing these combat theories and concepts was identified as the “base of sand” problem and was originally pointed out by Trevor Dupuy, among others, a long time ago. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but it persists to this day.
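For readers who have not seen one, here is a minimal sketch of what adjudicating attrition with Lanchester equations typically looks like inside such a model. This is a generic, time-stepped square-law loop of my own, not drawn from any particular DOD simulation, and the coefficients are arbitrary.

```python
# Generic time-stepped Lanchester square-law attrition loop (illustrative only).
def lanchester_attrition(blue, red, blue_kill_rate, red_kill_rate, dt=0.1, max_time=10.0):
    """Each side's losses per time step are proportional to the size of the opposing force."""
    t = 0.0
    while blue > 0 and red > 0 and t < max_time:
        blue_losses = red_kill_rate * red * dt
        red_losses = blue_kill_rate * blue * dt
        blue = max(0.0, blue - blue_losses)
        red = max(0.0, red - red_losses)
        t += dt
    return blue, red

# Example: 1,000 blue versus 1,500 red with equal per-shooter effectiveness.
print(lanchester_attrition(1000.0, 1500.0, blue_kill_rate=0.01, red_kill_rate=0.01))
```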

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Uncongenial Lessons of Past Conflicts

Williamson Murray, professor emeritus of history at Ohio State University, on the notion that military failures can be traced to an overemphasis on the lessons of the last war:

It is a myth that military organizations tend to do badly in each new war because they have studied too closely the last one; nothing could be farther from the truth. The fact is that military organizations, for the most part, study what makes them feel comfortable about themselves, not the uncongenial lessons of past conflicts. The result is that more often than not, militaries have to relearn in combat—and usually at a heavy cost—lessons that were readily apparent at the end of the last conflict.

[Williamson Murray, “Thinking About Innovation,” Naval War College Review, Spring 2001, 122-123. This passage was cited in a recent essay by LTG H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review, March-April 2015. I recommend reading both.]

Studying The Conduct of War: “We Surely Must Do Better”

"The Ultimate Sand Castle" [Flickr, Jon]
“The Ultimate Sand Castle” [Flickr, Jon]

Chris and I have both previously discussed the apparent waning interest on the part of the Department of Defense in sponsoring empirical research into the basic phenomena of modern warfare. The U.S. government’s boom-or-bust approach to this is long-standing, extending back at least to the Vietnam War. Recent criticism of the Department of Defense’s Office of Net Assessment (OSD/NA) is unlikely to help. Established in 1973 and led by the legendary Andrew “Yoda” Marshall until 2015, OSD/NA plays an important role in funding basic research on topics of crucial importance to the art of net assessment. Critics of the office appear to be unaware of just how thin the actual base of empirical knowledge on the conduct of war is. Marshall understood that the net result of a net assessment based mostly on guesswork was likely to be useless, or worse, misleadingly wrong.

This lack of attention to the actual conduct of war extends beyond government sponsored research. In 2004, Stephen Biddle, a professor of political science at George Washington University and a well-regarded defense and foreign policy analyst, published Military Power: Explaining Victory and Defeat in Modern Battle. The book focused on a very basic question: what causes victory and defeat in battle? Using a comparative approach that incorporated quantitative and qualitative methods, he effectively argued that success in contemporary combat was due to the mastery of what he called the “modern system.” (I won’t go into detail here, but I heartily recommend the book to anyone interested in the topic.)

Military Power was critically acclaimed and received multiple awards from academic, foreign policy, military, operations research, and strategic studies organizations. For all the accolades, however, Biddle was quite aware just how neglected the study of war has become in U.S. academic and professional communities. He concluded the book with a very straightforward assessment:

[F]or at least a generation, the study of war’s conduct has fallen between the stools of the institutional structure of modern academia and government. Political scientists often treat war itself as outside their subject matter; while its causes are seen as political and hence legitimate subjects of study, its conduct and outcomes are more often excluded. Since the 1970s, historians have turned away from the conduct of operations to focus on war’s effects on social, economic, and political structures. Military officers have deep subject matter knowledge but are rarely trained as theoreticians and have pressing operational demands on their professional attention. Policy analysts and operations researchers focus so tightly on short-deadline decision analysis (should the government buy the F22 or cancel it? Should the Army have 10 divisions or 8?) that underlying issues of cause and effect are often overlooked—even when the decisions under analysis turn on embedded assumptions about the causes of military outcomes. Operations research has also gradually lost much of its original empirical focus; modeling is now a chiefly deductive undertaking, with little systematic effort to test deductive claims against real world evidence. Over forty years ago, Thomas Schelling and Bernard Brodie argued that without an academic discipline of military science, the study of the conduct of war had languished; the passage of time has done little to overturn their assessment. Yet the subject is simply too important to treat by proxy and assumption on the margins of other questions. In the absence of an institutional home for the study of warfare, it is all the more essential that analysts in existing disciplines recognize its importance and take up the business of investigating capability and its causes directly and rigorously. Few subjects are more important—or less studied by theoretical social scientists. With so much at stake, we surely must do better. [pp. 207-208]

Biddle published Military Power 12 years ago, in 2004. Has anything changed substantially? Have we done better?

Estimating Combat Casualties II

Just a few comments on this article:

  1. One notes the claim of 30,000 killed for the 1991 Gulf War. This was typical of some of the discussion at the time. As we know, the real figure was much, much lower.
  2. Note that Jack Anderson is quoting some “3-to-1 Rule.” We are not big fans of “3-to-1 Rules.” Trevor Dupuy does briefly refute it.
  3. Trevor Dupuy does end the discussion by mentioning “combat power ratios.” This is not quite the same as “force ratios.”

Anyhow, this is an interesting blast from the past, although we were also having some of this same discussion a little over a week ago at a presentation we gave.

 

Estimating Combat Casualties I

Shawn Woodford was recently browsing in a used bookstore in Annapolis. He came across a copy of A Genius for War. Tucked in the front cover was this clipping from the Washington Post. It is undated, but makes reference to a Jack Anderson article from 1 November, presumably 1990. So it must have been published sometime shortly thereafter.

[Image: Washington Post clipping on estimating combat casualties, circa November 1990]

 

Assessing the 1990-1991 Gulf War Forecasts

A number of forecasts of potential U.S. casualties in a war to evict Iraqi forces from Kuwait appeared in the media in the autumn of 1990. The question of the human costs became a political issue for the administration of George H. W. Bush and influenced strategic and military decision-making.

Almost immediately following President Bush’s decision to commit U.S. forces to the Middle East in August 1990, speculation appeared in the media about what a war between Iraq and a U.S.-led international coalition might entail. In early September, U.S. News & World Report reported “that the U.S. Joint Chiefs of Staff and the National Security Council estimated that the United States would lose between 20,000 and 30,000 dead and wounded soldiers in a Gulf war.” The Bush administration declined official comment on these figures at the time, but the media indicated that they were derived from Defense Department computer models used to wargame possible conflict scenarios.[1] The numbers shocked the American public and became unofficial benchmarks in subsequent public discussion and debate.

A Defense Department wargame exploring U.S. options in Iraq had taken place on 25 August, the results of which allegedly led to “major changes” in military planning.[2] Although linking the wargame and the reported casualty estimate is circumstantial, the cited figures were very much in line with other contemporary U.S. military casualty estimates. A U.S. Army Personnel Command [PERSCOM] document that informed U.S. Central Command [USCENTCOM] troop replacement planning, likely based on pre-crisis plans for the defense of Saudi Arabia against possible Iraqi invasion, anticipated “about 40,000” total losses.[3]

These early estimates were very likely to have been based on a concept plan involving a frontal attack on Iraqi forces in Kuwait using a single U.S. Army corps and a U.S. Marine Expeditionary Force. In part due to concern about potential casualties from this course of action, the Bush administration approved USCENTCOM commander General Norman Schwarzkopf’s preferred concept for a flanking offensive using two U.S. Army corps and additional Marine forces.[4] Despite major reinforcements and a more imaginative battle plan, USCENTCOM medical personnel reportedly briefed Defense Secretary Dick Cheney and Joint Chiefs Chairman Colin Powell in December 1990 that they were anticipating 20,000 casualties, including 7,000 killed in action.[5] Even as late as mid-February 1991, PERSCOM was forecasting 20,000 U.S. casualties in the first five days of combat.[6]

The reported U.S. government casualty estimates prompted various public analysts to offer their own public forecasts. One anonymous “retired general” was quoted as saying “Everyone wants to have the number…Everyone wants to be able to say ‘he’s right or he’s wrong, or this is the way it will go, or this is the way it won’t go, or better yet, the senator or the higher-ranking official is wrong because so-and-so says that the number is this and such.’”[7]

Trevor Dupuy’s forecast was among the first to be cited by the media[8], and he presented it before a hearing of the Senate Armed Services Committee in December.

Other prominent public estimates were offered by political scientists Barry Posen and John J. Mearsheimer, and military analyst Joshua Epstein. In November, Posen projected that the Coalition would initiate an air offensive that would quickly gain air superiority, followed by a frontal ground attack lasting approximately 20 days and incurring 4,000 (with 1,000 dead) to 10,000 (worst case) casualties. He used the historical casualty rates experienced by Allied forces in Normandy in 1944 and the Israelis in 1967 and 1973 as a rough baseline for his prediction.[9]

Epstein’s prediction in December was similar to Posen’s. Coalition forces would begin with a campaign to obtain control of the air, followed by a ground attack that would succeed within 15-21 days, incurring between 3,000 and 16,000 U.S. casualties, with 1,049-4,136 killed. Like Dupuy, Epstein derived his forecast from a combat model, the Adaptive Dynamic Model.[10]

On the eve of the beginning of the air campaign in January 1991, Mearsheimer estimated that Coalition forces would defeat the Iraqis in a week or less and that U.S. forces would suffer fewer than 1,000 killed in combat. Mearsheimer’s forecast was based on a qualitative analysis of Coalition and Iraqi forces as opposed to a quantitative one. Although like everyone else he failed to foresee the extended air campaign, and believed that successful air/land breakthrough battles in the heart of the Iraqi defenses would minimize casualties, he did fairly evaluate the disparity in quality between Coalition and Iraqi combat forces.[11]

In the aftermath of the rapid defeat of Iraqi forces in Kuwait, the media duly noted the singular accuracy of Mearsheimer’s prediction.[12] The relatively disappointing performance of the quantitative models, especially the ones used by the Defense Department, punctuated debates within the U.S. military operations research community over the state of combat modeling. RAND analysts Paul Davis and Donald Blumenthal dubbed this “the base of sand problem,” and serious questions were raised about the accuracy and validity of the methodologies and constructs that underpinned the models.[13] Twenty-five years later, many of these questions remain unaddressed. Some of these will be explored in future posts.

NOTES

[1] “Potential War Casualties Put at 100,000; Gulf crisis: Fewer U.S. troops would be killed or wounded than Iraq soldiers, military experts predict,” Reuters, 5 September 1990; Benjamin Weiser, “Computer Simulations Attempting to Predict the Price of Victory,” Washington Post, 20 January 1991

[2] Brian Shellum, A Chronology of Defense Intelligence in the Gulf War: A Research Aid for Analysts (Washington, D.C.: DIA History Office, 1997), p. 20

[3] John Brinkerhoff and Theodore Silva, The United States Army Reserve in Operation Desert Storm: Personnel Services Support (Alexandria, VA: ANDRULIS Research Corporation, 1995), p. 9, cited in Brian L. Hollandsworth, “Personnel Replacement Operations during Operations Desert Storm and Desert Shield” Master’s Thesis (Ft. Leavenworth, KS: U.S. Army Command and General Staff College, 2015), p. 15

[4] Richard M. Swain, “Lucky War”: Third Army in Desert Storm (Ft. Leavenworth, KS: U.S. Army Command and General Staff College Press, 1994)

[5] Bob Woodward, The Commanders (New York: Simon and Schuster, 1991)

[6] Swain, “Lucky War”, p. 205

[7] Weiser, “Computer Simulations Attempting to Predict the Price of Victory”

[8] “Potential War Casualties Put at 100,000,” Reuters

[9] Barry R. Posen, “Political Objectives and Military Options in the Persian Gulf,” Defense and Arms Control Studies Working Paper (Cambridge, MA: Massachusetts Institute of Technology, November 1990)

[10] Joshua M. Epstein, “War with Iraq: What Price Victory?” Briefing Paper, Brookings Institution, December 1990, cited in Michael O’Hanlon, “Estimating Casualties in a War to Overthrow Saddam,” Orbis, Winter 2003; Weiser, “Computer Simulations Attempting to Predict the Price of Victory”

[11] John J. Mearsheimer, “A War the U.S. Can Win—Decisively,” Chicago Tribune, 15 January 1991

[12] Mike Royko, “Most Experts Really Blew It This Time,” Chicago Tribune, 28 February 1991

[13] Paul K. Davis and Donald Blumenthal, “The Base of Sand Problem: A White Paper on the State of Military Combat Modeling” (Santa Monica, CA: RAND, 1991)