Tag Trevor N. Dupuy

Predictions

We do like to claim we have predicted the casualty rates correctly in three wars (operations): 1) the 1991 Gulf War, 2) the 1995 Bosnia intervention, and 3) the Iraq insurgency. Furthermore, these were predictions made for three very different types of operations: a conventional war, an “operation other than war” (OOTW), and an insurgency.

The Gulf War prediction was made in public testimony by Trevor Dupuy to Congress and published in his book If War Comes: How to Defeat Saddam Hussein. It is discussed in my book America’s Modern Wars (AMW) pages 51-52 and in some blog posts here.

The Bosnia intervention prediction is discussed in Appendix II of AMW and the Iraq casualty estimate is Chapter 1 and Appendix I.

We like to claim that we are three for three on these predictions. What does that really mean? If the odds of making a correct prediction are 50/50 (the same as a coin toss), then the odds of getting three correct predictions in a row are 12.5%. We may not be particularly clever, just a little lucky.

On the other hand, some might argue that these predictions were not that hard to make, and knowledgeable experts would certainly predict correctly at least two-thirds of the time. In that case, the odds of getting three correct predictions in a row are more like 30%.

Still, one notes that there were a lot of predictions concerning the Gulf War that were higher than Trevor Dupuy’s. In the case of Bosnia, the Joint Staff was informed by a senior OR (Operations Research) officer in the Army that there was no methodology for predicting losses in an “operation other than war” (AMW, page 309). In the case of the Iraq casualty estimate, we were informed by a director of an OR organization that our estimate was too high, and that the U.S. would suffer fewer than 2,000 killed and be withdrawn in a couple of years (Shawn was at that meeting). I think I left that out of my book in its more neutered final draft… my first draft was more detailed and maybe a little too “angry.” So maybe predicting casualties in military operations is a little tricky. If the odds of a correct prediction were only one-in-three, then the odds of getting three correct predictions in a row are only about 4%. For marketing purposes, we like this argument better 😉
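For those who want to check the arithmetic, the odds in all three scenarios are just the cube of the assumed per-prediction success rate. A minimal sketch (treating the three predictions as independent is itself an assumption):

```python
# Odds of three correct predictions in a row under three assumed
# per-prediction success rates (independence assumed).
for label, p in [("coin toss", 1/2), ("expert", 2/3), ("hard problem", 1/3)]:
    print(f"{label}: p = {p:.2f}, p^3 = {p**3:.1%}")

# coin toss: p = 0.50, p^3 = 12.5%
# expert: p = 0.67, p^3 = 29.6%
# hard problem: p = 0.33, p^3 = 3.7%
```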

It is hard to say what the odds of making a correct prediction are. The only war that drew multiple public predictions (and of course, several private and classified ones) was the 1991 Gulf War. A number of predictions were made, and we believe most were pretty high. There were no other predictions we are aware of for Bosnia in 1995, other than the “it could turn into another Vietnam” ones. There were no other predictions we are aware of for Iraq in 2004, although lots of people were expressing opinions on the subject. So it is hard to say how difficult it is to make a correct prediction in these cases.

P.S.: Yes, this post was inspired by my previous post on the Stanley Cup play-offs.


Logistics in Trevor Dupuy’s Combat Models

Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), p. 79

Mystics & Statistics reader Stiltzkin posed two interesting questions in response to my recent post on the new blog, Logistics in War:

Is there actually a reliable way of calculating logistical demand in correlation to “standing” ration strength/combat/daily strength army size?

Did Dupuy ever focus on logistics in any of his work?

The answer to his first question is, yes, there is. In fact, this has been a standard military staff function since before there were military staffs (Martin van Creveld’s book, Supplying War: Logistics from Wallenstein to Patton (2nd ed.), is an excellent general introduction). Staff officers’ guides and field manuals from various armies from the 19th century to the present are full of useful information on field supply allotments and consumption estimates intended to guide battlefield sustainment. The records of modern armies also contain reams of bureaucratic records documenting logistical functions as they actually occurred. Logistics and supply is a woefully under-studied aspect of warfare, but not because there are no sources upon which to draw.

As to his second question, the answer is also yes. Dupuy addressed logistics in his work in a couple of ways. He included two logistics multipliers in his combat models: one in the calculation of the battlefield effects of weapons, the Operational Lethality Index (OLI), and another as one element of the value for combat effectiveness, which is itself a multiplier in his combat power formula.

Dupuy considered the impact of logistics on combat to be intangible, however. From his historical study of combat, Dupuy understood that logistics impacted both weapons and combat effectiveness, but in the absence of empirical data, he relied on subject matter expertise to assign it a specific value in his model.

Logistics or supply capability is basic in its importance to combat effectiveness. Yet, as in the case of the leadership, training, and morale factors, it is almost impossible to arrive at an objective numerical assessment of the absolute effectiveness of a military supply system. Consequently, this factor also can be applied only when solid historical data provides a basis for objective evaluation of the relative effectiveness of the opposing supply capabilities.[1]

His approach to this stands in contrast to other philosophies of combat model design, which hold that if a factor cannot be empirically measured, it should not be included in a model. (It is up to the reader to decide if this is a valid approach to modeling real-world phenomena or not.)

Yet, as with many aspects of the historical study of combat, Dupuy and his colleagues at the Historical Evaluation and Research Organization (HERO) had taken an initial cut at empirical research on the subject. In the late 1960s and early 1970s, Dupuy and HERO conducted a series of studies for the U.S. Air Force on the historical use of air power in support of ground warfare. One line of inquiry looked at the effects of air interdiction on supply, specifically at Operation STRANGLE, an effort by the U.S. and British air forces to completely block the lines of communication and supply of German ground forces defending Rome in 1944.

Dupuy and HERO dug deeply into Allied and German primary source documentation to extract extensive data on combat strengths and losses, logistical capabilities and capacities, supply requirements, and aircraft sorties and bombing totals. Dupuy proceeded from a historically-based assumption that combat units, using expedients, experience, and training, could operate unimpaired while only receiving up to 65% of their normal supply requirements. If the level of supply dipped below 65%, the deficiency would begin impinging on combat power at a rate proportional to the percentage of loss (i.e., a 60% supply rate would impose a 5% decline, represented as a combat effectiveness multiplier of .95, and so on).
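Expressed as code, the proportional rule described above looks something like this. This is a minimal sketch of the stated rule only, not Dupuy's full STRANGLE methodology, and the function name is mine:

```python
def supply_effectiveness_multiplier(supply_rate: float) -> float:
    """Combat effectiveness multiplier from the supply rule stated above:
    no penalty at or above 65% of normal supply; below that threshold,
    combat power declines point-for-point with the shortfall."""
    THRESHOLD = 0.65
    if supply_rate >= THRESHOLD:
        return 1.0
    return 1.0 - (THRESHOLD - supply_rate)

print(supply_effectiveness_multiplier(0.60))  # 0.95, the example in the text
```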

Using this as a baseline, Dupuy and HERO calculated the amount of aerial combat power the Allies needed to apply to impact German combat effectiveness. They determined that Operation STRANGLE was able to reduce German supply capacity to about 41.8% of normal, which yielded a reduction in the combat power of German ground combat forces by an average of 6.8%.

He cautioned that these calculations were “directly relatable only to the German situation as it existed in Italy in late March and early April 1944.” As detailed as the analysis was, Dupuy stated that it “may be an oversimplification of a most complex combination of elements, including road and railway nets, supply levels, distribution of targets, and tonnage on targets. This requires much further exhaustive analysis in order to achieve confidence in this relatively simple relationship of interdiction effort to supply capability.”[2]

The historical work done by Dupuy and HERO on logistics and combat appears unique, but it seems highly relevant. There is no lack of detailed data from which to conduct further inquiries. The only impediment appears to be lack of interest.

NOTES

 [1] Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), p. 38.

[2] Ibid., pp. 78-94.

[NOTE: This post was edited to clarify the effect of supply reduction through aerial interdiction in the Operation STRANGLE study.]

Trevor Dupuy and Historical Trends Related to Weapon Lethality

There appears to be renewed interest in U.S. Army circles in Trevor Dupuy’s theory of a historical relationship between increasing weapon lethality, declining casualty rates, and greater dispersion on the battlefield. A recent article by Army officer and strategist Aaron Bazin, “Seven Charts That Help Explain American War” at The Strategy Bridge, used a composite version of two of Dupuy’s charts to explain the American military’s attraction to technology. (The graphic in Bazin’s article originated in a 2009 Australian Army doctrinal white paper, “Army’s Future Land Operating Concept,” which evidently did not cite Dupuy as the original source for the charts or the associated concepts.)

John McRea, like Bazin a U.S. Army officer and a founding member of The Military Writers Guild, reposted Dupuy’s graphic in a blog post entitled “Outrageous Fortune: Spears and Arrows,” examining tactical and economic considerations in the use of asymmetrical technologies in warfare.

Dr. Conrad Crane, Chief of Historical Services for the U.S. Army Heritage and Education Center at the Army War College, also referenced Dupuy’s concepts in his look at human performance requirements, “The Future Soldier: Alone in a Crowd,” at War on the Rocks.

Dupuy originally developed his theory based on research and analysis undertaken by the Historical Evaluation and Research Organization (HERO) in 1964, for a study he directed, “Historical Trends Related to Weapon Lethality.” (Annex I, Annex II, Annex III). HERO had been contracted by the Advanced Tactics Project (AVTAC) of the U.S. Army Combat Developments Command, to provide unclassified support for Project OREGON TRAIL, a series of 45 classified studies of tactical nuclear weapons, tactics, and organization, which took 18 months to complete.

AVTAC asked HERO “to identify and analyze critical relationships and the cause-effect aspects of major advances in the lethality of weapons and associated changes in tactics and organization” from the Roman Era to the present. HERO’s study itself was a group project, incorporating 58 case studies from 21 authors, including such scholars as Gunther E. Rothenberg, Samuel P. Huntington, S.L.A. Marshall, R. Ernest Dupuy, Grace P. Hayes, Louis Morton, Peter Paret, Stefan T. Possony, and Theodore Ropp.

Dupuy synthesized and analyzed these case studies for the HERO study’s final report. He described what he was seeking to establish in his 1979 book, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles.

If the numbers of military history mean anything, it appears self-evident that there must be some kind of relationship between the quantities of weapons employed by opposing forces in combat, and the number of casualties suffered by each side. It also seems fairly obvious that some weapons are likely to cause more casualties than others, and that the effectiveness of weapons will depend upon their ability to reach their targets. So it becomes clear that the relationship of weapons to casualties is not quite the simple matter of comparing numbers to numbers. To compare weapons to casualties it is necessary to know not only the numbers of weapons, but also how many there are of each different type, and how effective or lethal each of these is.

The historical relationship between lethality, casualties, and dispersion that Dupuy deduced in this study provided the basis for his subsequent quest to establish an empirically-based, overarching theory of combat, which he articulated through his Quantified Judgement Model. Dupuy refined and updated the analysis from the 1964 HERO study in his 1980 book, The Evolution of Weapons and Warfare.

Mosul and ISF Combat Effectiveness

The situation in Mosul, 16-19 December 2016 (Institute for the Study of War)

After a period of “operational refit,” Iraqi Security Forces (ISF) waging battle with Daesh fighters for control of the city of Mosul launched a new phase of their advance on 29 December. The initial phase of the assault, which began on 17 October 2016, ground to a halt due to strong Daesh resistance and heavy casualties among the Iraqi Counterterrorism Service (CTS) troops spearheading the operation.

For the new offensive, the CTS was reinforced with additional Iraqi Army ground units, as well as an armored element of the Federal Police. Additional U.S. combat forces and advisors have also been moved closer to the front lines in support.

Although possessing an enormous manpower advantage over the Daesh defenders, the ISF had managed to secure only one-quarter of the city in two months of combat. This is likely because the only ISF elements that have demonstrated any offensive combat effectiveness have been the CTS and the Popular Mobilization Forces (PMF, or Hash’d al Shaabi), the Iraqi Shi’a militia mobilized by Grand Ayatollah Ali Sistani in 2014. PMF brigades hold the western outskirts of the city, but thus far have been restrained from entering it for fear of provoking sectarian violence with the mostly Sunni residents.

Daesh defenders, believed to number only 3,000-5,000 at the outset of the battle, have had the luxury of fighting against only one axis of advance and within urban terrain filled with trapped civilians, whom they have used as human shields. They mounted a particularly effective counterattack against the CTS using vehicle-borne improvised explosive devices (VBIEDs), which halted the initial offensive in mid-December. ISF casualties appear to be concentrated in the elite 1st Special Operations Brigade (the so-called “Golden Division”) of the CTS. An unnamed Pentagon source was quoted as stating that the Golden Division’s maneuver battalions had incurred “upwards of 50 percent casualties,” which, if sustained, would have rendered it combat ineffective in less than a month.

The Iraqi government has come to rely on the Golden Division to generate reliable offensive combat power. It spearheaded the attacks that recovered Tikrit, Ramadi, and Fallujah earlier in the year. Originally formed in 2004 as the non-sectarian Iraqi Special Operations Forces brigade, the Golden Division was amalgamated into the CTS in 2007 along with specialized counterterrorism and national police elements. Although intended for irregular warfare, the CTS appears to be the only Iraqi military force capable of effective conventional offensive combat operations, likely due to a higher level of combat effectiveness relative to the rest of the ISF, as well as its interoperability with U.S. and Coalition supporting forces.

Historically, the Iraqi Army has not demonstrated a high level of overall combat effectiveness. Trevor Dupuy’s analysis of the performance of the various combatants in the 1973 Arab-Israeli War ranked the Iraqi Army behind that of the Israelis, Jordanians, Egyptians, and Syrians. He estimated the Israelis to have a 3.43 to 1.00 combat effectiveness advantage over the Iraqis in 1973. Dupuy credited the Iraqis with improved effectiveness following the 1980-88 Iran-Iraq War in his pre-war estimate of the outcome of the 1990-91 Gulf War. This turned out to be erroneous; overestimation of Iraqi combat effectiveness in part led Dupuy to predict a higher casualty rate for U.S. forces than actually occurred. The ineffective performance of the Iraqi Army in 2003 should not have surprised anyone.

The relative success of the CTS can be seen as either indicative of the general failure of the decade-long U.S. effort to rebuild an effective Iraqi military establishment, or as an exemplary success of the U.S. Special Operations Forces model for training and operating with indigenous military forces. Or both.

What Is The Relationship Between Rate of Fire and Military Effectiveness?

Over at his Best Defense blog, Tom Ricks recently posed an interesting question: Is rate of fire no longer a key metric in assessing military effectiveness?

Rate of fire doesn’t seem to be important in today’s militaries. I mean, everyone can go “full auto.” Rather, the problem seems to me firing too much and running out of ammunition.

I wonder if this affects how contemporary military historians look at the tactical level of war. Throughout most of history, the problem, it seems to me, was how many rocks, spears, arrows or bullets you could get off. Hence the importance of drill, which was designed to increase the volume of infantry fire (and to reduce people walking off the battlefield when they moved back to reload).

There are several ways to address this question from a historical perspective, but one place to start is to look at how rate of fire relates historically to combat.

Rate of fire is one of several measures of a weapon’s ability to inflict damage, i.e. its lethality. In the early 1960s, Trevor Dupuy and his associates at the Historical Evaluation and Research Organization (HERO) assessed whether historical trends in increasing weapon lethality were changing the nature of combat. To measure this, they developed a methodology for scoring the inherent lethality of a given weapon, the Theoretical Lethality Index (TLI). TLI is the product of five factors:

  • rate of fire
  • targets per strike
  • range factor
  • accuracy
  • reliability

In the TLI methodology, rate of fire is defined as the number of effective strikes a weapon can deliver under ideal conditions in increments of one hour, and assumes no logistical limitation.
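Because the TLI is a straight product of the five factors listed above, it is easy to express in code. This is a minimal sketch; the attribute names are mine, and actual input values would have to come from Dupuy's published scoring tables:

```python
from dataclasses import dataclass

@dataclass
class Weapon:
    """Inputs to Dupuy's Theoretical Lethality Index (TLI)."""
    rate_of_fire: float        # effective strikes per hour, ideal conditions
    targets_per_strike: float  # targets each strike can hit
    range_factor: float        # scored from effective range
    accuracy: float            # fraction of strikes that hit
    reliability: float         # fraction of time the weapon functions

    @property
    def tli(self) -> float:
        # The TLI is simply the product of the five factors.
        return (self.rate_of_fire * self.targets_per_strike
                * self.range_factor * self.accuracy * self.reliability)
```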

As measured by TLI, increased rates of fire do indeed increase weapon lethality. The TLI of an early 20th century semi-automatic rifle is nearly five times higher than that of a mid-19th century muzzle-loaded rifle due to its higher rate of fire. Despite having lower accuracy and reliability, a World War II-era machine gun has 10 times the TLI of a semi-automatic rifle due to its rate of fire. The rate of fire of small arms has not increased since the early-to-mid 20th century, and the assault rifle, adopted by modern armies following World War II, remains the standard infantry weapon of the early 21st century.

[Attrition, Fig. 11]

Rate of fire is just one of many factors that can influence a weapon’s lethality, however. Artillery has much higher TLI values than small arms despite lower rates of fire, for the obvious reasons that artillery has far greater range than small arms and that each round of ammunition can hit multiple targets per strike.

There are other methods for scoring weapon lethality but the TLI provides a logical and consistent methodology for comparing weapons to each other. Through the TLI, Dupuy substantiated the observation that indeed, weapons have become more lethal over time, particularly in the last century.

But if weapons have become more lethal, has combat become bloodier? No. Dupuy and his colleagues also discovered that, counterintuitively, the average casualty rates in land combat have been declining since the 17th century. Combat casualty rates did climb in the early and mid-19th century, but fell again precipitously from the later 19th century through the end of the 20th.

[Attrition, Fig. 13]

The reason, Dupuy determined, was because armies have historically adapted to increases in weapon lethality by dispersing in greater depth on the battlefield, decentralizing tactical decision-making and enhancing mobility, and placing a greater emphasis on combined arms tactics. The area occupied by 100,000 soldiers increased 4,000 times between antiquity and the late 20th century. Average ground force dispersion increased by a third between World War II and the 1973 Yom Kippur War, and he estimated it had increased by another quarter by 1990.

[Attrition, Fig. 14]

Simply put, even as weapons become more deadly, there are fewer targets on the battlefield for them to hit. Through the mid-19th century, the combination of low rates of fire and relatively shorter range required the massing of infantry fires in order to achieve lethal effect. Before 1850, artillery caused more battlefield casualties than infantry small arms. This ratio changed due to the increased rates of fire and range of the rifled and breech-loading weapons introduced in the 1850s and 1860s. The majority of combat casualties in conflicts of the mid-to-late 19th century were inflicted by infantry small arms.

[Attrition, Fig. 19]

The lethality of modern small arms combined with machine guns led to further dispersion and the decentralization of tactical decision-making in early 20th century warfare. The increased destructiveness of artillery, due to improved range and more powerful ammunition, coupled with the invention of the field telephone and indirect fire techniques during World War I, restored the long arm to its role as king of the battlefield.

[Attrition, Fig. 35]

Dupuy represented this historical relationship between lethality and dispersion on the battlefield by applying a dispersion factor to TLI values to obtain what he termed the Operational Lethality Index (OLI). By accounting for these effects, OLI values are a good theoretical approximation of relative weapon effectiveness.
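In code, the adjustment is a simple division. The sketch below assumes, per Dupuy's approach as I read it, that an era-specific dispersion factor is divided into the TLI; the factor values themselves would come from his published tables:

```python
def oli(tli: float, dispersion_factor: float) -> float:
    """Operational Lethality Index: the TLI adjusted for the battlefield
    dispersion typical of the era in which the weapon is employed."""
    return tli / dispersion_factor
```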

[Numbers, Predictions and War, Fig. 2-5]

Although little empirical research has been done on this question, it seems logical that the trend toward greater use of precision-guided weapons is at least a partial response to the so-called “empty battlefield.” The developers of the Third Offset Strategy postulated that the U.S. emphasis on developing precision weaponry in the 1970s (the “second offset”) was a calculated response to the Soviet emphasis on mass firepower. The goal of modern precision weapons is “one shot, one kill,” where a reduced rate of fire is compensated for by greater range and accuracy. Such weapons have become sufficiently lethal that the best way to survive on a modern battlefield is to not be seen.

At least, that was the conventional wisdom until recently. The U.S. Army in particular is watching how the Ukrainian separatist forces and their Russian enablers are making use of new artillery weapons, drone and information technology, and tactics to engage targets with mass fires. Some critics have alleged that the U.S. artillery arm has atrophied during the Global War on Terror and may no longer be capable of overmatching potential adversaries. It is not yet clear whether there will be a real competition between mass and precision fires on the battlefields of the near future, but it is possible that it signals yet another shift in the historical relationship between lethality, mobility, and dispersion in combat.

SOURCES

Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995)

_____. Understanding War: History and Theory of Combat (New York: Paragon House, 1987)

_____. The Evolution of Weapons and Warfare (Indianapolis, IN: The Bobbs-Merrill Company, Inc., 1980)

_____. Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979)

Tank Loss Rates in Combat: Then and Now

As the U.S. Army and the national security community seek a sense of what potential conflicts in the near future might be like, they see the distinct potential for large tank battles. Will technological advances change the character of armored warfare? Perhaps, but it seems more likely that the next big tank battles – if they occur – will resemble those from the past.

One aspect of future battle likely to be of great interest to military planners is tank loss rates in combat. In a previous post, I looked at the analysis done by Trevor Dupuy on the relationship between tank and personnel losses in the U.S. experience during World War II. Today, I will take a look at his analysis of historical tank loss rates.

In general, Dupuy identified that a proportional relationship exists between personnel casualty rates in combat and losses in tanks, guns, trucks, and other equipment. (His combat attrition verities are discussed here.) Looking at World War II division and corps-level combat engagement data in 1943-1944 between U.S., British and German forces in the west, and German and Soviet forces in the east, Dupuy found similar patterns in tank loss rates.

[Attrition, Fig. 58]

In combat between two division/corps-sized, armor-heavy forces, Dupuy found that tank loss rates were likely to be five to seven times the personnel casualty rate for the winning side, and seven to 10 times for the losing side. Additionally, defending units suffered lower loss rates than attackers; if an attacking force suffered tank losses at seven times its personnel casualty rate, the defending force’s tank losses would be around five times its rate.

Dupuy also discovered that the ratio of tank to personnel losses appeared to be a function of the proportion of tanks to infantry in a combat force. Units with fewer than six tanks per 1,000 troops could be considered armor-supporting, while those with a density of more than six tanks per 1,000 troops were armor-heavy. Armor-supporting units suffered lower tank casualty rates than armor-heavy units.
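As a rough illustration, these relationships reduce to a back-of-the-envelope estimator. The multipliers below are midpoints I have assumed from the ranges quoted above, not Dupuy's own coefficients:

```python
def armor_classification(tanks: int, troops: int) -> str:
    """Classify a force by the six-tanks-per-1,000-troops threshold above."""
    return "armor-heavy" if tanks * 1000 / troops > 6 else "armor-supporting"

def estimated_tank_loss_rate(personnel_casualty_rate: float, attacker: bool) -> float:
    """Tank loss rate as a multiple of the personnel casualty rate, using
    assumed midpoint multipliers: ~7x for attackers, ~5x for defenders."""
    return (7.0 if attacker else 5.0) * personnel_casualty_rate

print(armor_classification(tanks=250, troops=15000))   # armor-heavy
print(estimated_tank_loss_rate(0.01, attacker=True))   # 0.07
```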

[Attrition, Fig. 59]

Dupuy looked at tank loss rates in the 1973 Arab-Israeli War and found that they were consistent with World War II experience.

What does this tell us about possible tank losses in future combat? That is a very good question. One reasonably certain guess is that future tank battles will probably not involve forces of World War II division or corps size. The opposing forces will be brigade combat teams, or more likely, battalion-sized elements.

Dupuy did not have as much data on tank combat at this level, and what he did have indicated a great deal more variability in loss rates. Examples of this can be found in the tables below.

[Attrition, Figs. 53 and 54]

These data points showed some consistency, with a mean of 6.96 and a standard deviation of 6.10, which is comparable to that for division/corps loss rates. Personnel casualty rates are higher and much more variable than those at the division level, however. Dupuy stated that more research was necessary to establish a higher degree of confidence and relevance of the apparent battalion tank loss ratio. So one potentially fruitful area of research with regard to near future combat could very well be a renewed focus on historical experience.

NOTES

Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), pp. 41-43, 81-90, 102-103.

Technology, Eggs, and Risk (Oh, My)

Tokyo, Japan — Eggs in a basket — Image by © JIRO/Corbis

In my last post, on the potential for the possible development of quantum radar to undermine the U.S. technological advantage in stealth technology, I ended by asking this question:

The basic assumption behind the Third Offset Strategy is that the U.S. can innovate and adopt technological capabilities fast enough to maintain or even expand its current military superiority. Does the U.S. really have enough of a scientific and technological development advantage over its rivals to validate this assumption?

My colleague, Chris, has suggested that I expand on the thinking behind this. Here goes:

The lead times needed for developing advanced weapons and the costs involved in fielding them make betting on technological innovation as a strategy seem terribly risky. In his 1980 study of the patterns of weapon technology development, The Evolution of Weapons and Warfare, Trevor Dupuy noted that there is a clear historical pattern of a period of 20-30 years between the invention of a new weapon and its use in combat in a tactically effective way. For example, practical armored fighting vehicles were first developed in 1915 but they were not used fully effectively in battle until the late 1930s.

The examples I had in mind when I wrote my original post were the F-35 Joint Strike Fighter (JSF) and the Littoral Combat Ship (LCS), both of which derive much, if not most, of their combat power from being stealthy. If that capability were to be negated, even partially, by a technological breakthrough or counter by a potential adversary, then 20+ years of development time and hundreds of billions of dollars would have been essentially wasted. If either or both weapons systems were rendered ineffective in the middle of a national emergency, neither could be quickly retooled nor replaced. The potential repercussions could be devastating.

I reviewed the development history of the F-35 in a previous post. Development began in 2001 and the Air Force declared the first F-35 squadron combat operational (in a limited capacity) in August 2016 (which has since been stood down for repairs). The first fully combat-capable F-35s will not be ready until 2018 at the soonest, and the entire fleet will not be ready until at least 2023. Just getting the aircraft fully operational will have taken 15-22 years, depending on how one chooses to calculate it. It will take several more years after that to fully evaluate the F-35 in operation and develop tactics, techniques, and procedures to maximize its effectiveness in combat. The lifetime cost of the F-35 has been estimated at $1.5 trillion, which is likely to be another underestimate.

The U.S. Navy anticipated the need for ships capable of operating in shallow coastal waters in the late 1990s. Development of the LCS began in 2003, and the first ships of two variants were launched in 2006 and 2008, respectively. Two of each design have been built so far. Since then, cost overruns, developmental problems, disappointing performances at sea, and reconsideration of the ship’s role led the Navy to scale back a planned purchase of 53 LCSs to 40 at the end of 2015 to allow money to be spent on other priorities. As of July 2016, only 26 LCSs have been programmed and the Navy has been instructed to select one of the two designs to complete the class. Initial program procurement costs were $22 billion, which have now risen to $39 billion. Operating costs for each ship are currently estimated at $79 million annually, which the Navy asserts will drop when simultaneous testing and operational use ends. The Navy plans to build LCSs until the 2040s, which includes replacements for the original ten after a service life of 25 years. Even at the annual operating cost of a current U.S. Navy frigate ($59 million), a back-of-the-envelope calculation puts the lifetime cost of the LCS program at around $91 billion, all told; this is also likely an underestimate. This seems like a lot of money to spend on a weapon that the Navy intends to pull out of combat should it sustain any damage.
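To make the back-of-the-envelope arithmetic explicit: only the procurement and operating cost figures below come from the text; the fleet size and service life are my own illustrative guesses at what might have been assumed, so treat this as a sketch of the shape of the calculation, not a reconstruction of it.

```python
procurement = 39e9           # programmed procurement cost, from the text
annual_ops_per_ship = 59e6   # current frigate annual operating cost, from the text
ships = 35                   # assumed average fleet size over the program (a guess)
service_years = 25           # service life cited in the text

lifetime_cost = procurement + ships * service_years * annual_ops_per_ship
print(f"${lifetime_cost / 1e9:.0f}B")  # ≈ $91B, the ballpark figure in the text
```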

It would not take a technological breakthrough as singular as quantum radar to degrade the effectiveness of U.S. stealth technology, either. The Russians claim that they already possess radars that can track U.S. stealth aircraft. U.S. sources essentially concede this, but point out that tracking a stealth platform does not mean that it can be attacked successfully. Obtaining a track sufficient to target involves other technological capabilities that are susceptible to U.S. electronic warfare capabilities. U.S. stealth aircraft already need to operate in conjunction with existing EW platforms to maintain their cloaked status. Even if quantum radar proves infeasible, the game over stealth is already afoot.

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what these are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results, despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge backing these combat theories and concepts was identified as the “base of sand” problem, originally pointed out by Trevor Dupuy, among others, a long time ago. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but the problem persists to this day.
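For readers who have not encountered them, the Lanchester equations mentioned above are a pair of coupled differential equations; in the “square law” variant, each side's strength declines at a rate proportional to the other side's strength. A minimal numerical sketch (the coefficients and starting strengths are invented for illustration):

```python
def lanchester_square(a: float, b: float, alpha: float, beta: float,
                      dt: float = 0.01) -> tuple[float, float]:
    """Euler integration of dA/dt = -beta*B, dB/dt = -alpha*A until one
    side is annihilated; returns the final strengths."""
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# With equal effectiveness, the larger side wins with roughly
# sqrt(1000^2 - 800^2) = 600 effectives remaining.
print(lanchester_square(a=1000, b=800, alpha=1.0, beta=1.0))
```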

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

U.S. Tank Losses and Crew Casualties in World War II

In his 1990 book Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War, Trevor Dupuy took a look at the relationship between tank losses and crew casualties in the U.S. 1st Army between June 1944 and May 1945 (pp. 80-81). The data sampled included 797 medium (averaging 5 crewmen) and 101 light (averaging 4 crewmen) tanks. For each tank loss, an average of one crewman was killed or wounded. Interestingly, although gunfire accounted for the most tank and crew casualties, infantry anti-tank rockets (such as the Panzerfaust) inflicted 13% of the tank losses, but caused 21% of the crew losses.

[Attrition, Fig. 50]

Casualties were evenly distributed among the crew positions.

[Attrition, Fig. 51]

Whether or not a destroyed tank caught fire made a big difference for the crew. Only 40% of the tanks in the sample burned, but casualties were distributed evenly between the tanks that burned and those that did not. This was because the casualty rate was higher in the tanks that caught fire (1.28 crew casualties per tank) than in those that did not (0.78 casualties per tank).
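The even split is easy to verify from the figures given; a quick check, using only the numbers in the text:

```python
burned_share, burned_rate = 0.40, 1.28      # 40% of tanks burned, 1.28 casualties each
unburned_share, unburned_rate = 0.60, 0.78  # 60% did not burn, 0.78 casualties each

burned = burned_share * burned_rate         # 0.512 casualties per tank lost
unburned = unburned_share * unburned_rate   # 0.468 casualties per tank lost
print(f"{burned / (burned + unburned):.0%} of casualties in burned tanks")  # 52%
```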

[Attrition, Fig. 52]

Dupuy found the relationship between tank losses and casualties to be straightforward and obvious. This relationship would not be so simple when viewed at the battalion level. More on that in a future post [Tank Loss Rates in Combat: Then and Now].

The Military Conflict Institute (TMCI) Will Meet in October

The Military Conflict Institute (the website has not been recently updated) will hold its 58th General Working Meeting from 3-5 October 2016, hosted by the Institute for Defense Analyses in Alexandria, Virginia. It will feature discussions and presentations focused on war termination in likely areas of conflict in the near future, such as Egypt, Turkey, North Korea, Iran, Saudi Arabia, Kurdistan, and Israel. There will be presentations on related and general military topics as well.

TMCI was founded in 1979 by Dr. Donald S. Marshall and Trevor Dupuy. They were concerned by the inability of existing Defense Department combat models to produce results that were consistent or rooted in historical experience. The organization is a non-profit, interdisciplinary, informal group that avoids government or institutional affiliation in order to maintain an independent perspective and voice. Its objective is to advance public understanding of organized warfare in all its aspects. Most of the initial members were drawn from the ranks of operations analysts experienced in quantitative historical study and military operations research, but it has grown to include a diverse group of scholars, historians, students of war, soldiers, sailors, marines, airmen, and scientists. Member disciplines range from military science to diplomacy and philosophy.

For agenda information, contact Roger Mickelson TMCI6@aol.com. For joining instructions, contact Rosser Bobbitt rbobbitt@ida.org. Attendance is subject to approval.