
The Great 3-1 Rule Debate

[This piece was originally posted on 13 July 2016.]

Trevor Dupuy’s article cited in my previous post, “Combat Data and the 3:1 Rule,” was the final salvo in a roaring, multi-year debate between two highly regarded members of the U.S. strategic and security studies academic communities, political scientist John Mearsheimer and military analyst/polymath Joshua Epstein. Carried out primarily in the pages of the academic journal International Security, Epstein and Mearsheimer debated the validity of the 3-1 rule and other analytical models with respect to the NATO/Warsaw Pact military balance in Europe in the 1980s. Epstein cited Dupuy’s empirical research in support of his criticism of Mearsheimer’s reliance on the 3-1 rule. In turn, Mearsheimer questioned Dupuy’s data and conclusions to refute Epstein. Dupuy’s article defended his research and pointed out the errors in Mearsheimer’s assertions. With the publication of Dupuy’s rebuttal, the International Security editors called a time out on the debate thread.

The Epstein/Mearsheimer debate was itself part of a larger political debate over U.S. policy toward the Soviet Union during the administration of Ronald Reagan. This interdisciplinary argument, which has since become legendary in security and strategic studies circles, drew in some of the biggest names in these fields, including Eliot Cohen, Barry Posen, the late Samuel Huntington, and Stephen Biddle. As Jeffrey Friedman observed,

These debates played a prominent role in the “renaissance of security studies” because they brought together scholars with different theoretical, methodological, and professional backgrounds to push forward a cohesive line of research that had clear implications for the conduct of contemporary defense policy. Just as importantly, the debate forced scholars to engage broader, fundamental issues. Is “military power” something that can be studied using static measures like force ratios, or does it require a more dynamic analysis? How should analysts evaluate the role of doctrine, or politics, or military strategy in determining the appropriate “balance”? What role should formal modeling play in formulating defense policy? What is the place for empirical analysis, and what are the strengths and limitations of existing data?[1]

It is well worth the time to revisit the contributions to the 1980s debate. I have included a bibliography below that is not exhaustive, but it is a place to start. The collapse of the Soviet Union and the end of the Cold War diminished the intensity of the debates, which simmered through the 1990s and then were obscured during the counterterrorism/counterinsurgency conflicts of the post-9/11 era. It is possible that the challenges posed by China and Russia, amidst the ongoing “hybrid” conflict in Syria and Iraq, may revive interest in interrogating the bases of military analyses in the U.S. and the West. It is a discussion that is long overdue and potentially quite illuminating.

NOTES

[1] Jeffrey A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies 20 (2011)

BIBLIOGRAPHY

(Note: Some of these are behind paywalls, but some are available in PDF format. Mearsheimer has made many of his publications freely available on his website.)

John J. Mearsheimer, “Why the Soviets Can’t Win Quickly in Central Europe,” International Security 7, no. 1 (Summer 1982)

Samuel P. Huntington, “Conventional Deterrence and Conventional Retaliation in Europe,” International Security 8, no. 3 (Winter 1983/84)

Joshua M. Epstein, Strategy and Force Planning (Washington, DC: Brookings, 1987)

Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security 12, no. 4 (Spring 1988)

John J. Mearsheimer, “Numbers, Strategy, and the European Balance,” International Security 12, no. 4 (Spring 1988)

Stephen Biddle, “The European Conventional Balance,” Survival 30, no. 2 (March/April 1988)

Eliot A. Cohen, “Toward Better Net Assessment: Rethinking the European Conventional Balance,” International Security 13, no. 1 (Summer 1988)

Joshua M. Epstein, “The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security Studies,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, “Assessing the Conventional Balance,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, Barry R. Posen, and Eliot A. Cohen, “Correspondence: Reassessing Net Assessment,” International Security 13, no. 4 (Spring 1989)

Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” International Security 14, no. 1 (Summer 1989)

Stephen Biddle et al., Defense at Low Force Levels (Alexandria, VA: Institute for Defense Analyses, 1991)

Force Ratios in Conventional Combat

American soldiers of the 117th Infantry Regiment, Tennessee National Guard, part of the 30th Infantry Division, move past a destroyed American M5A1 “Stuart” tank on their march to recapture the town of St. Vith during the Battle of the Bulge, January 1945. [Wikipedia]
[This piece was originally posted on 16 May 2017.]

This post is a partial response to questions from one of our readers (Stilzkin). On the subject of force ratios in conventional combat, I know of no detailed discussion of the phenomenon published to date, although it was clearly addressed by Clausewitz. For example:

At Leuthen Frederick the Great, with about 30,000 men, defeated 80,000 Austrians; at Rossbach he defeated 50,000 allies with 25,000 men. These however are the only examples of victories over an opponent two or even nearly three times as strong. Charles XII at the battle of Narva is not in the same category. The Russians at that time could hardly be considered as Europeans; moreover, we know too little about the main features of that battle. Bonaparte commanded 120,000 men at Dresden against 220,000—not quite half. At Kolin, Frederick the Great’s 30,000 men could not defeat 50,000 Austrians; similarly, victory eluded Bonaparte at the desperate battle of Leipzig, though with his 160,000 men against 280,000, his opponent was far from being twice as strong.

These examples may show that in modern Europe even the most talented general will find it very difficult to defeat an opponent twice his strength. When we observe that the skill of the greatest commanders may be counterbalanced by a two-to-one ratio in the fighting forces, we cannot doubt that superiority in numbers (it does not have to be more than double) will suffice to assure victory, however adverse the other circumstances.

and:

If we thus strip the engagement of all the variables arising from its purpose and circumstance, and disregard the fighting value of the troops involved (which is a given quantity), we are left with the bare concept of the engagement, a shapeless battle in which the only distinguishing factor is the number of troops on either side.

These numbers, therefore, will determine victory. It is, of course, evident from the mass of abstractions I have made to reach this point that superiority of numbers in a given engagement is only one of the factors that determines victory. Superior numbers, far from contributing everything, or even a substantial part, to victory, may actually be contributing very little, depending on the circumstances.

But superiority varies in degree. It can be two to one, or three or four to one, and so on; it can obviously reach the point where it is overwhelming.

In this sense superiority of numbers admittedly is the most important factor in the outcome of an engagement, as long as it is great enough to counterbalance all other contributing circumstances. It thus follows that as many troops as possible should be brought into the engagement at the decisive point.

And, in relation to making a combat model:

Numerical superiority was a material factor. It was chosen from all elements that make up victory because, by using combinations of time and space, it could be fitted into a mathematical system of laws. It was thought that all other factors could be ignored if they were assumed to be equal on both sides and thus cancelled one another out. That might have been acceptable as a temporary device for the study of the characteristics of this single factor; but to make the device permanent, to accept superiority of numbers as the one and only rule, and to reduce the whole secret of the art of war to a formula of numerical superiority at a certain time and a certain place was an oversimplification that would not have stood up for a moment against the realities of life.

Force ratios were discussed in various versions of FM 105-5 Maneuver Control, but as far as I can tell, this material was not analytically developed. It was a set of rules pulled together by a group of anonymous writers for the sake of being able to adjudicate wargames.

The only detailed quantification of force ratios was provided in Numbers, Predictions and War by Trevor Dupuy. Again, these were modeling constructs, not something that was analytically developed (although there was significant background research done and the model was validated multiple times). He then discusses the subject in his book Understanding War, which I consider the most significant of the 90+ books that he wrote or co-authored.

The only analytically based discussion of force ratios that I am aware of (or at least can think of at this moment) is my discussion in my upcoming book War by Numbers: Understanding Conventional Combat. It is the second chapter of the book: https://dupuyinstitute.dreamhosters.com/2016/02/17/war-by-numbers-iii/

In this book, I assembled the force ratios required to win a battle based upon a large number of cases from World War II division-level combat. For example (page 18 of the manuscript):

I did this for the ETO, for the battles of Kharkov and Kursk (Eastern Front 1943, divided by when the Germans were attacking and when the Soviets were attacking), and for the PTO (Manila and Okinawa 1945).

There is more that can be done on this, and we do have the data assembled to do it, but as always, I have not gotten around to it. This is why I am already considering a War by Numbers II, thinking about all the subjects I did not cover in sufficient depth in the first book.
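For readers who want to experiment with this kind of tabulation themselves, here is a minimal sketch of the approach: bin engagements by attacker-to-defender force ratio and compute the fraction of engagements the attacker won in each bin. The engagement records, bin boundaries, and field layout below are hypothetical placeholders chosen purely for illustration; they are not data from War by Numbers or the TDI engagement databases.

```python
from collections import defaultdict

# Hypothetical engagement records: (attacker strength, defender strength, attacker won?)
# These values are purely illustrative, not actual WWII division-level data.
engagements = [
    (12000, 10000, True),
    (15000, 6000, True),
    (9000, 11000, False),
    (20000, 7500, True),
    (8000, 8500, False),
]

# Force-ratio bins (lower bound inclusive, upper bound exclusive); boundaries are arbitrary.
bins = [(0.0, 1.0), (1.0, 1.5), (1.5, 2.0), (2.0, 3.0), (3.0, float("inf"))]

def bin_label(ratio):
    """Return a readable label for the force-ratio bin containing `ratio`."""
    for lo, hi in bins:
        if lo <= ratio < hi:
            return f"{lo:.1f}+" if hi == float("inf") else f"{lo:.1f}-{hi:.1f}"
    return "unbinned"

tallies = defaultdict(lambda: [0, 0])  # label -> [attacker wins, total engagements]
for atk, dfd, attacker_won in engagements:
    label = bin_label(atk / dfd)
    tallies[label][1] += 1
    if attacker_won:
        tallies[label][0] += 1

for label, (wins, total) in sorted(tallies.items()):
    print(f"force ratio {label}: attacker won {wins}/{total} ({100 * wins / total:.0f}%)")
```

With a large enough set of engagements, a table like this shows directly what force ratio was historically associated with a given probability of the attacker winning, which is the kind of result presented in the book.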

Dupuy’s Verities: Initiative

German Army soldiers advance during the Third Battle of Kharkov in early 1943. This was the culmination of a counteroffensive by German Field Marshal Erich von Manstein that blunted the Soviet offensive drive following the recapture of Stalingrad in late 1942. [Photo: KonchitsyaLeto/Reddit]

The fifth of Trevor Dupuy’s Timeless Verities of Combat is:

Initiative permits application of preponderant combat power.

From Understanding War (1987):

The importance of seizing and maintaining the initiative has not declined in our times, nor will it in the future. This has been the secret of success of all of the great captains of history. It was as true of MacArthur as it was of Alexander the Great, Grant or Napoleon. Some modern Soviet theorists have suggested that this is even more important now in an era of high technology than formerly. They may be right. This has certainly been a major factor in the Israeli victories over the Arabs in all of their wars.

Given the prominent role initiative has played in warfare historically, it is curious that it is not a principle of war in its own right. However, it could be argued that it is sufficiently embedded in the principles of the offensive and maneuver that it does not need to be articulated separately. After all, the traditional means of seizing the initiative on the battlefield is through a combination of the offensive and maneuver.

Initiative is a fundamental aspect of current U.S. Army doctrine, as stated in ADP 3-0 Operations (2017):

The central idea of operations is that, as part of a joint force, Army forces seize, retain, and exploit the initiative to gain and maintain a position of relative advantage in sustained land operations to prevent conflict, shape the operational environment, and win our Nation’s wars as part of unified action.

For Dupuy, the specific connection between initiative and combat power is likely why he chose to include it as a verity in its own right. Combat power was the central concept in his theory of combat, and initiative was not just the basic means of achieving a preponderance of combat power through superior force strength (i.e., numbers), but also of harnessing the effects of the circumstantial variables of combat that multiply combat power (i.e., surprise, mobility, vulnerability, combat effectiveness). It was precisely the exploitation of this relationship between initiative and combat power that allowed numerically inferior German and Israeli combat forces to succeed time and again against superior numbers of Soviet and Arab opponents.

Using initiative to apply preponderant combat power in battle is the primary way the effects of maneuver (to “gain and maintain a position of relative advantage”) are abstracted in Dupuy’s Quantified Judgement Model (QJM)/Tactical Numerical Deterministic Model (TNDM). The QJM/TNDM itself is primarily a combat attrition adjudicator that determines combat outcomes through calculations of relative combat power. The numerical strengths of the opposing forces engaged, as determined by maneuver, can be entered into the QJM/TNDM and then modified by the applicable circumstantial variables of combat related to maneuver to obtain a calculation of relative combat power. As another of Dupuy’s verities states, “superior combat power always wins.”
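As a rough conceptual illustration of that idea (not the actual QJM/TNDM equations, variable names, or factor values, which are defined in Dupuy’s published work), combat power can be sketched as force strength multiplied by a set of circumstantial multipliers, with the engagement then adjudicated on the ratio of the two sides’ results. The factor names and values below are assumptions chosen purely for illustration.

```python
from math import prod

def combat_power(strength, multipliers):
    """Force strength scaled by multiplicative circumstantial factors.

    A conceptual sketch only; Dupuy's QJM/TNDM uses its own specific
    variables, values, and formulas.
    """
    return strength * prod(multipliers.values())

# Hypothetical engagement: a smaller attacker leveraging initiative-related factors.
attacker = combat_power(40000, {"surprise": 1.5, "mobility": 1.2, "effectiveness": 1.2})
defender = combat_power(60000, {"posture": 1.3})

ratio = attacker / defender
print(f"relative combat power (attacker/defender): {ratio:.2f}")
print("attacker favored" if ratio > 1 else "defender favored")
```

In this hypothetical case the numerically inferior attacker ends up with the greater combat power, because the initiative-related multipliers more than offset the difference in raw strength; that is the relationship the verity describes.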

What Does Lethality Mean In Warfare?

In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, tasks of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”

Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”

After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”

She uses this as a jumping-off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:

Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.

To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”

It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, it supplies the credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”

Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about the way it relates to warfare. It is definitely worth your time to read.

What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)

Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. While noting that his definition was too narrow for the purposes of her inquiry, the historical relationship between lethality, casualties, and dispersion on the battlefield Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)

Human Factors In Warfare: Fear In A Lethal Environment

Chaplain (Capt.) Emil Kapaun (right) and Capt. Jerome A. Dolan, a medical officer with the 8th Cavalry Regiment, 1st Cavalry Division, carry an exhausted Soldier off the battlefield in Korea, early in the war. Kapaun was famous for exposing himself to enemy fire. When his battalion was overrun by a Chinese force in November 1950, rather than take an opportunity to escape, Kapaun voluntarily remained behind to minister to the wounded. In 2013, Kapaun posthumously received the Medal of Honor for his actions in the battle and later in a prisoner of war camp, where he died in May 1951. [Photo Credit: Courtesy of the U.S. Army Center of Military History]

[This piece was originally published on 27 June 2017.]

Trevor Dupuy’s theories about warfare were sometimes criticized by those who thought his scientific approach neglected the influence of the human element and chance, and amounted to an attempt to reduce war to mathematical equations. Anyone who has read Dupuy’s work knows this is not, in fact, the case.

Moral and behavioral (i.e., human) factors were central to Dupuy’s research and theorizing about combat. He wrote about them in detail in his books. In 1989, he presented a paper titled “The Fundamental Information Base for Modeling Human Behavior in Combat” at a symposium on combat modeling; it provided a clear, succinct summary of his thinking on the topic.

He began by concurring with Carl von Clausewitz’s assertion that

[P]assion, emotion, and fear [are] the fundamental characteristics of combat… No one who has participated in combat can disagree with this Clausewitzean emphasis on passion, emotion, and fear. Without doubt, the single most distinctive and pervasive characteristic of combat is fear: fear in a lethal environment.

Despite the ubiquity of fear on the battlefield, Dupuy pointed out that there is no way to study its impact except through the historical record of combat in the real world.

We cannot replicate fear in laboratory experiments. We cannot introduce fear into field tests. We cannot create an environment of fear in training or in field exercises.

So, to study human reaction in a battlefield environment we have no choice but to go to the battlefield, not the laboratory, not the proving ground, not the training reservation. But, because of the nature of the very characteristics of combat which we want to study, we can’t study them during the battle. We can only do so retrospectively.

We have no choice but to rely on military history. This is why military history has been called the laboratory of the soldier.

He also pointed out that using military history analytically has its own pitfalls and must be handled carefully lest it be used to draw misleading or inaccurate conclusions.

I must also make clear my recognition that military history data is far from perfect, and that–even at best—it reflects the actions and interactions of unpredictable human beings. Extreme caution must be exercised when using or analyzing military history. A single historical example can be misleading for either of two reasons: (a) The data is inaccurate, or (b) The example may be true, but also be untypical.

But, when a number of respectable examples from history show consistent patterns of human behavior, then we can have confidence that behavior in accordance with the pattern is typical, and that behavior inconsistent with the pattern is either untypical, or is inaccurately represented.

He then stated very concisely the scientific basis for his method.

My approach to historical analysis is actuarial. We cannot predict the future in any single instance. But, on the basis of a large set of reliable experience data, we can predict what is likely to occur under a given set of circumstances.

Dupuy listed ten combat phenomena that he believed were directly or indirectly related to human behavior. He considered the list comprehensive, if not exhaustive.

I shall look at Dupuy’s treatment of each of these in future posts.

Artillery Survivability In Modern Combat

The U.S. Army’s M109A6 Paladin 155 mm Self-Propelled Howitzer. [U.S. Army]
[This piece was originally published on 17 July 2017.]

Much attention is being given in the development of the U.S. joint concept of Multi-Domain Battle (MDB) to the implications of recent technological advances in long-range precision fires. It seems most of the focus is being placed on exploring the potential for cross-domain fires as a way of coping with the challenges of anti-access/area denial strategies employing long-range precision fires. Less attention appears to be given to assessing the actual combat effects of such weapons. The prevailing assumption is that because of the increasing lethality of modern weapons, battle will be bloodier than it has been in recent experience.

I have taken a look in previous posts at how the historical relationship identified by Trevor Dupuy between weapon lethality, battlefield dispersion, and casualty rates argues against this assumption with regard to personnel attrition and tank loss rates. What about artillery loss rates? Will long-range precision fires make ground-based long-range precision fire platforms themselves more vulnerable? Historical research suggests that trend was already underway before the advent of the new technology.

In 1976, Trevor Dupuy and the Historical Evaluation and Research Organization (HERO; one of TDI’s corporate ancestors) conducted a study sponsored by Sandia National Laboratory titled “Artillery Survivability in Modern War.” (PDF) The study focused on historical artillery loss rates and the causes of those losses. It drew upon quantitative data from the 1973 Arab-Israeli War, the Korean War, and the Eastern Front during World War II.

Conclusions

1. In the early wars of the 20th Century, towed artillery pieces were relatively invulnerable, and they were rarely severely damaged or destroyed except by very infrequent direct hits.

2. This relative invulnerability of towed artillery resulted in general lack of attention to the problems of artillery survivability through World War II.

3. The lack of effective hostile counter-artillery resources in the Korean and Vietnam wars contributed to continued lack of attention to the problem of artillery survivability, although armies (particularly the US Army) were increasingly relying on self-propelled artillery pieces.

4. Estimated Israeli loss statistics of the October 1973 War suggest that, because of its size and characteristics, self-propelled artillery is more vulnerable to modern counter-artillery means than was towed artillery in that and previous wars; this greater historical physical vulnerability of self-propelled weapons is consistent with recent empirical testing by the US Army.

5. The increasing physical vulnerability of modern self-propelled artillery weapons is compounded by other modern combat developments, including:

a. Improved artillery counter-battery techniques and resources;
b. Improved accuracy of air-delivered munitions;
c. Increased lethality of modern artillery ammunition; and
d. Increased range of artillery and surface-to-surface missiles suitable for use against artillery.

6. Despite this greater vulnerability of self-propelled weapons, Israeli experience in the October war demonstrated that self-propelled artillery not only provides significant protection to cannoneers but also that its inherent mobility permits continued effective operation under circumstances in which towed artillery crews would be forced to seek cover, and thus be unable to fire their weapons.

7. Paucity of available processed, compiled data on artillery survivability and vulnerability limits analysis and the formulation of reliable artillery loss experience tables or formulae.

8. Tentative analysis of the limited data available for this study indicates the following:

a. In “normal” deployment, percent weapon losses by standard weight classification are in the following proportions:

b. Towed artillery losses to hostile artillery (counterbattery) appear in general to vary directly with battle intensity (as measured by percent personnel casualties per day), at a rate somewhat less than half of the percent personnel losses for units of army strength or greater; this is a straight-line relationship, or close to it; the stronger or more effective the hostile artillery is, the steeper the slope of the curve;

c. Towed artillery losses to all hostile anti-artillery means appear in general to vary directly with battle intensity at a rate about two-thirds of the percent personnel losses for units of army strength or greater; the curve rises slightly more rapidly in high-intensity combat than in normal or low-intensity combat; the stronger or more effective the hostile anti-artillery means (primarily air and counter-battery), the steeper the slope of the curve;

d. Self-propelled artillery losses appear to be generally consistent with towed losses, but at rates at least twice as great in comparison to battle intensity.

9. There are available in existing records of US and German forces in World War II, and US forces in the Korean and Vietnam Wars, unit records and reports that will permit the formulation of reliable artillery loss experience tables and formulae for those conflicts; these, with currently available (and probably improved) data from the Arab-Israeli wars, will permit the formulation of reliable artillery loss experience tables and formulae for simulations of modern combat under current and foreseeable future conditions.

The study caveated these conclusions with the following observations:

Most of the artillery weapons in World War II were towed weapons. By the time the United States had committed small but significant numbers of self-propelled artillery pieces in Europe, German air and artillery counter-battery retaliatory capabilities had been significantly reduced. In the Korean and Vietnam wars, although most American artillery was self-propelled, the enemy had little counter-artillery capability either in the air or in artillery weapons and counter-battery techniques.

It is evident from vulnerability testing of current Army self-propelled weapons that these weapons–while offering much more protection to cannoneers and providing tremendous advantages in mobility–are much more vulnerable to hostile action than are towed weapons, and that they are much more subject to mechanical breakdowns involving either the weapons mountings or the propulsion elements. Thus there cannot be a direct relationship between aggregated World War II data, or even aggregated Korean War or October War data, and current or future artillery configurations. On the other hand, the body of data from the October War, where artillery was self-propelled, is too small and too specialized by environmental and operational circumstances to serve alone as a paradigm of artillery vulnerability.

Despite the intriguing implications of this research, HERO’s proposal for follow-on work was not funded. HERO only used easily accessible primary and secondary source data for the study. It noted that much more primary source data was likely available, but that compiling it would require a significant research effort. (Research is always the expensive tent-pole in quantitative historical analysis. This seems to be why so little of it ever gets funded.) At the time of the study in 1976, no U.S. Army organization could identify any existing quantitative historical data or analysis on artillery losses, classified or otherwise. A cursory search of the Internet reveals no other such research either. Like personnel attrition and tank loss rates, artillery loss rates would seem to be another worthwhile subject for quantitative analysis as part of the ongoing effort to develop the MDB concept.
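For what it is worth, the tentative proportional relationships in conclusion 8 lend themselves to a simple back-of-the-envelope calculation. The sketch below encodes them as linear functions of the daily percent personnel casualty rate; the specific slope values are illustrative approximations of the study’s wording (“somewhat less than half,” “about two-thirds,” “at least twice as great”), not validated planning factors.

```python
def towed_losses_counterbattery(personnel_casualty_pct_per_day, slope=0.45):
    """Towed artillery percent losses per day to hostile counterbattery fire.

    Conclusion 8b: roughly linear in battle intensity, at a rate somewhat less
    than half the percent personnel casualties per day (slope is illustrative).
    """
    return slope * personnel_casualty_pct_per_day

def towed_losses_all_means(personnel_casualty_pct_per_day, slope=0.67):
    """Towed artillery percent losses per day to all hostile anti-artillery means (conclusion 8c)."""
    return slope * personnel_casualty_pct_per_day

def self_propelled_losses(personnel_casualty_pct_per_day, multiple=2.0):
    """Self-propelled losses, at least twice the towed rate for the same intensity (conclusion 8d)."""
    return multiple * towed_losses_all_means(personnel_casualty_pct_per_day)

# Example: a day of combat producing 1.5% personnel casualties for a force of army strength or greater.
intensity = 1.5
print(f"towed losses to counterbattery: ~{towed_losses_counterbattery(intensity):.2f}% per day")
print(f"towed losses to all means:      ~{towed_losses_all_means(intensity):.2f}% per day")
print(f"self-propelled losses:          ~{self_propelled_losses(intensity):.2f}% per day")
```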

Human Factors In Warfare: Suppression

Images from a Finnish Army artillery salvo fired by towed 130mm howitzers during an exercise in 2013. [Puolustusvoimat – Försvarsmakten – The Finnish Defence Forces/YouTube]
[This piece was originally posted on 24 August 2017.]

According to Trevor Dupuy, “Suppression is perhaps the most obvious and most extensive manifestation of the impact of fear on the battlefield.” As he detailed in Understanding War: History and Theory of Combat (1987),

There is probably no obscurity of combat requiring clarification and understanding more urgently than that of suppression… Suppression usually is defined as the effect of fire (primarily artillery fire) upon the behavior of hostile personnel, reducing, limiting, or inhibiting their performance of combat duties. Suppression lasts as long as the fires continue and for some brief, indeterminate period thereafter. Suppression is the most important effect of artillery fire, contributing directly to the ability of the supported maneuver units to accomplish their missions while preventing the enemy units from accomplishing theirs. (p. 251)

Official US Army field artillery doctrine makes a distinction between “suppression” and “neutralization.” Suppression is defined to be instantaneous and fleeting; neutralization, while also temporary, is relatively longer-lasting. Neutralization, the doctrine says, results when suppressive effects are so severe and long-lasting that a target is put out of action for a period of time after the suppressive fire is halted. Neutralization combines the psychological effects of suppressive gunfire with a certain amount of damage. The general concept of neutralization, as distinct from the more fleeting suppression, is a reasonable one. (p. 252)

Despite widespread acknowledgement of the existence of suppression and neutralization, the lack of interest in analyzing their effects was a source of professional frustration for Dupuy. As he commented in 1989,

The British did some interesting but inconclusive work on suppression in their battlefield operations research in World War II. In the United States I am aware of considerable talk about suppression, but very little accomplishment, over the past 20 years. In the light of the significance of suppression, our failure to come to grips with the issue is really quite disgraceful.

This lack of interest is curious, given that suppression and neutralization remain embedded in U.S. Army combat doctrine to this day. The current Army definitions are:

Suppression – In the context of the computed effects of field artillery fires, renders a target ineffective for a short period of time producing at least 3-percent casualties or materiel damage. [Army Doctrine Reference Publication (ADRP) 1-02, Terms and Military Symbols, December 2015, p. 1-87]

Neutralization – In the context of the computed effects of field artillery fires renders a target ineffective for a short period of time, producing 10-percent casualties or materiel damage. [ADRP 1-02, p. 1-65]

A particular source for Dupuy’s irritation was the fact that these definitions were likely empirically wrong. As he argued in Understanding War,

This is almost certainly the wrong way to approach quantification of neutralization. Not only is there no historical evidence that 10% casualties are enough to achieve this effect, there is no evidence that any level of losses is required to achieve the psycho-physiological effects of suppression or neutralization. Furthermore, the time period in which casualties are incurred is probably more important than any arbitrary percentage of loss, and the replacement of casualties and repair of damage are probably irrelevant. (p. 252)

Thirty years after Dupuy pointed this problem out, the construct remains enshrined in U.S. doctrine, unquestioned and unsubstantiated. Dupuy himself was convinced that suppression probably had little, if anything, to do with personnel loss rates.

I believe now that suppression is related to and probably a component of disruption caused by combat processes other than surprise, such as a communications failure. Further research may reveal, however, that suppression is a very distinct form of disruption that can be measured or estimated quite independently of disruption caused by any other phenomenon. (Understanding War, p. 251)

He had developed a hypothesis for measuring the effects of suppression, but was unable to interest anyone in the U.S. government or military in sponsoring a study on it. Suppression as a combat phenomenon remains only vaguely understood.

“Quantity Has A Quality All Its Own”: How Robot Swarms Might Change Future Combat

Humans vs. machines in the film The Matrix Revolutions (2003) [Screencap by The Matrix Wiki]

Yesterday, Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security and a prolific writer on the future of robotics and artificial intelligence, posted a fascinating argument on Twitter regarding swarms and mass in future combat.

His thread was in response to an article by Shmuel Shmuel posted on War on the Rocks, which made the case that the same computer processing technology enabling robotic vehicles, combined with old-fashioned kinetic weapons (i.e., anti-aircraft guns), offered a cost-effective counter to swarms.

Scharre agreed that robotic drones are indeed vulnerable to such countermeasures, but made this point in response:

He then went on to contend that robotic swarms offer the potential to reestablish the role of mass in future combat. Mass, either in terms of numbers of combatants or volume of firepower, has played a decisive role in most wars. As the aphorism goes, usually credited to Josef Stalin, “quantity has a quality all its own.”

Scharre observed that the United States went in a different direction in its post-World War II approach to warfare, adopting instead “offset” strategies that sought to leverage superior technology to balance against the mass militaries of the Communist bloc.

While effective during the Cold War, Scharre concurs with the arguments that offset strategies are becoming far too expensive and may ultimately become self-defeating.

In order to avoid this fate, Scharre contends that

The entire thread is well worth reading.

Trevor Dupuy would have agreed with much of what Scharre asserts. He identified a relationship between increasing weapon lethality and battlefield dispersion that goes back to the 17th century. Dupuy believed that the primary factor driving this relationship was the human response to fear in a lethal environment, with soldiers dispersing in depth and frontage on battlefields in order to survive weapons of ever-increasing destructiveness.

TDI Friday Read: Lethality, Dispersion, And Mass On Future Battlefields

Robots might very well change that equation. Whether autonomous or “human in the loop,” robotic swarms do not feel fear and are inherently expendable. Cheaply produced robots might very well provide sufficient augmentation to human combat units to restore the primacy of mass in future warfare.

How Many Confederates Fought At Antietam?

Dead soldiers lying near the Dunker Church on the Antietam battlefield. [History.com]

Numbers matter in war and warfare. Armies cannot function effectively without reliable counts of manpower, weapons, supplies, and losses. Wars, campaigns, and battles are waged or avoided based on assessments of relative numerical strength. Possessing superior numbers, either overall or at the decisive point, is a commonly held axiom (if not a guarantor) for success in warfare.

These numbers of war likewise inform the judgements of historians. They play a large role in shaping historical understanding of who won or lost, and why. Armies and leaders possessing a numerical advantage are expected to succeed, and thus come under exacting scrutiny when they do not. Commanders and combatants who win in spite of inferiorities in numbers are lauded as geniuses or elite fighters.

Given the importance of numbers in war and history, however, it is surprising to see how often historians treat quantitative data carelessly. All too often, for example, historical estimates of troop strength are presented uncritically and rounded off, apparently for simplicity’s sake. Otherwise careful scholars are not immune from the casual or sloppy use of numbers.

Just as careless treatment of qualitative historical evidence results in bad history, so does mishandling quantitative data. To be sure, like any historical evidence, quantitative data can be imprecise or simply inaccurate. Thus, as with any other historical evidence, it is incumbent upon historians to analyze the numbers they use with methodological rigor.

OK, with that bit of throat-clearing out of the way, let me now proceed to jump into one of the greatest quantitative morasses in military historiography: strengths and losses in the American Civil War. Participants, pundits, and scholars have been arguing endlessly over numbers since before the war ended. And since the only thing that seems to get folks more riled up than debating Civil War numbers is arguing about the merits (or lack thereof) of Union General George B. McClellan, I am eventually going to add him to the mix as well.

The reason I am grabbing these dual lightning rods is to illustrate the challenges of quantitative data and historical analysis by looking at one of Trevor Dupuy’s favorite historical case studies, the Battle of Antietam (or Sharpsburg, for the unreconstructed rebels lurking out there). Dupuy cited his analysis of the battle in several of his books, mainly as a way of walking readers through his Quantified Judgement Method of Analysis (QJMA), and to demonstrate his concept of combat multipliers.

I have questions about his Antietam analysis that I will address later. To begin, however, I want to look at the force strength numbers he used. On p. 156 of Numbers, Predictions and War, he provided figures for the opposing armies at Antietam. The sources he cited for these figures were R. Ernest Dupuy and Trevor N. Dupuy, The Compact History of the Civil War (New York: Hawthorn, 1960) and Thomas L. Livermore, Numbers and Losses in the Civil War in America (reprint, Bloomington: Indiana University Press, 1957).

It is with Livermore that I will begin tracing the historical and historiographical mystery of how many Confederates fought at the Battle of Antietam.

Questioning The Validity Of The 3-1 Rule Of Combat

Canadian soldiers going “over the top” during an assault in the First World War. [History.com]
[This post was originally published on 1 December 2017.]

How many troops are needed to successfully attack or defend on the battlefield? There is a long-standing rule of thumb that holds that an attacker requires a 3-1 preponderance over a defender in combat in order to win. The aphorism is so widely accepted that few have questioned whether it is actually true or not.

Trevor Dupuy challenged the validity of the 3-1 rule on empirical grounds. He could find no historical substantiation to support it. In fact, his research on the question of force ratios suggested that there was a limit to the value of numerical preponderance on the battlefield.

TDI President Chris Lawrence has also challenged the 3-1 rule in his own work on the subject.

The validity of the 3-1 rule is no mere academic question. It underpins a great deal of U.S. military policy and warfighting doctrine. Yet, the only time the matter was seriously debated was in the 1980s with reference to the problem of defending Western Europe against the threat of Soviet military invasion.

It is probably long past time to seriously reexamine the validity and usefulness of the 3-1 rule.