
Artillery Effectiveness vs. Armor (Part 1)

A U.S. M1 155mm towed artillery piece being set up for firing during the Battle of the Bulge, December 1944.

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

The effectiveness of artillery against exposed personnel and other “soft” targets has long been accepted. Fragments and blast are deadly to those unfortunate enough to not be under cover. What has also long been accepted is the relative—if not total—immunity of armored vehicles when exposed to shell fire. In a recent memorandum, the United States Army Armor School disputed the results of tests of artillery versus tanks by stating, “…the Armor School nonconcurred with the Artillery School regarding the suppressive effects of artillery…the M-1 main battle tank cannot be destroyed by artillery…”

This statement may in fact be true,[1] if the advancement of armored vehicle design has greatly exceeded the advancement of artillery weapon design in the last fifty years. [Original emphasis] However, if the statement is not true, then recent research by TDI[2] into the effectiveness of artillery shell fire versus tanks in World War II may be illuminating.

The TDI search found that an average of 12.8 percent of tank and other armored vehicle losses[3] were due to artillery fire in seven cases in World War II where the cause of loss could be reliably identified. The highest percent loss due to artillery was found to be 14.8 percent, in the case of the Soviet 1st Tank Army at Kursk (Table II). The lowest percent loss due to artillery was found to be 5.9 percent, in the case of Dom Bütgenbach (Table VIII).

The seven cases are split almost evenly between those that show armor losses to a defender and those that show losses to an attacker. The first four cases (Kursk, Normandy I, Normandy II, and the “Pocket”) are engagements in which the side for which armor losses were recorded was on the defensive. The last three cases (Ardennes, Krinkelt, and Dom Bütgenbach) are engagements in which the side for which armor losses were recorded was on the offensive.

Four of the seven cases (Normandy I, Normandy II, the “Pocket,” and Ardennes) represent data collected by operations research personnel utilizing rigid criteria for the identification of the cause of loss. Specific causes of loss were only given when the primary destructive agent could be clearly identified. The other three cases (Kursk, Krinkelt, and Dom Bütgenbach) are based upon combat reports that—of necessity—represent less precise data collection efforts.

However, the similarity in results remains striking. The largest identifiable cause of tank loss found in the data was, predictably, high-velocity armor-piercing (AP) antitank rounds, which accounted for 68.7 percent of all losses. Artillery was second, responsible for 12.8 percent of all losses. Air attack was third, accounting for 7.4 percent of the total lost. Unknown causes, which included losses due to hits from multiple weapon types as well as unidentified weapons, inflicted 6.3 percent of the losses and ranked fourth. Other causes, which included infantry antitank weapons and mines, were responsible for 4.8 percent of the losses and ranked fifth.
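As a quick arithmetic check on the figures just cited, the short Python sketch below tabulates the loss-cause percentages quoted in the text and confirms they sum to roughly 100 percent. The category labels are paraphrases for illustration, not the headings used in the original article’s tables.

```python
# Loss-cause percentages as quoted in the text above (labels are paraphrases).
loss_causes = {
    "High-velocity AP antitank rounds": 68.7,
    "Artillery": 12.8,
    "Air attack": 7.4,
    "Unknown (multiple/unidentified weapons)": 6.3,
    "Other (infantry AT weapons, mines)": 4.8,
}

for cause, pct in sorted(loss_causes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cause:<42s} {pct:5.1f}%")

# The categories should account for essentially all identified losses.
print(f"{'Total':<42s} {sum(loss_causes.values()):5.1f}%")
```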

NOTES

[1] The statement may be true, although it has an “unsinkable Titanic” ring to it. It is much more likely that this statement is a hypothesis rather than a truism.

[2] As part of this article a survey of the Research Analysis Corporation’s publications list was made in an attempt to locate data from previous operations research on the subject. A single reference to the study of tank losses was found: Alvin D. Coox and L. Van Loan Naisawald, Survey of Allied Tank Casualties in World War II, CONFIDENTIAL ORO Report T-117, 1 March 1951.

[3] The percentage loss by cause excludes vehicles lost due to mechanical breakdown or abandonment. If these were included, they would account for 29.2 percent of the total lost. However, 271 of the 404 (67.1 percent) abandoned vehicles were lost in just two of the cases. These two cases (Normandy II and the Falaise Pocket) cover the period in the Normandy Campaign when the Allies broke through the German defenses and began the pursuit across France.

Artillery Survivability In Modern Combat

The U.S. Army’s M109A6 Paladin 155 mm Self-Propelled Howitzer. [U.S. Army]
[This piece was originally published on 17 July 2017.]

Much attention is being given in the development of the U.S. joint concept of Multi-Domain Battle (MDB) to the implications of recent technological advances in long-range precision fires. It seems most of the focus is being placed on exploring the potential for cross-domain fires as a way of coping with the challenges of anti-access/area denial strategies employing long-range precision fires. Less attention appears to be given to assessing the actual combat effects of such weapons. The prevailing assumption is that because of the increasing lethality of modern weapons, battle will be bloodier than it has been in recent experience.

I have taken a look in previous posts at how the historical relationship identified by Trevor Dupuy between weapon lethality, battlefield dispersion, and casualty rates argues against this assumption with regard to personnel attrition and tank loss rates. What about artillery loss rates? Will long-range precision fires make ground-based long-range precision fire platforms themselves more vulnerable? Historical research suggests that trend was already underway before the advent of the new technology.

In 1976, Trevor Dupuy and the Historical Evaluation and Research Organization (HERO; one of TDI’s corporate ancestors) conducted a study sponsored by Sandia National Laboratory titled “Artillery Survivability in Modern War.” (PDF) The study focused on historical artillery loss rates and the causes of those losses. It drew upon quantitative data from the 1973 Arab-Israeli War, the Korean War, and the Eastern Front during World War II.

Conclusions

1. In the early wars of the 20th Century, towed artillery pieces were relatively invulnerable, and they were rarely severely damaged or destroyed except by very infrequent direct hits.

2. This relative invulnerability of towed artillery resulted in general lack of attention to the problems of artillery survivability through World War II.

3. The lack of effective hostile counter-artillery resources in the Korean and Vietnam wars contributed to continued lack of attention to the problem of artillery survivability, although increasingly armies (particularly the US Army) were relying on self-propelled artillery pieces.

4. Estimated Israeli loss statistics of the October 1973 War suggest that because of size and characteristics, self-propelled artillery is more vulnerable to modern counter-artillery means than was towed artillery in that and previous wars; this greater historical physical vulnerability of self-propelled weapons is consistent with recent empirical testing by the US Army.

5. The increasing physical vulnerability of modern self-propelled artillery weapons is compounded by other modern combat developments, including:

a. Improved artillery counter-battery techniques and resources;
b. Improved accuracy of air-delivered munitions;
c. Increased lethality of modern artillery ammunition; and
d. Increased range of artillery and surface-to-surface missiles suitable for use against artillery.

6. Despite this greater vulnerability of self-propelled weapons, Israeli experience in the October war demonstrated that self-propelled artillery not only provides significant protection to cannoneers but also that its inherent mobility permits continued effective operation under circumstances in which towed artillery crews would be forced to seek cover, and thus be unable to fire their weapons.

7. Paucity of available processed, compiled data on artillery survivability and vulnerability limits analysis and the formulation of reliable artillery loss experience tables or formulae.

8. Tentative analysis of the limited data available for this study indicates the following:

a. In “normal” deployment, percent weapon losses by standard weight classification are in the following proportions:

b. Towed artillery losses to hostile artillery (counterbattery) appear in general to vary directly with battle intensity (as measured by percent personnel casualties per day), at a rate somewhat less than half of the percent personnel losses for units of army strength or greater; this is a straight-line relationship, or close to it; the stronger or more effective the hostile artillery is, the steeper the slope of the curve;

c. Towed artillery losses to all hostile anti-artillery means appear in general to vary directly with battle intensity at a rate about two-thirds of the percent personnel losses for units of army strength or greater; the curve rises slightly more rapidly in high intensity combat than in normal or low-intensity combat; the stronger or more effective the hostile anti-artillery means (primarily air and counter-battery), the steeper the slope of the curve;

d. Self-propelled artillery losses appear to be generally consistent with towed losses, but at rates at least twice as great in comparison to battle intensity. (These rough proportionalities are sketched in code after conclusion 9 below.)

9. There are available in existing records of US and German forces in World War II, and US forces in the Korean and Vietnam Wars, unit records and reports that will permit the formulation of reliable artillery loss experience tables and formulae for those conflicts; these, with currently available and probably improved data from the Arab-Israeli wars, will permit the formulation of reliable artillery loss experience tables and formulae for simulations of modern combat under current and foreseeable future conditions.
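For readers who find the proportionalities in conclusion 8 easier to follow as code, here is a minimal sketch. The multipliers used (one half, two-thirds, two) are loose paraphrases of the study’s qualitative wording (“somewhat less than half,” “about two-thirds,” “at least twice as great”), not coefficients published in the study.

```python
# Rough restatement of conclusion 8 above. Battle intensity is expressed as
# percent personnel casualties per day; multipliers paraphrase the study's
# qualitative language and are NOT published coefficients.

def towed_losses_counterbattery(personnel_casualty_pct_per_day: float) -> float:
    """Percent towed artillery lost per day to hostile counterbattery fire."""
    return 0.5 * personnel_casualty_pct_per_day          # "somewhat less than half"

def towed_losses_all_means(personnel_casualty_pct_per_day: float) -> float:
    """Percent towed artillery lost per day to all hostile anti-artillery means."""
    return (2.0 / 3.0) * personnel_casualty_pct_per_day  # "about two-thirds"

def self_propelled_losses(personnel_casualty_pct_per_day: float) -> float:
    """Percent self-propelled artillery lost per day, relative to towed rates."""
    return 2.0 * towed_losses_all_means(personnel_casualty_pct_per_day)  # "at least twice as great"

for intensity in (0.5, 1.0, 2.0):  # notional percent personnel casualties per day
    print(f"intensity {intensity:.1f}%/day: "
          f"towed (CB only) {towed_losses_counterbattery(intensity):.2f}%, "
          f"towed (all means) {towed_losses_all_means(intensity):.2f}%, "
          f"self-propelled {self_propelled_losses(intensity):.2f}%")
```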

The study caveated these conclusions with the following observations:

Most of the artillery weapons in World War II were towed weapons. By the time the United States had committed small but significant numbers of self-propelled artillery pieces in Europe, German air and artillery counter-battery retaliatory capabilities had been significantly reduced. In the Korean and Vietnam wars, although most American artillery was self-propelled, the enemy had little counter-artillery capability either in the air or in artillery weapons and counter-battery techniques.

It is evident from vulnerability testing of current Army self-propelled weapons, that these weapons–while offering much more protection to cannoneers and providing tremendous advantages in mobility–are much more vulnerable to hostile action than are towed weapons, and that they are much more subject to mechanical breakdowns involving either the weapons mountings or the propulsion elements. Thus there cannot be a direct relationship between aggregated World War II data, or even aggregated Korean war or October War data, and current or future artillery configurations. On the other hand, the body of data from the October war where artillery was self-propelled is too small and too specialized by environmental and operational circumstances to serve alone as a paradigm of artillery vulnerability.

Despite the intriguing implications of this research, HERO’s proposal for follow-on work was not funded. HERO used only easily accessible primary and secondary source data for the study. It noted that much more primary source data was likely available, but that compiling it would require a significant research effort. (Research is always the expensive tent-pole in quantitative historical analysis. This seems to be why so little of it ever gets funded.) At the time of the study in 1976, no U.S. Army organization could identify any existing quantitative historical data or analysis on artillery losses, classified or otherwise. A cursory Internet search reveals no other such research either. Like personnel attrition and tank loss rates, artillery loss rates would seem to be another worthwhile subject for quantitative analysis as part of the ongoing effort to develop the MDB concept.

Human Factors In Warfare: Suppression

Images from a Finnish Army artillery salvo fired by towed 130mm howitzers during an exercise in 2013. [Puolustusvoimat – Försvarsmakten – The Finnish Defence Forces/YouTube]
[This piece was originally posted on 24 August 2017.]

According to Trevor Dupuy, “Suppression is perhaps the most obvious and most extensive manifestation of the impact of fear on the battlefield.” As he detailed in Understanding War: History and Theory of Combat (1987),

There is probably no obscurity of combat requiring clarification and understanding more urgently than that of suppression… Suppression usually is defined as the effect of fire (primarily artillery fire) upon the behavior of hostile personnel, reducing, limiting, or inhibiting their performance of combat duties. Suppression lasts as long as the fires continue and for some brief, indeterminate period thereafter. Suppression is the most important effect of artillery fire, contributing directly to the ability of the supported maneuver units to accomplish their missions while preventing the enemy units from accomplishing theirs. (p. 251)

Official US Army field artillery doctrine makes a distinction between “suppression” and “neutralization.” Suppression is defined to be instantaneous and fleeting; neutralization, while also temporary, is relatively longer-lasting. Neutralization, the doctrine says, results when suppressive effects are so severe and long-lasting that a target is put out of action for a period of time after the suppressive fire is halted. Neutralization combines the psychological effects of suppressive gunfire with a certain amount of damage. The general concept of neutralization, as distinct from the more fleeting suppression, is a reasonable one. (p. 252)

Despite widespread acknowledgement of the existence of suppression and neutralization, the lack of interest in analyzing its effects was a source of professional frustration for Dupuy. As he commented in 1989,

The British did some interesting but inconclusive work on suppression in their battlefield operations research in World War II. In the United States I am aware of considerable talk about suppression, but very little accomplishment, over the past 20 years. In the light of the significance of suppression, our failure to come to grips with the issue is really quite disgraceful.

This lack of interest is curious, given that suppression and neutralization remain embedded in U.S. Army combat doctrine to this day. The current Army definitions are:

Suppression – In the context of the computed effects of field artillery fires, renders a target ineffective for a short period of time producing at least 3-percent casualties or materiel damage. [Army Doctrine Reference Publication (ADRP) 1-02, Terms and Military Symbols, December 2015, p. 1-87]

Neutralization – In the context of the computed effects of field artillery fires renders a target ineffective for a short period of time, producing 10-percent casualties or materiel damage. [ADRP 1-02, p. 1-65]
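To make the doctrinal distinction concrete, the sketch below applies those two percentage thresholds to a notional computed fire effect. The function and its interpretation of “percent casualties or materiel damage” are illustrative assumptions rather than an official Army tool, and, as the next paragraph notes, Dupuy considered the thresholds themselves empirically dubious.

```python
# Illustrative only: classify a computed fire effect using the ADRP 1-02
# percentage thresholds quoted above (>= 3 percent for suppression,
# >= 10 percent for neutralization). Not an official Army model.

def computed_fire_effect(casualty_or_damage_pct: float) -> str:
    if casualty_or_damage_pct >= 10.0:
        return "neutralization"
    if casualty_or_damage_pct >= 3.0:
        return "suppression"
    return "no computed effect"

print(computed_fire_effect(4.0))   # suppression
print(computed_fire_effect(12.5))  # neutralization
print(computed_fire_effect(1.0))   # no computed effect
```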

A particular source for Dupuy’s irritation was the fact that these definitions were likely empirically wrong. As he argued in Understanding War,

This is almost certainly the wrong way to approach quantification of neutralization. Not only is there no historical evidence that 10% casualties are enough to achieve this effect, there is no evidence that any level of losses is required to achieve the psycho-physiological effects of suppression or neutralization. Furthermore, the time period in which casualties are incurred is probably more important than any arbitrary percentage of loss, and the replacement of casualties and repair of damage are probably irrelevant. (p. 252)

Thirty years after Dupuy pointed this problem out, the construct remains enshrined in U.S. doctrine, unquestioned and unsubstantiated. Dupuy himself was convinced that suppression probably had little, if anything, to do with personnel loss rates.

I believe now that suppression is related to and probably a component of disruption caused by combat processes other than surprise, such as a communications failure. Further research may reveal, however, that suppression is a very distinct form of disruption that can be measured or estimated quite independently of disruption caused by any other phenomenon. (Understanding War, p. 251)

He had developed a hypothesis for measuring the effects of suppression, but was unable to interest anyone in the U.S. government or military in sponsoring a study on it. Suppression as a combat phenomenon remains only vaguely understood.

UPDATE: Should The U.S. Army Add More Tube Artillery To Its Combat Units?

A 155mm Paladin howitzer with 1st Battery, 10th Field Artillery, 3rd Brigade Combat Team, Task Force Liberty stands ready for a fire mission at forward operating base Gabe April 16, 2005. [U.S. Department of Defense/DVIDS]

In response to my recent post looking at the ways the U.S. is seeking to improve its long range fires capabilities, TDI received this comment via Twitter:

@barefootboomer makes a fair point. It appears that the majority of the U.S. Army’s current efforts to improve its artillery capabilities are aimed at increasing the lethality and capability of individual systems, not at adding additional guns to the force structure.

Are Army combat units undergunned in the era of multi-domain battle? The Mobile Protected Firepower program is intended to provide additional high-caliber direct fire guns to the Infantry Brigade Combat Teams. In his recent piece at West Point’s Modern War Institute blog, Captain Brandon Morgan recommended increasing the proportion of U.S. corps rocket artillery to tube artillery systems from roughly 1:4 to something closer to the current Russian Army ratio of 3:4.

Should the Army be adding additional direct or indirect fire systems to its combat forces? What types and at what levels? More tubes per battery? More batteries? More battalions?

What do you think?

UPDATE: I got a few responses to my queries. The balance reflected this view:

@barefootboomer elaborated on his original point:

There were not many specific suggestions about changes to the existing forces structure, except for this one:

Are there any other thoughts or suggestions out there about this, or is the consensus that the Army is already pretty much on the right course toward fixing its fires problems?

Status Update On U.S. Long Range Fires Capabilities

Soldiers fire an M777A2 howitzer while supporting Iraqi security forces near al-Qaim, Iraq, Nov. 7, 2017, as part of the operation to defeat the Islamic State of Iraq and Syria. [Spc. William Gibson/U.S. Army]

Earlier this year, I noted that the U.S. is investing in upgrading its long range strike capabilities as part of its multi-domain battle doctrinal response to improving Chinese, Russian, and Iranian anti-access/area denial (A2/AD) capabilities. There have been a few updates on the progress of those investments.

The U.S. Army Long Range Fires Cross Functional Team

A recent article in Army Times by Todd South looked at some of the changes being implemented by the U.S. Army cross functional team charged with prioritizing improvements in the service’s long range fires capabilities. To meet a requirement to double the ranges of its artillery systems within five years, “the Army has embarked upon three tiers of focus, from upgrading old school artillery cannons, to swapping out its missile system to double the distance it can fire, and giving the Army a way to fire surface-to-surface missiles at ranges of 1,400 miles.”

The Extended Range Cannon Artillery program is working on rocket-assisted munitions to double the range of the Army’s workhorse 155mm guns to 24 miles, with some special rounds capable of reaching targets up to 44 miles away. As I touched on recently, the Army is also looking into ramjet rounds that could potentially increase striking range to 62 miles.

To develop the capability for even longer range fires, the Army implemented a Strategic Strike Cannon Artillery program for targets up to nearly 1,000 miles, and a Strategic Fires Missile effort enabling targeting out to 1,400 miles.

The Army is also emphasizing retaining trained artillery personnel and an improved training regime which includes large-scale joint exercises and increased live-fire opportunities.

Revised Long Range Fires Doctrine

But better technology and training are only part of the solution. U.S. Army Captain Harrison Morgan advocated doctrinal adaptations to shift Army culture away from thinking of fires solely as support for maneuver elements. Among his recommendations are:

  • Increasing the proportion of U.S. corps rocket artillery to tube artillery systems from roughly 1:4 to something closer to the current Russian Army ratio of 3:4 (a shift illustrated with notional battalion counts after this list).
  • Fielding a tube artillery system capable of meeting or surpassing the German-made PzH 2000, which can strike targets out to 30 kilometers with regular rounds, sustain a firing rate of 10 rounds per minute, and strike targets with five rounds simultaneously.
  • Focusing on integrating tube and rocket artillery with a multi-domain, joint force to enable the destruction of the majority of enemy maneuver forces before friendly ground forces reach direct-fire range.
  • Allowing tube artillery to be task organized below the brigade level to provide indirect fires capabilities to maneuver battalions, and making rocket artillery available to division and brigade commanders. (Morgan contends that the allocation of indirect fires capabilities to maneuver battalions ended with the disbanding of the Army’s armored cavalry regiments in 2011.)
  • Increasing training in the use of unmanned aerial vehicle (UAV) assets at the tactical level to locate, target, and observe fires.
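As a purely notional illustration of what the first recommendation implies, the snippet below compares rocket battalion counts at the two ratios for a hypothetical corps with 16 tube battalions. The battalion counts are invented for the arithmetic only and do not reflect any actual force structure.

```python
# Hypothetical arithmetic only: rocket battalions implied by a 1:4 versus a 3:4
# rocket-to-tube ratio for a notional corps. Battalion counts are invented.
tube_battalions = 16

current_rockets = tube_battalions * 1 // 4   # 1:4 ratio -> 4 rocket battalions
proposed_rockets = tube_battalions * 3 // 4  # 3:4 ratio -> 12 rocket battalions

print(f"Current (1:4):  {current_rockets} rocket battalions for {tube_battalions} tube battalions")
print(f"Proposed (3:4): {proposed_rockets} rocket battalions for {tube_battalions} tube battalions")
```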

U.S. Air Force and U.S. Navy Face Long Range Penetrating Strike Challenges

The Army’s emphasis on improving long range fires appears timely in light of the challenges the U.S. Air Force and U.S. Navy face in conducting long-range penetrating strike missions in the A2/AD environment. A fascinating analysis by Jerry Hendrix for the Center for a New American Security shows how the current strategic problems stem from U.S. policy decisions taken in the early 1990s following the end of the Cold War.

In an effort to generate a “peace dividend” from the fall of the Soviet Union, the Clinton administration elected to simplify the U.S. military force structure for conducting long range air attacks by relieving the Navy of its associated responsibilities and assigning the mission solely to the Air Force. The Navy no longer needed to replace its aging carrier-based medium range bombers and the Air Force pushed replacements for its aging B-52 and B-1 bombers into the future.

Both the Air Force and Navy emphasized development and acquisition of short range tactical aircraft which proved highly suitable for the regional contingencies and irregular conflicts of the 1990s and early 2000s. Impressed with U.S. capabilities displayed in those conflicts, China, Russia, and Iran invested in air defense and ballistic missile technologies specifically designed to counter American advantages.

The U.S. now faces a strategic environment where its long range strike platforms lack the range and operational and technological capability to operate within these A2/AD “bubbles.” The Air Force has far too few long range bombers with stealth capability, and neither the Air Force nor Navy tactical stealth aircraft can carry long range strike missiles. The missiles themselves lack stealth capability. The short range of the Navy’s aircraft and insufficient numbers of screening vessels leave its aircraft carriers vulnerable to ballistic missile attack.

Remedying this state of affairs will take time and major investments in new weapons and technological upgrades. However, with certain upgrades, Hendrix sees the current Air Force and Navy force structures capable of providing the basis for a long range penetrating strike operational concept effective against A2/AD defenses. The unanswered question is whether these upgrades will be implemented at all.

U.S. Army Mobile Protected Firepower (MPF) Program Update

BAE Systems has submitted its proposal to the U.S. Army to build and test the Mobile Protected Firepower (MPF) vehicle [BAE Systems/Fox News]

When we last checked in with the U.S. Army’s Mobile Protected Firepower (MPF) program—an effort to quickly field a new lightweight armored vehicle with a long-range direct fire capability—Requests for Proposals (RFPs) were expected by November 2017 and the first samples by April 2018. It now appears the first MPF prototypes will not be delivered before mid-2020 at the earliest.

According to a recent report by Kris Osborn on Warrior Maven, “The service expects to award two Engineering Manufacturing and Development (EMD) deals by 2019 as part of an initial step to building prototypes from multiple vendors, service officials said. [An] Army statement said initial prototypes are expected within 14 months of a contract award.”

Part of the delay appears to stem from uncertainty about requirements. As Osborn reported, “For the Army, the [MPF] effort involves what could be described as a dual-pronged acquisition strategy in that it seeks to leverage currently available or fast emerging technology while engineering the vehicle with an architecture such that it can integrate new weapons and systems as they emerge over time.”

Among the technologies the Army will seek to integrate into the MPF are a lightweight, heavy caliber main gun, lightweight armor composites, active protection systems, a new generation of higher-resolution targeting sensors, greater computer automation, and artificial intelligence.

Osborn noted that

the Army’s Communications Electronics Research, Development and Engineering Center (CERDEC) is already building prototype sensors – with this in mind. In particular, this early work is part of a longer-range effort to inform the Army’s emerging Next-Generation Combat Vehicle (NGCV). The NGCV, expected to become an entire fleet of armored vehicles, is now being explored as something to emerge in the late 2020s or early 2030s.

These evolving requirements are already impacting the Army’s approach to fielding MPF. It originally intended to “do acquisition differently to deliver capability quickly.” MPF program director Major General David Bassett declared in October 2017, “We expect to be delivering prototypes off of that program effort within 15 months of contract award…and getting it in the hands of an evaluation unit six months after that — rapid!”

It is now clear the Army won’t be meeting that schedule after all. Stay tuned.

Questioning The Validity Of The 3-1 Rule Of Combat

Canadian soldiers going “over the top” during an assault in the First World War. [History.com]
[This post was originally published on 1 December 2017.]

How many troops are needed to successfully attack or defend on the battlefield? There is a long-standing rule of thumb that holds that an attacker requires a 3-1 preponderance over a defender in combat in order to win. The aphorism is so widely accepted that few have questioned whether it is actually true or not.
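Stated as a decision rule, the aphorism reduces to a single comparison. The sketch below is just that naive rule of thumb with hypothetical strengths plugged in; it is not a validated combat model, and certainly not Dupuy’s method, but it makes explicit what the rule actually claims.

```python
# The 3-1 rule of thumb, stated literally: the attacker is predicted to win if
# the force ratio is at least 3:1. Hypothetical strengths; not a combat model.

def three_to_one_prediction(attacker_strength: float, defender_strength: float) -> str:
    ratio = attacker_strength / defender_strength
    outcome = "attacker wins" if ratio >= 3.0 else "attacker fails"
    return f"{ratio:.1f}:1 -> {outcome} (per the rule of thumb)"

print(three_to_one_prediction(90_000, 30_000))  # 3.0:1 -> attacker wins
print(three_to_one_prediction(60_000, 30_000))  # 2.0:1 -> attacker fails
```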

Trevor Dupuy challenged the validity of the 3-1 rule on empirical grounds. He could find no historical substantiation to support it. In fact, his research on the question of force ratios suggested that there was a limit to the value of numerical preponderance on the battlefield.

TDI President Chris Lawrence has also challenged the 3-1 rule in his own work on the subject.

The validity of the 3-1 rule is no mere academic question. It underpins a great deal of U.S. military policy and warfighting doctrine. Yet, the only time the matter was seriously debated was in the 1980s with reference to the problem of defending Western Europe against the threat of Soviet military invasion.

It is probably long past due to seriously challenge the validity and usefulness of the 3-1 rule again.

The Origins Of The U.S. Army’s Concept Of Combat Power

The U.S. Army’s concept of combat power can be traced back to the thinking of British theorist J.F.C. Fuller, who collected his lectures and thoughts into the book, The Foundations of the Science of War (1926).

In a previous post, I critiqued the existing U.S. Army doctrinal method for calculating combat power. The ideas associated with the term “combat power” have been a part of U.S. Army doctrine since the 1920s. However, the Army did not specifically define what combat power actually meant until the 1982 edition of FM 100-5 Operations, which introduced the AirLand Battle concept. So where did the Army’s notion of the concept originate? This post will trace the way it has been addressed in the capstone Field Manual (FM) 100-5 Operations series.

As then-U.S. Army Major David Boslego explained in a 1995 School of Advanced Military Studies (SAMS) thesis[1], the Army’s original idea of combat power most likely derived from the work of British military theorist J.F.C. Fuller. In the late 1910s and early 1920s, Fuller articulated the first modern definitions of the principles of war, which he developed from his conception of force on the battlefield as something more than just the tangible effects of shock and firepower. Fuller’s principles were adopted in the 1920 edition of the British Army Field Service Regulations (FSR), which was the likely vector of influence on the U.S. Army’s 1923 FSR. While the term “combat power” does not appear in the 1923 FSR, the influence of Fullerian thinking is evident.

The first use of the phrase itself by the Army can be found in the 1939 edition of FM 100-5 Tentative Field Service Regulations, Operations, which replaced and updated the 1923 FSR. It appears just twice and was not explicitly defined in the text. As Boslego noted, however, even then the use of the term

highlighted a holistic view of combat power. This power was the sum of all factors which ultimately affected the ability of the soldiers to accomplish the mission. Interestingly, the authors of the 1939 edition did not focus solely on the physical objective of destroying the enemy. Instead, they sought to break the enemy’s power of resistance which connotes moral as well as physical factors.

This basic, implied definition of combat power as a combination of interconnected tangible physical and intangible moral factors could be found in all successive editions of FM 100-5 through 1968. The type and character of the factors comprising combat power evolved along with the Army’s experience of combat through this period, however. In addition to leadership, mobility, and firepower, the 1941 edition of FM 100-5 included “better armaments and equipment,” which reflected the Army’s initial impressions of the early “blitzkrieg” battles of World War II.

From World War II Through Korea

While FM 100-5 (1944) and FM 100-5 (1949) made no real changes with respect to describing combat power, the 1954 edition introduced significant new ideas in the wake of major combat operations in Korea, albeit still without actually defining the term. As with its predecessors, FM 100-5 (1954) posited combat power as a combination of firepower, maneuver, and leadership. For the first time, it defined the principles of mass, unity of command, maneuver, and surprise in terms of combat power. It linked the principle of the offensive, “only offensive action achieves decisive results,” with the enduring dictum that “offensive action requires the concentration of superior combat power at the decisive point and time.”

Boslego credited the authors of FM 100-5 (1954) with recognizing the non-linear nature of warfare and advising commanders to take a holistic perspective. He observed that they introduced the subtle but important understanding of combat power not as a fixed value, but as something relative and interactive between two forces in battle. Any calculation of combat power would be valid only in relation to the opposing combat force. “Relative combat power is dynamic and can be directly influenced by opposing commanders. It therefore must be analyzed by the commander in its potential relation to all other factors.” One of the fundamental ways a commander could shift the balance of combat power against an enemy was through maneuver: “Maneuver must be used to alter the relative combat power of military forces.”

[As I mentioned in a previous post, Trevor Dupuy considered FM 100-5 (1954)’s list and definitions of the principles of war to be the best version.]

Into the “Pentomic Era”

The 1962 edition of FM 100-5 supplied a general definition of combat power that articulated the way the Army had been thinking about it since 1939.

Combat power is a combination of the physical means available to a commander and the moral strength of his command. It is significant only in relation to the combat power of the opposing forces. In applying the principles of war, the development and application of combat power are essential to decisive results.

It further refined the elements of combat power by redefining the principles of economy of force and security in terms of it as well.
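Read literally, the 1962 definition makes combat power meaningful only as a comparison between opposing forces. The sketch below is one hypothetical way to express that relativity; combining “physical means” and “moral strength” multiplicatively is an assumption made purely for illustration and is not a formula from FM 100-5.

```python
# Hypothetical illustration of the 1962 definition: combat power combines
# physical means and moral strength, and is "significant only in relation to
# the combat power of the opposing forces." The multiplicative form is an
# assumed illustration, not FM 100-5's formula.

def combat_power(physical_means: float, moral_strength: float) -> float:
    return physical_means * moral_strength

def relative_combat_power(own_physical: float, own_moral: float,
                          enemy_physical: float, enemy_moral: float) -> float:
    """Only this ratio is 'significant'; the absolute values are not."""
    return combat_power(own_physical, own_moral) / combat_power(enemy_physical, enemy_moral)

# The same force looks strong against one opponent and weak against another.
print(relative_combat_power(100, 1.0, 60, 0.9))   # > 1
print(relative_combat_power(100, 1.0, 150, 1.1))  # < 1
```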

By the early 1960s, however, the Army’s thinking about force on the battlefield was dominated by the prospect of the use of nuclear weapons. As Boslego noted, both FM 100-5 (1962) and FM 100-5 (1968)

dwelt heavily on the importance of dispersing forces to prevent major losses from a single nuclear strike, being highly mobile to mass at decisive points and being flexible in adjusting forces to the current situation. The terms dispersion, flexibility, and mobility were repeated so frequently in speeches, articles, and congressional testimony, that…they became a mantra. As a result, there was a lack of rigor in the Army concerning what they meant in general and how they would be applied on the tactical battlefield in particular.

The only change the 1968 edition made was to expand the elements of combat power to include “firepower, mobility, communications, condition of equipment, and status of supply,” which presaged an increasing focus on the technological aspects of combat and warfare.

The first major modification in the way the Army thought about combat power since before World War II was reflected in FM 100-5 (1976). These changes in turn prompted a significant reevaluation of the concept by then-U.S. Army Major Huba Wass de Czege. I will tackle how this resulted in the way combat power was redefined in the 1982 edition of FM 100-5 in a future post.

Notes

[1] David V. Boslego, “The Relationship of Information to the Relative Combat Power Model in Force XXI Engagements,” School of Advanced Military Studies Monograph, U.S. Army Command and General Staff College, Fort Leavenworth, Kansas, 1995.

Dupuy/DePuy

Trevor N. Dupuy (1916-1995) and General William E. DePuy (1919-1992)

I first became acquainted with Trevor Dupuy and his work after seeing an advertisement for his book Numbers, Prediction & War in Simulation Publications, Inc.’s (SPI) Strategy & Tactics war gaming magazine way back in the late 1970s. Although Dupuy was already a prolific military historian, this book brought him to the attention of an audience outside of the insular world of the U.S. government military operations research and analysis community.

Ever since, however, Trevor Dupuy has occasionally been confused with one of his contemporaries, U.S. Army General William E. DePuy. DePuy was notable in his own right, primarily as the first commander of the U.S. Army Training and Doctrine Command (TRADOC) from 1973 to 1977, and as one of the driving intellectual forces behind the effort to reorient the U.S. Army back toward conventional warfare following the Vietnam War.

The two men had a great deal in common. They were born within three years of one another and both served in the U.S. Army during World War II. Both possessed an analytical bent and each made significant contributions to institutional and public debates about combat and warfare in the late 20th century. Given that they tilled the same topical fields at about the same time, it does not seem too odd that they were mistaken for each other.

Perhaps the most enduring link between the two men has been a shared name, though they spelled and pronounced it differently. The surname Dupuy is of medieval French origin and has been traced back to LePuy, France, in the province of Languedoc. It has several variant spellings, including DePuy and Dupuis. The traditional French pronunciation is “do-PWEE.” This is how Trevor Dupuy said his name.

However, following French immigration to North America beginning in the 17th century, the name evolved an anglicized spelling, DePuy (or sometimes Depew), and pronunciation, “deh-PEW.” This is the way General DePuy said it.

It is this pronunciation difference in conversation that has tipped me off personally to the occasional confusion in identities. Though rare these days, it still occurs. While this is a historical footnote, it still seems worth gently noting that Trevor Dupuy and William DePuy were two different people.

TDI Friday Read: Lethality, Dispersion, And Mass On Future Battlefields

Armies have historically responded to the increasing lethality of weapons by dispersing mass in frontage and depth on the battlefield. Will combat see a new period of adjustment over the next 50 years like the previous half-century, where dispersion continues to shift in direct proportion to increased weapon range and precision, or will there be a significant change in the character of warfare?

One point of departure for such an inquiry could be the work of TDI President Chris Lawrence, who looked into the nature of historical rates of dispersion in combat from 1600 to 1991.

The Effects Of Dispersion On Combat

As he explained,

I am focusing on this because I really want to come up with some means of measuring the effects of a “revolution in warfare.” The last 400 years of human history have given us more revolutionary inventions impacting war than we can reasonably expect to see in the next 100 years. In particular, I would like to measure the impact of increased weapon accuracy, improved intelligence, and improved C2 on combat.

His tentative conclusions were:

  1. Dispersion has been relatively constant and driven by factors other than firepower from 1600-1815.
  2. Since the Napoleonic Wars, units have increasingly dispersed (found ways to reduce their chance to be hit) in response to increased lethality of weapons.
  3. As a result of this increased dispersion, casualties in a given space have declined.
  4. The ratio of this decline in casualties over area has been roughly proportional to the strength over an area from 1600 through WWI. Starting with WWII, it appears that people have dispersed faster than weapons lethality has increased, and this trend has continued.
  5. In effect, people dispersed in direct relation to increased firepower from 1815 through 1920, and then after that time dispersed faster than the increase in lethality.
  6. It appears that since WWII, people have gone back to dispersing (reducing their chance to be hit) at the same rate that firepower is increasing.
  7. Effectively, there are four patterns of casualties in modern war:

Period 1 (1600 – 1815): Period of Stability

  • Short battles
  • Short frontages
  • High attrition per day
  • Constant dispersion
  • Dispersion decreasing slightly after late 1700s
  • Attrition decreasing slightly after mid-1700s.

Period 2 (1816 – 1905): Period of Adjustment

  • Longer battles
  • Longer frontages
  • Lower attrition per day
  • Increasing dispersion
  • Dispersion increasing slightly faster than lethality

Period 3 (1912 – 1920): Period of Transition

  • Long battles
  • Continuous frontages
  • Lower attrition per day
  • Increasing dispersion
  • Relative lethality per kilometer similar to past, but lower
  • Dispersion increasing slightly faster than lethality

Period 4 (1937 – present): Modern Warfare

  • Long battles
  • Continuous frontages
  • Low attrition per day
  • High dispersion (perhaps constant?)
  • Relative lethality per kilometer much lower than in the past
  • Dispersion increased much faster than lethality going into the period.
  • Dispersion increased at the same rate as lethality within the period.

Chris based his study on previous work done by Trevor Dupuy and his associates, which established a pattern in historical combat between lethality, dispersion, and battlefield casualty rates.
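The underlying relationship can be summarized as casualty density rising with weapon lethality and falling as forces disperse. The sketch below restates that proportionality with invented index numbers chosen only to mimic the qualitative pattern of the periods above; they are not TDI’s or Dupuy’s historical figures.

```python
# Schematic restatement of the lethality/dispersion pattern described above:
# casualties per unit area scale with lethality and inversely with dispersion.
# The index values are invented for illustration, not historical data.

def casualty_density(lethality_index: float, dispersion_index: float) -> float:
    """Relative casualties per unit area (arbitrary units)."""
    return lethality_index / dispersion_index

eras = {
    "Period 1 (1600-1815)": (1.0, 1.0),     # constant dispersion, high attrition
    "Period 2 (1816-1905)": (5.0, 8.0),     # dispersion grows slightly faster than lethality
    "Period 4 (1937-now)":  (50.0, 200.0),  # dispersion far outpaces lethality
}

for era, (lethality, dispersion) in eras.items():
    print(f"{era}: relative casualty density {casualty_density(lethality, dispersion):.2f}")
```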

Trevor Dupuy and Historical Trends Related to Weapon Lethality

What Is The Relationship Between Rate of Fire and Military Effectiveness?

Human Factors In Warfare: Dispersion

There is no way to accurately predict the future relationship between weapon lethality and dispersion on the battlefield, but we should question whether current conceptions of combat reflect consideration of these historical trends.

Attrition In Future Land Combat

The Principle Of Mass On The Future Battlefield