
Active Defense, Forward Defense, and A2/AD in Eastern Europe

The current military and anti-access/area denial situation in Eastern Europe. [Map and overlay derived from situation map by Thomas C. Thielen (@noclador) https://twitter.com/noclador/status/1079999716333703168; and Ian Williams, “The Russia – NATO A2AD Environment,” Missile Threat, Center for Strategic and International Studies, published January 3, 2017, last modified November 29, 2018, https://missilethreat.csis.org/russia-nato-a2ad-environment/]

In an article published by West Point’s Modern War Institute last month, “The US Army is Wrong on Future War,” Nathan Jennings, Amos Fox, and Adam Taliaferro laid out a detailed argument that current and near-future political, strategic, and operational realities militate against the Army’s current doctrinal conceptualization for Multi-Domain Operations (MDO).

[T]he US Army is mistakenly structuring for offensive clashes of mass and scale reminiscent of 1944 while competitors like Russia and China have adapted to twenty-first-century reality. This new paradigm—which favors fait accompli acquisitions, projection from sovereign sanctuary, and indirect proxy wars—combines incremental military actions with weaponized political, informational, and economic agendas under the protection of nuclear-fires complexes to advance territorial influence…

These factors suggest, cumulatively, that the advantage in military confrontation between great powers has decisively shifted to those that combine strategic offense with tactical defense.

As a consequence, the authors suggested that “the US Army should recognize the evolved character of modern warfare and embrace strategies that establish forward positions of advantage in contested areas like Eastern Europe and the South China Sea. This means reorganizing its current maneuver-centric structure into a fires-dominant force with robust capacity to defend in depth.”

Forward Defense, Active Defense, and AirLand Battle

To illustrate their thinking, Jennings, Fox, and Taliaferro invoked a specific historical example:

This strategic realignment should begin with adopting an approach more reminiscent of the US Army’s Active Defense doctrine of the 1970s than the vaunted AirLand Battle concept of the 1980s. While many distain (sic) Active Defense for running counter to institutional culture, it clearly recognized the primacy of the combined-arms defense in depth with supporting joint fires in the nuclear era. The concept’s elevation of the sciences of terrain and weaponry at scale—rather than today’s cult of the offense—is better suited to the current strategic environment. More importantly, this methodology would enable stated political aims to prevent adversary aggression rather than to invade their home territory.

In the article’s comments, many pushed back against reviving Active Defense thinking, which has apparently become indelibly tarred with the derisive criticism that led to its replacement by AirLand Battle in the 1980s. As the authors gently noted, much of this resistance stemmed from perceptions among Army critics that Active Defense was passive and defensively oriented and overly focused on firepower, as well as from suspicions that it derived from operations research analysts reducing warfare and combat to a mathematical “battle calculus.”

While AirLand Battle has been justly lauded for enabling U.S. military success against Iraq in 1990-91 and 2003 (a third-rank, non-nuclear power it should be noted), it always elided the fundamental question of whether conventional deep strikes and operational maneuver into the territory of the Soviet Union’s Eastern European Warsaw Pact allies—and potentially the Soviet Union itself—would have triggered a nuclear response. The criticism of Active Defense similarly overlooked the basic political problem that led to the doctrine in the first place, namely, the need to provide a credible conventional forward defense of West Germany. Keeping the Germans actively integrated into NATO depended upon assurances that a Soviet invasion could be resisted effectively without resorting to nuclear weapons. Indeed, the political cohesion of the NATO alliance itself rested on the contradiction between the credibility of U.S. assurances that it would defend Western Europe with nuclear weapons if necessary and the fears of alliance members that losing a battle for West Germany would make that necessity a reality.

Forward Defense in Eastern Europe

A cursory look at the current military situation in Eastern Europe, along with Russia’s increasingly robust anti-access/area denial (A2/AD) capabilities (see map), should clearly illustrate the logic behind a doctrine of forward defense. U.S. and NATO troops based in Western Europe would have to run a gauntlet of well-protected long-range fires systems just to get into battle in Ukraine or the Baltics. Attempting operational maneuver at the end of lengthy and exposed logistical supply lines would be daunting. The U.S. Army’s 2nd Cavalry Regiment, a Stryker brigade combat team based in southern Germany, appears very much “lone and lonely.” It should also illustrate the difficulty of attacking the Russian A2/AD complex, an act that, as Jennings, Fox, and Taliaferro remind us, would actively court a nuclear response.

In this light, Active Defense, or better, an MDO doctrine of forward defense oriented on “a fires-dominant force with robust capacity to defend in depth” and intended to “enable stated political aims to prevent adversary aggression rather than to invade their home territory,” does not really seem so foolishly retrograde after all.

TDI Friday Read: Multi-Domain Battle/Operations Doctrine

With the December 2018 update of the U.S. Army’s Multi-Domain Operations (MDO) concept, this seems like a good time to review the evolution of doctrinal thinking about it. We will start with the event that sparked the Army’s thinking about the subject: the 2014 rocket artillery barrage fired from Russian territory that devastated Ukrainian Army forces near the village of Zelenopillya. From there we will look at the evolution of Army thinking beginning with the initial draft of an operating concept for Multi-Domain Battle (MDB) in 2017. To conclude, we will re-up two articles expressing misgivings over the manner with which these doctrinal concepts are being developed, and the direction they are taking.

The Russian Artillery Strike That Spooked The U.S. Army

Army And Marine Corps Join Forces To Define Multi-Domain Battle Concept

Army/Marine Multi-Domain Battle White Paper Available

What Would An Army Optimized For Multi-Domain Battle Look Like?

Sketching Out Multi-Domain Battle Operational Doctrine

U.S. Army Updates Draft Multi-Domain Battle Operating Concept

U.S. Army Multi-Domain Operations Concept Continues Evolving

U.S. Army Doctrine and Future Warfare


U.S. Army Doctrine and Future Warfare

Pre-war U.S. Army warfighting doctrine led to fielding the M10, M18 and M36 tank destroyers to counter enemy tanks. Their relatively ineffective performance against German panzers in Europe during World War II has been seen as the result of flawed thinking about tank warfare. [Wikimedia]

Two recently published articles on current U.S. Army doctrine development and the future of warfare deserve to be widely read:

“An Army Caught in the Middle Between Luddites, Luminaries, and the Occasional Looney”

The first, by RAND’s David Johnson, is titled “An Army Caught in the Middle Between Luddites, Luminaries, and the Occasional Looney,” published by War on the Rocks.

Johnson begins with an interesting argument:

Contrary to what it says, the Army has always been a concepts-based, rather than a doctrine-based, institution. Concepts about future war generate the requirements for capabilities to realize them… Unfortunately, the Army’s doctrinal solutions evolve in war only after the failure of its concepts in its first battles, which the Army has historically lost since the Revolutionary War.

The reason the Army fails in its first battles is because its concepts are initially — until tested in combat — a statement of how the Army “wants to fight” and rarely an analytical assessment of how it “will have to fight.”

Starting with the Army’s failure to develop its own version of “blitzkrieg” after World War I, Johnson identified conservative organizational politics, misreading of technological advances, and a stubborn refusal to account for the capabilities of potential adversaries as common causes of the inferior battlefield weapons and warfighting methods that contributed to its impressive string of lost “first battles.”

Conversely, Johnson credited the Army’s novel 1980s AirLand Battle doctrine as the product of an honest assessment of potential enemy capabilities and the development of effective weapon systems that were “based on known, proven technologies that minimized the risk of major program failures.”

“The principal lesson in all of this,” he concluded, “is that the U.S. military should have a clear problem that it is trying to solve to enable it to innovate, and it should realize that innovation is generally not invention.” There are “also important lessons from the U.S. Army’s renaissance in the 1970s, which also resulted in close cooperation between the Army and the Air Force to solve the shared problem of the defense of Western Europe against Soviet aggression that neither could solve independently.”

“The US Army is Wrong on Future War”

The other article, provocatively titled “The US Army is Wrong on Future War,” was published by West Point’s Modern War Institute. It was co-authored by Nathan Jennings, Amos Fox, and Adam Taliaferro, all graduates of the School of Advanced Military Studies, veterans of Iraq and Afghanistan, and currently serving U.S. Army officers.

They argue that

the US Army is mistakenly structuring for offensive clashes of mass and scale reminiscent of 1944 while competitors like Russia and China have adapted to twenty-first-century reality. This new paradigm—which favors fait accompli acquisitions, projection from sovereign sanctuary, and indirect proxy wars—combines incremental military actions with weaponized political, informational, and economic agendas under the protection of nuclear-fires complexes to advance territorial influence. The Army’s failure to conceptualize these features of the future battlefield is a dangerous mistake…

Instead, they assert that the current strategic and operational realities dictate a far different approach:

Failure to recognize the ascendancy of nuclear-based defense—with the consequent potential for only limited maneuver, as in the seventeenth century—incurs risk for expeditionary forces. Even as it idealizes Patton’s Third Army with ambiguous “multi-domain” cyber and space enhancements, the US Army’s fixation with massive counter-offensives to defeat unrealistic Russian and Chinese conquests of Europe and Asia misaligns priorities. Instead of preparing for past wars, the Army should embrace forward positional and proxy engagement within integrated political, economic, and informational strategies to seize and exploit initiative.

The factors they cite that necessitate the adoption of positional warfare include nuclear primacy; sanctuary of sovereignty; integrated fires complexes; limited fait accompli; indirect proxy wars; and political/economic warfare.

“Given these realities,” Jennings, Fox, and Taliaferro assert, “the US Army must adapt and evolve to dominate great-power confrontation in the nuclear age.” As such, they recommend that the U.S. (1) adopt “an approach more reminiscent of the US Army’s Active Defense doctrine of the 1970s than the vaunted AirLand Battle concept of the 1980s,” (2) “dramatically recalibrate its approach to proxy warfare,” and (3) compel “joint, interagency and multinational coordination in order to deliberately align economic, informational, and political agendas in support of military objectives.”

Future U.S. Army Doctrine: How It Wants to Fight or How It Has to Fight?

Readers will find much with which to agree or disagree in each article, but both provide viewpoints that should supply plenty of food for thought. Taken together, however, they acquire additional significance. The analysis put forth by Jennings, Fox, and Taliaferro can be read as fulfilling Johnson’s injunction to base doctrine on a sober assessment of the strategic and operational challenges presented by existing enemy capabilities, rather than on an aspirational concept of how the Army would prefer to fight a future war. Whether or not Jennings et al. have accurately forecast the future can be debated, but their critique should raise questions as to whether the Army is repeating the past doctrinal development errors Johnson identified.

Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he did believe technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in a posthumously published work in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Drawing on a 1964 study he conducted for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book The Evolution of Weapons and Warfare (1980); the quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties:

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has more often been characterized by discontinuity between weapons and tactical systems than by effective congruence between them.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and materiel. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

The Great 3-1 Rule Debate

[This piece was originally posted on 13 July 2016.]

Trevor Dupuy’s article cited in my previous post, “Combat Data and the 3:1 Rule,” was the final salvo in a roaring, multi-year debate between two highly regarded members of the U.S. strategic and security studies academic communities, political scientist John Mearsheimer and military analyst/polymath Joshua Epstein. Carried out primarily in the pages of the academic journal International Security, the debate between Epstein and Mearsheimer turned on the validity of the 3-1 rule and other analytical models with respect to the NATO/Warsaw Pact military balance in Europe in the 1980s. Epstein cited Dupuy’s empirical research in support of his criticism of Mearsheimer’s reliance on the 3-1 rule. In turn, Mearsheimer questioned Dupuy’s data and conclusions to refute Epstein. Dupuy’s article defended his research and pointed out the errors in Mearsheimer’s assertions. With the publication of Dupuy’s rebuttal, the International Security editors called a time out on the debate thread.
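
For readers who have not encountered it, the rule at the center of this exchange is usually stated as a simple force-ratio threshold. Here is a minimal rendering of the rule as it is commonly paraphrased (my gloss, not a quotation from Mearsheimer, Epstein, or Dupuy):

```latex
% The 3:1 rule of thumb, as commonly paraphrased: an attacker needs
% roughly three times the defender's combat power at the point of
% attack to have a reasonable prospect of success.
\[
  \frac{F_{\mathrm{attacker}}}{F_{\mathrm{defender}}} \;\ge\; 3
  \qquad \mbox{(measured locally, at the point of attack)}
\]
```

Much of the debate turned on what should count as the combat power being compared (raw manpower, weapons scores, or some more dynamic measure) and at what level of aggregation the ratio should be computed.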

The Epstein/Mearsheimer debate was itself part of a larger political debate over U.S. policy toward the Soviet Union during the administration of Ronald Reagan. This interdisciplinary argument, which has since become legendary in security and strategic studies circles, drew in some of the biggest names in these fields, including Eliot Cohen, Barry Posen, the late Samuel Huntington, and Stephen Biddle. As Jeffery Friedman observed,

These debates played a prominent role in the “renaissance of security studies” because they brought together scholars with different theoretical, methodological, and professional backgrounds to push forward a cohesive line of research that had clear implications for the conduct of contemporary defense policy. Just as importantly, the debate forced scholars to engage broader, fundamental issues. Is “military power” something that can be studied using static measures like force ratios, or does it require a more dynamic analysis? How should analysts evaluate the role of doctrine, or politics, or military strategy in determining the appropriate “balance”? What role should formal modeling play in formulating defense policy? What is the place for empirical analysis, and what are the strengths and limitations of existing data?[1]

It is well worth the time to revisit the contributions to the 1980s debate. I have included a bibliography below that is not exhaustive, but it is a place to start. The collapse of the Soviet Union and the end of the Cold War diminished the intensity of the debates, which simmered through the 1990s and were then obscured during the counterterrorism/counterinsurgency conflicts of the post-9/11 era. It is possible that the challenges posed by China and Russia amidst the ongoing “hybrid” conflict in Syria and Iraq may revive interest in interrogating the bases of military analyses in the U.S. and the West. It is a discussion that is long overdue and potentially quite illuminating.

NOTES

[1] Jeffery A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies 20 (2011)

BIBLIOGRAPHY

(Note: Some of these are behind paywalls, but some are available in PDF format. Mearsheimer has made many of his publications freely available here.)

John J. Mearsheimer, “Why the Soviets Can’t Win Quickly in Central Europe,” International Security 7, no. 1 (Summer 1982)

Samuel P. Huntington, “Conventional Deterrence and Conventional Retaliation in Europe,” International Security 8, no. 3 (Winter 1983/84)

Joshua M. Epstein, Strategy and Force Planning (Washington, DC: Brookings, 1987)

Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security 12, no. 4 (Spring 1988)

John J. Mearsheimer, “Numbers, Strategy, and the European Balance,” International Security 12, no. 4 (Spring 1988)

Stephen Biddle, “The European Conventional Balance,” Survival 30, no. 2 (March/April 1988)

Eliot A. Cohen, “Toward Better Net Assessment: Rethinking the European Conventional Balance,” International Security 13, no. 1 (Summer 1988)

Joshua M. Epstein, “The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security Studies,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, “Assessing the Conventional Balance,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, Barry R. Posen, and Eliot A. Cohen, “Correspondence: Reassessing Net Assessment,” International Security 13, no. 4 (Spring 1989)

Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” International Security 14, no. 1 (Summer 1989)

Stephen Biddle et al., Defense at Low Force Levels (Alexandria, VA: Institute for Defense Analyses, 1991)

What Does Lethality Mean In Warfare?

In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, aspects of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”

Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”

After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”

She uses this as a jumping-off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:

Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.

To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”

It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, it supplies the credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”

Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about how it relates to warfare. It is definitely worth your time to read.

What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)

Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. While noting that his definition was too narrow for the purposes of her inquiry, the historical relationship between lethality, casualties, and dispersion on the battlefield Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)

Simpkin on the Long-Term Effects of Firepower Dominance

To follow on my earlier post introducing British military theorist Richard Simpkin’s foresight in detecting trends in 21st Century warfare, I offer this paragraph, which immediately followed the ones I quoted:

Briefly and in the most general terms possible, I suggest that the long-term effect of dominant firepower will be threefold. It will disperse mass in the form of a “net” of small detachments with the dual role of calling down fire and of local quasi-guerrilla action. Because of its low density, the elements of this net will be everywhere and will thus need only the mobility of the boot. It will transfer mass, structurally from the combat arms to the artillery, and in deployment from the direct fire zone (as we now understand it) to the formation and protection of mobile fire bases capable of movement at heavy-track tempo (Chapter 9). Thus the third effect will be to polarise mobility, for the manoeuvre force still required is likely to be based on the rotor. This line of thought is borne out by recent trends in Soviet thinking on the offensive. The concept of an operational manoeuvre group (OMG) which hives off raid forces against C3 and indirect fire resources is giving way to more fluid and discontinuous manoeuvre by task forces (“air-ground assault groups” found by “shock divisions”) directed onto fire bases—again of course with an operational helicopter force superimposed. [Simpkin, Race To The Swift, p. 169]

It seems to me that in the mid-1980s Simpkin predicted the emergence of modern anti-access/area denial (A2/AD) defensive systems with reasonable accuracy, as well as the evolving thinking on the part of the U.S. military as to how to operate against them.

Simpkin’s vision of task forces employing “fluid and discontinuous manoeuvre” at operational depths to attack long-range precision firebases (task forces more closely resembling Russian/Soviet OMGs than rotary-wing “air-ground assault group” forces, however) appears similar to emerging Army thinking about future multi-domain operations. (It’s likely that Douglas MacGregor’s Reconnaissance Strike Group concept more closely fits that bill.)

One thing he missed was his belief that rotary-wing combat forces would supplant armored forces as the primary deep-operations combat arm. However, drone swarms might conceivably take the place in Simpkin’s operational construct that he allotted to heliborne forces. Drones have two primary advantages over manned helicopters: they are far cheaper, and they are far less vulnerable to enemy fires. With their unique capacity to blend mass and fires, drones could form the deep-strike operational hammer that Simpkin saw rotary-wing forces providing.

Just as interesting was Simpkin’s anticipation of the growing importance of information and electronic warfare in these environments. More on that later.

Richard Simpkin on 21st Century Trends in Mass and Firepower

Anvil of “troops” vs. anvil of fire. (Richard Simpkin, Race To The Swift: Thoughts on Twenty-First Century Warfare, Brassey’s: London, 1985, p. 51)

For my money, one of the most underrated analysts and theorists of modern warfare was the late Brigadier Richard Simpkin. A retired British Army officer and World War II veteran, Simpkin helped design the Chieftain tank in the 1960s and 1970s. He is best known for his series of books analyzing Soviet and Western military theory and doctrine. His magnum opus was Race To The Swift: Thoughts on Twenty-First Century Warfare, published in 1985. A brilliant blend of military history, insightful analysis of tactics and technology as well as operations and strategy, and Simpkin’s idiosyncratic wit, the book grows more prescient by the year.

Some of Simpkin’s analysis has not aged well, such as the focus on the NATO/Soviet confrontation in Central Europe, and a bold prediction that rotary wing combat forces would eventually supplant tanks as the primary combat arm. However, it would be difficult to find a better historical review of the role of armored forces in modern warfare and how trends in technology, tactics, and doctrine are interacting with strategy, policy, and politics to change the character of warfare in the 21st Century.

To follow on my previous post on the interchangeability of fire (which I gleaned from Simpkin, of course), I offer this nugget on how increasing weapons lethality would affect 21st Century warfare, written from the perspective of the mid 1980s:

While accidents of ground will always provide some kind of cover, the effect of modern firepower on land force tactics is equally revolutionary. Just as we saw in Part 2 how the rotary wing may well turn force structures inside out, firepower is already turning tactical concepts inside out, by replacing the anvil of troops with an anvil of fire (Fig. 5, page 51)*. The use of combat troops at high density to hold ground or to seize it is already likely to prove highly costly, and may soon become wholly unprofitable. The interesting question is what effect the dominance of firepower will have at operational level.

One school of thought, to which many defence academics on both sides of the Atlantic subscribe, is that it will reduce mobility and bring about a return to positional warfare. The opposite view is that it will put a premium on elusiveness, increasing mobility and reducing mass. On analysis, both these opinions appear rather simplistic, mainly because they ignore the interchangeability of troops and fire…—in other words the equivalence or complementarity of the movement of troops and the massing of fire. They also underrate the part played by manned and unmanned surveillance, and by communication. Another factor, little understood by soldiers and widely ignored, is the weight of fire a modern fast jet in its strike configuration, flying a lo-lo-lo profile, can put down very rapidly wherever required. With modern artillery and air support, a pair of eyes backed up by an unjammable radio and perhaps a thermal imager becomes the equivalent of at least a (company) combat team, perhaps a battle group. [Simpkin, Race To The Swift, pp. 168-169]

Sound familiar? I will return to Simpkin’s insights in future posts, but I suggest you all snatch up a copy of Race To The Swift for yourselves.

* See above.

Interchangeability Of Fire And Multi-Domain Operations

Soviet “forces and resources” chart. [Richard Simpkin, Deep Battle: The Brainchild of Marshal Tukhachevskii (Brassey’s: London, 1987) p. 254]

With the emergence of the importance of cross-domain fires in the U.S. effort to craft a joint doctrine for multi-domain operations, there is an old military concept to which developers should give greater consideration: interchangeability of fire.

This is an idea that British theorist Richard Simpkin traced back to 19th century Russian military thinking, which referred to it then as the interchangeability of shell and bayonet. Put simply, it was the view that artillery fire and infantry shock had equivalent and complementary effects against enemy troops and could be substituted for one another as circumstances dictated on the battlefield.

The concept evolved during the development of the Russian/Soviet operational concept of “deep battle” after World War I to encompass the interchangeability of fire and maneuver. In Soviet military thought, the battlefield effects of fires and the operational maneuver of ground forces were equivalent and complementary.

This principle continues to shape contemporary Russian military doctrine and practice, which is, in turn, influencing U.S. thinking about multi-domain operations. In fact, the idea is not new to Western military thinking at all. Maneuver warfare advocates adopted the concept in the 1980s, but it never found its way into official U.S. military doctrine.

An Idea Whose Time Has Come. Again.

So why should U.S. military doctrine developers take another look at interchangeability now? First, the increasing variety and ubiquity of long-range precision fire capabilities are forcing them to address the changing relationship between mass and fires on multi-domain battlefields. After spending a generation waging counterinsurgency and essentially outsourcing responsibility for operational fires to the U.S. Air Force and U.S. Navy, both the U.S. Army and U.S. Marine Corps are scrambling to come to grips with the way technology is changing the character of land operations. All of the services are at the very beginning of assessing the impact of drone swarms—which are themselves interchangeable blends of mass and fires—on combat.

Second, the rapid acceptance and adoption of the idea of cross-domain fires has carried along with it an implicit acceptance of the interchangeability of the effects of kinetic and non-kinetic (i.e. information, electronic, and cyber) fires. This alone is already forcing U.S. joint military thinking to integrate effects into planning and decision-making.

The key component of interchangeability is effects. Inherent in it is acceptance of the idea that combat forces have effects on the battlefield that go beyond mere physical lethality, i.e., the impact of fire or shock on a target. U.S. Army doctrine recognizes three effects of fires: destruction, neutralization, and suppression. Russian and maneuver warfare theorists hold that these same effects can be achieved through operational maneuver. The notion of interchangeability offers a very useful way of thinking about how to effectively integrate the lethality of mass and fires on future battlefields.

But Wait, Isn’t Effects A Four-Letter Word?

There is a big impediment to incorporating interchangeability into U.S. military thinking, however, and that is the decidedly ambivalent attitude of the U.S. land warfare services toward thinking about non-tangible effects in warfare.

As I have pointed out before, the U.S. Army (at least) has no effective way of assessing the effects of fires on combat, cross-domain or otherwise, because it has no real doctrinal methodology for calculating combat power on the battlefield. Army doctrine conceives of combat power almost exclusively in terms of capabilities and functions, not effects. In Army thinking, a combat multiplier is increased lethality in the form of additional weapons systems or combat units, not the intangible effects of operational or moral (human) factors on combat. For example, suppression may be a long-standing element in doctrine, but the Army still does not really have a clear idea of what causes it or what battlefield effects it really has.

In the wake of the 1990-91 Gulf War and the ensuing “Revolution in Military Affairs,” the U.S. Air Force led the way in thinking about the effects of lethality on the battlefield and how it should be leveraged to achieve strategic ends. It was the motivating service behind the development of a doctrine of “effects-based operations,” or EBO, in the early 2000s.

However, in 2008, the commander of U.S. Joint Forces Command, U.S. Marine Corps General (and current Secretary of Defense) James Mattis, ordered his command to no longer “use, sponsor, or export” EBO or related concepts and terms, the underlying principles of which he deemed to be “fundamentally flawed.” This effectively eliminated EBO from joint planning and doctrine. While Joint Forces Command was disbanded in 2011 and EBO thinking remains part of Air Force doctrine, Mattis’s decree pretty clearly showed what the U.S. land warfare services think about battlefield effects.

Artillery Effectiveness vs. Armor (Part 5-Summary)

U.S. Army 155mm field howitzer in Normandy. [padresteve.com]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

Table IX shows the distribution of cause of loss by type of armored vehicle. From the distribution it might be inferred that better-protected armored vehicles may be less vulnerable to artillery attack. Nevertheless, the heavily armored vehicles still suffered a minimum loss of 5.6 percent due to artillery. Unfortunately, the sample size for heavy tanks was very small, 18 of 980 cases, or only 1.8 percent of the total.

The data are limited at this time to the seven cases.[6] Further research is necessary to expand the data sample so as to permit proper statistical analysis of the effectiveness of artillery versus tanks.

NOTES

[18] Heavy armor includes the KV-1, KV-2, Tiger, and Tiger II.

[19] Medium armor includes the T-34, Grant, Panther, and Panzer IV.

[20] Light armor includes the T-60, T-70, Stuart, armored cars, and armored personnel carriers.