
Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he believed technological innovation was crucial, he did not think that technology in itself decided success or failure on the battlefield. As he wrote in a work published posthumously in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Drawing on a 1964 study he conducted for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in The Evolution of Weapons and Warfare (1980), from which the quotes below are taken.)

Since antiquity, military technological development has produced weapons of ever-increasing lethality. The rate of increase has been particularly dramatic since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?
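One way to make the apparent paradox concrete is a toy calculation. The figures below are invented placeholders, not Dupuy's data; the point is only that if the battlefield area over which troops disperse grows faster than the lethality of their weapons, the expected daily casualty rate falls even as the weapons grow deadlier.

```python
# Toy model of the lethality/casualty-rate paradox. All figures are
# invented placeholders: "lethality" is a notional index of killing
# capacity per day, "dispersion" a notional index of battlefield
# area per soldier.
eras = [
    ("pike and musket era", 1.0, 1.0),
    ("rifled musket era", 10.0, 50.0),
    ("machine gun era", 500.0, 5_000.0),
    ("combined arms era", 50_000.0, 1_000_000.0),
]

for era, lethality, dispersion in eras:
    # Exposure falls as dispersion rises, so expected casualties per
    # day scale roughly with lethality divided by dispersion here.
    casualty_index = lethality / dispersion
    print(f"{era:>20}: daily casualty index {casualty_index:.4f}")
```

Run it and the casualty index declines across eras even as raw lethality climbs by orders of magnitude; whether the real historical numbers behave this way is exactly what Dupuy's data addressed.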

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties:

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has more often been characterized by discontinuity between weapons and tactical systems than by effective continuity.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself, absent a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and materiel. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

What Does Lethality Mean In Warfare?

In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, aspects of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”

Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”

After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”

She uses this as a jumping-off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:

Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.

To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”

It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, the strategy supplies the credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”

Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about the way it relates to warfare. It is definitely worth your time to read.

What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)

Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. While noting that his definition was too narrow for the purposes of her inquiry, the historical relationship between lethality, casualties, and dispersion on the battlefield Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)

Simpkin on the Long-Term Effects of Firepower Dominance

To follow on my earlier post introducing British military theorist Richard Simpkin’s foresight in detecting trends in 21st Century warfare, I offer this paragraph, which immediately followed the ones I quoted:

Briefly and in the most general terms possible, I suggest that the long-term effect of dominant firepower will be threefold. It will disperse mass in the form of a “net” of small detachments with the dual role of calling down fire and of local quasi-guerrilla action. Because of its low density, the elements of this net will be everywhere and will thus need only the mobility of the boot. It will transfer mass, structurally from the combat arms to the artillery, and in deployment from the direct fire zone (as we now understand it) to the formation and protection of mobile fire bases capable of movement at heavy-track tempo (Chapter 9). Thus the third effect will be to polarise mobility, for the manoeuvre force still required is likely to be based on the rotor. This line of thought is borne out by recent trends in Soviet thinking on the offensive. The concept of an operational manoeuvre group (OMG) which hives off raid forces against C3 and indirect fire resources is giving way to more fluid and discontinuous manoeuvre by task forces (“air-ground assault groups” found by “shock divisions”) directed onto fire bases—again of course with an operational helicopter force superimposed. [Simpkin, Race To The Swift, p. 169]

It seems to me that in the mid-1980s Simpkin predicted, with reasonable accuracy, the emergence of modern anti-access/area denial (A2/AD) defensive systems, as well as the evolving thinking on the part of the U.S. military as to how to operate against them.

Simpkin’s vision of task forces (more closely resembling Russian/Soviet OMGs than rotary wing “air-ground assault groups,” however) employing “fluid and discontinuous manoeuvre” at operational depths to attack long-range precision firebases appears similar to emerging Army thinking about future multidomain operations. (Douglas MacGregor’s Reconnaissance Strike Group concept arguably fits that bill even more closely.)

One thing he missed was his belief that rotary wing combat forces would supplant armored forces as the primary deep operations combat arm. However, drone swarms might conceivably take the place in Simpkin’s operational construct that he allotted to heliborne forces. Drones have two primary advantages over manned helicopters: they are far cheaper and they are far less vulnerable to enemy fires. With their unique capacity to blend mass and fires, drones could form the deep strike operational hammer that Simpkin saw rotary wing forces providing.

Just as interesting was Simpkin’s anticipation of the growing importance of information and electronic warfare in these environments. More on that later.

Richard Simpkin on 21st Century Trends in Mass and Firepower

Anvil of “troops” vs. anvil of fire. (Richard Simpkin, Race To The Swift: Thoughts on Twenty-First Century Warfare, London: Brassey’s, 1985, p. 51)

For my money, one of the most underrated analysts and theorists of modern warfare was the late Brigadier Richard Simpkin. A retired British Army officer and World War II veteran, Simpkin helped design the Chieftain tank in the 1960s and 1970s. He is best known for his series of books analyzing Soviet and Western military theory and doctrine. His magnum opus was Race To The Swift: Thoughts on Twenty-First Century Warfare, published in 1985. A brilliant blend of military history, insightful analysis of tactics, technology, operations, and strategy, and Simpkin’s idiosyncratic wit, the book’s observations are becoming more prescient by the year.

Some of Simpkin’s analysis has not aged well, such as the focus on the NATO/Soviet confrontation in Central Europe, and a bold prediction that rotary wing combat forces would eventually supplant tanks as the primary combat arm. However, it would be difficult to find a better historical review of the role of armored forces in modern warfare and how trends in technology, tactics, and doctrine are interacting with strategy, policy, and politics to change the character of warfare in the 21st Century.

To follow on my previous post on the interchangeability of fire (which I gleaned from Simpkin, of course), I offer this nugget on how increasing weapons lethality would affect 21st Century warfare, written from the perspective of the mid-1980s:

While accidents of ground will always provide some kind of cover, the effect of modern firepower on land force tactics is equally revolutionary. Just as we saw in Part 2 how the rotary wing may well turn force structures inside out, firepower is already turning tactical concepts inside out, by replacing the anvil of troops with an anvil of fire (Fig. 5, page 51)*. The use of combat troops at high density to hold ground or to seize it is already likely to prove highly costly, and may soon become wholly unprofitable. The interesting question is what effect the dominance of firepower will have at operational level.

One school of thought, to which many defence academics on both sides of the Atlantic subscribe, is that it will reduce mobility and bring about a return to positional warfare. The opposite view is that it will put a premium on elusiveness, increasing mobility and reducing mass. On analysis, both these opinions appear rather simplistic, mainly because they ignore the interchangeability of troops and fire…—in other words the equivalence or complementarity of the movement of troops and the massing of fire. They also underrate the part played by manned and unmanned surveillance, and by communication. Another factor, little understood by soldiers and widely ignored, is the weight of fire a modern fast jet in its strike configuration, flying a lo-lo-lo profile, can put down very rapidly wherever required. With modern artillery and air support, a pair of eyes backed up by an unjammable radio and perhaps a thermal imager becomes the equivalent of at least a (company) combat team, perhaps a battle group. [Simpkin, Race To The Swift, pp. 168-169]

Sound familiar? I will return to Simpkin’s insights in future posts, but I suggest you all snatch up a copy of Race To The Swift for yourselves.

* See above.

“Quantity Has A Quality All Its Own”: How Robot Swarms Might Change Future Combat

Humans vs. machines in the film Matrix Revolutions (2003) [Screencap by The Matrix Wiki]

Yesterday, Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security, and prolific writer on the future of robotics and artificial intelligence, posted a fascinating argument on Twitter regarding swarms and mass in future combat.

His thread was in response to an article by Shmuel Shmuel posted on War on the Rocks, which made the case that the same computer processing technology that enables robotic vehicles, combined with old-fashioned kinetic weapons (e.g., anti-aircraft guns), offers a cost-effective counter to swarms.

Scharre agreed that robotic drones are indeed vulnerable to such countermeasures, but made a counterpoint in response.

He then went on to contend that robotic swarms offer the potential to reestablish the role of mass in future combat. Mass, either in terms of numbers of combatants or volume of firepower, has played a decisive role in most wars. As the aphorism usually credited to Josef Stalin goes, “quantity has a quality all its own.”

Scharre observed that the United States went in a different direction in its post-World War II approach to warfare, adopting instead “offset” strategies that sought to leverage superior technology to balance against the mass militaries of the Communist bloc.

While offset strategies were effective during the Cold War, Scharre concurs with the arguments that they are becoming far too expensive and may ultimately prove self-defeating.
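A toy cost-exchange calculation shows the arithmetic behind that concern. All prices below are hypothetical, chosen purely for illustration: even if every defensive shot kills a drone, a cheap enough swarm imposes a losing economic exchange on the defender.

```python
# Hypothetical cost-exchange sketch; none of these prices are real figures.
DRONE_COST = 20_000          # notional cost per expendable attacking drone ($)
INTERCEPTOR_COST = 150_000   # notional cost per defensive shot ($)
SHOTS_PER_KILL = 1.5         # notional shots expended per drone destroyed

def defender_cost_ratio(drone_cost: float,
                        interceptor_cost: float,
                        shots_per_kill: float) -> float:
    """Defender dollars spent per attacker dollar destroyed."""
    return (interceptor_cost * shots_per_kill) / drone_cost

ratio = defender_cost_ratio(DRONE_COST, INTERCEPTOR_COST, SHOTS_PER_KILL)
print(f"Defender spends ${ratio:.2f} for every $1.00 of swarm destroyed")
# A ratio well above 1.0 is the sense in which a purely technological
# counter to cheap mass can become economically self-defeating.
```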

To avoid this fate, Scharre contends that the U.S. military should look to cheap, numerous robotic systems as a renewed source of mass. The entire thread is well worth reading.

Trevor Dupuy would have agreed with much of what Scharre asserts. He identified a relationship between increasing weapon lethality and battlefield dispersion that goes back to the 17th century. Dupuy believed that the primary factor driving this relationship was the human response to fear in a lethal environment, with soldiers dispersing in depth and frontage on battlefields in order to survive weapons of ever increasing destructiveness.

TDI Friday Read: Lethality, Dispersion, And Mass On Future Battlefields

Robots might very well change that equation. Whether autonomous or “human in the loop,” robotic swarms do not feel fear and are inherently expendable. Cheaply produced robots might very well provide sufficient augmentation to human combat units to restore the primacy of mass in future warfare.

Questioning The Validity Of The 3-1 Rule Of Combat

Canadian soldiers going “over the top” during an assault in the First World War. [History.com]
[This post was originally published on 1 December 2017.]

How many troops are needed to successfully attack or defend on the battlefield? There is a long-standing rule of thumb that holds that an attacker requires a 3-1 preponderance over a defender in order to win. The aphorism is so widely accepted that few have questioned whether it is actually true.

Trevor Dupuy challenged the validity of the 3-1 rule on empirical grounds. He could find no historical substantiation to support it. In fact, his research on the question of force ratios suggested that there was a limit to the value of numerical preponderance on the battlefield.
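A minimal sketch of the kind of empirical test involved might look like the following. The engagement records are entirely hypothetical placeholders, not TDI data; they serve only to show the method of binning historical outcomes by force ratio.

```python
# If the 3-1 rule held, attacker win rates should jump sharply at
# ratios of 3:1 and above. The records below are hypothetical.
engagements = [
    # (attacker strength, defender strength, attacker_won)
    (30_000, 12_000, True),
    (18_000,  9_000, False),
    (45_000, 11_000, True),
    (25_000, 20_000, True),
    (60_000, 15_000, False),
    (14_000, 12_000, True),
]

bins = {"< 1.5:1": [], "1.5-3:1": [], ">= 3:1": []}
for attacker, defender, won in engagements:
    ratio = attacker / defender
    if ratio < 1.5:
        bins["< 1.5:1"].append(won)
    elif ratio < 3.0:
        bins["1.5-3:1"].append(won)
    else:
        bins[">= 3:1"].append(won)

for label, outcomes in bins.items():
    if outcomes:
        win_rate = 100 * sum(outcomes) / len(outcomes)
        print(f"force ratio {label}: attacker won {win_rate:.0f}% "
              f"of {len(outcomes)} engagements")
```

Dupuy ran this kind of comparison against real historical engagements and found no clean threshold at 3-1.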

TDI President Chris Lawrence has also challenged the 3-1 rule in his own work on the subject.

The validity of the 3-1 rule is no mere academic question. It underpins a great deal of U.S. military policy and warfighting doctrine. Yet, the only time the matter was seriously debated was in the 1980s with reference to the problem of defending Western Europe against the threat of Soviet military invasion.

It is probably long past due to seriously challenge the validity and usefulness of the 3-1 rule again.

Dupuy/DePuy

Trevor N. Dupuy (1916-1995) and General William E. DePuy (1919-1992)

I first became acquainted with Trevor Dupuy and his work after seeing an advertisement for his book Numbers, Predictions and War in Simulation Publications, Inc.’s (SPI) Strategy & Tactics wargaming magazine back in the late 1970s. Although Dupuy was already a prolific military historian, this book brought him to the attention of an audience outside the insular world of the U.S. government military operations research and analysis community.

Ever since, however, Trevor Dupuy has occasionally been confused with one of his contemporaries, U.S. Army General William E. DePuy. DePuy was notable in his own right, primarily as the first commander of the U.S. Army Training and Doctrine Command (TRADOC) from 1973 to 1977, and as one of the driving intellectual forces behind the effort to reorient the U.S. Army back to conventional warfare following the Vietnam War.

The two men had a great deal in common. They were born within three years of one another and both served in the U.S. Army during World War II. Both possessed an analytical bent and each made significant contributions to institutional and public debates about combat and warfare in the late 20th century. Given that they tilled the same topical fields at about the same time, it does not seem too odd that they were mistaken for each other.

Perhaps the most enduring link between the two men has been a shared name, though they spelled and pronounced it differently. The surname Dupuy is of medieval French origin and has been traced back to Le Puy, France, in the province of Languedoc. It has several variant spellings, including DePuy and Dupuis. The traditional French pronunciation is “do-PWEE.” This is how Trevor Dupuy said his name.

However, following French immigration to North America beginning in the 17th century, the name evolved an anglicized spelling, DePuy (or sometimes Depew), and pronunciation, “deh-PEW.” This is the way General DePuy said it.

It is this difference in pronunciation that has personally tipped me off to the occasional confusion of identities in conversation. Though rare these days, the mix-up still occurs. While this is a historical footnote, it seems worth gently noting that Trevor Dupuy and William DePuy were two different people.

TDI Friday Read: Measuring The Effects of Combat in Cities

Between 2001 and 2004, TDI undertook a series of studies on the effects of urban combat for the U.S. Army Center for Army Analysis (CAA). These studies examined a total of 304 cases of urban combat at the divisional and battalion level that occurred between 1942 and 2003, as well as 319 cases of concurrent non-urban combat for comparison.

The primary findings of Phases I-III of the study were as follows (a sketch of how such urban/non-urban comparisons can be computed appears after the list):

  • Urban terrain had no significantly measurable influence on the outcome of battle.
  • Attacker casualties in the urban engagements were less than in the non-urban engagements and the casualty exchange ratio favored the attacker as well.
  • One of the primary effects of urban terrain was to slow opposed advance rates. The average advance rate in urban combat was one-half to one-third that of non-urban combat.
  • There is little evidence that combat operations in urban terrain resulted in a higher linear density of troops.
  • Armor losses in urban terrain were the same as, or lower than, armor losses in non-urban terrain. In some cases armor losses appear to have been significantly lower in urban than in non-urban terrain.
  • Urban terrain did not significantly influence the force ratio required to achieve success or effectively conduct combat operations.
  • Overall, it appears that urban terrain was no more stressful a combat environment during actual combat operations than was non-urban terrain.
  • Overall, the expenditure of ammunition in urban operations was not greater than that in non-urban operations. There is no evidence that the expenditure of other consumable items (rations; water; or fuel, oil, or lubricants) was significantly different in urban as opposed to non-urban combat.
  • Since advance rates in urban combat were significantly reduced, it is clear that these two effects (advance rates and battle duration) were interrelated. The primary impact of urban combat appears to have been to slow the tempo of operations.
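As noted above, here is a minimal sketch of that kind of comparison. The engagement records below are hypothetical placeholders, not the actual 304 urban and 319 non-urban cases from the TDI studies; the point is only the method of comparing means across the two terrain categories.

```python
# Hypothetical records: (is_urban, advance rate in km/day,
# attacker percent casualties per day). Not TDI data.
from statistics import mean

engagements = [
    (True,  1.2, 0.8),
    (True,  0.9, 0.6),
    (True,  1.5, 0.7),
    (False, 3.4, 1.1),
    (False, 2.8, 0.9),
    (False, 4.1, 1.3),
]

for label, is_urban in (("urban", True), ("non-urban", False)):
    advances   = [adv for u, adv, _ in engagements if u == is_urban]
    casualties = [cas for u, _, cas in engagements if u == is_urban]
    print(f"{label:>9}: mean advance {mean(advances):.1f} km/day, "
          f"mean attacker casualties {mean(casualties):.2f} %/day")
```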

In order to broaden and deepen understanding of the effects of urban combat, TDI proposed several follow-up studies. To date, none of these have been funded:

  1. Conduct a detailed study of the Battle of Stalingrad. Stalingrad may represent one of the most intense examples of urban combat, and so may provide some clues to the causes of the urban outliers.
  2. Conduct a detailed study of battalion/brigade-level urban combat. This would begin with an analysis of battalion-level actions from the first two phases of this study (European Theater of Operations and Eastern Front), added to the battalion-level actions completed in this third phase of the study. Additional battalion-level engagements would be added as needed.
  3. Conduct a detailed study of the outliers in an attempt to discover the causes for the atypical nature of these urban battles.
  4. Conduct a detailed study of urban warfare in an unconventional warfare setting.

Details of the Phase I-III study reports and conclusions can be found below:

Measuring The Effects Of Combat In Cities, Phase I

Measuring the Effects of Combat in Cities, Phase II – part 1

Measuring the Effects of Combat in Cities, Phase II – part 2

Measuring the Effects of Combat in Cities, Phase III – part 1

Measuring the Effects of Combat in Cities, Phase III – part 2

Measuring the Effects of Combat in Cities, Phase III – part 2.1

Measuring the Effects of Combat in Cities, Phase III – part 3

Urban Phase IV – Stalingrad

Urban Combat in War by Numbers

Another Look At The Role Of Russian Mercenaries In Syria

Russian businessman Yevgeny Prigozhin and Russian President Vladimir Putin. Prigozhin—who reportedly has ties to Putin, the Russian Ministry of Defense, and Russian mercenaries—was indicted by Special Counsel Robert Mueller on 16 February 2018 for allegedly funding and guiding a Russian government effort to interfere with the 2016 U.S. presidential election. [Alexei Druzhinin/AP]

As I recently discussed, many details remain unclear regarding the 7 February 2018 engagement in Deir Ezzor, Syria, between Russian mercenaries, Syrian government troops, and militia fighters on one side, and U.S. Special Operations Forces, U.S. Marines, and their partnered Kurdish and Syrian militia forces on the other. Aside from questions as to just how many Russians participated and how many were killed, the biggest mystery is why the attack occurred at all.

Kimberly Marten, chair of the Political Science Department at Barnard College and director of the Program on U.S.-Russia Relations at Columbia University’s Harriman Institute, takes another look at this in a new article on War on the Rocks.

Why did Moscow initially deny any Russians’ involvement, and then downplay the casualty numbers? And why didn’t the Russian Defense Ministry stop the attackers from crossing into the American zone, or warn them about the likelihood of a U.S. counterstrike? Western media have offered two contending explanations: that Wagner acted without the Kremlin’s authorization, or that this was a Kremlin-approved attack that sought to test Washington while maintaining plausible deniability. But neither explanation fully answers all of the puzzles raised by the publicly available evidence, even though both help us understand more generally the opaque relationship between the Russian state and these forces.

After reviewing what is known about the relationship between the Russian government and the various Russian mercenary organizations, Marten proposes another explanation.

A different, or perhaps additional, rationale takes into account the ruthless infighting between Russian security forces that goes on regularly, while Russian President Vladimir Putin looks the other way. Russian Defense Ministry motives in Deir al-Zour may actually have centered on domestic politics inside Russia — and been directed against Putin ally and Wagner backer Yevgeny Prigozhin.

She takes a detailed look at the institutional relationships in question and draws a disquieting conclusion:

We may never have enough evidence to solve definitively the puzzles of Russian behavior at Deir al-Zour. But an understanding of Russian politics and security affairs allows us to better interpret the evidence we do have. Since Moscow’s employment of groups like Wagner appears to be a growing trend, U.S. and allied forces should consider the possibility that in various locations around the world, they might end up inadvertently, and dangerously, ensnared in Russia’s internal power struggles.

As with the Institute for the Study of War’s contention that the Russians are deliberately testing U.S. resolve in the Middle East, Marten’s interpretation that the actions of various Russian mercenary groups might be the result of internal Russian politics points to the prospect of further military adventurism only loosely connected to Russian foreign policy direction. Needless to say, the implications of this are ominous in a region of the world already beset by conflict and regional and international competition.

Back To The Future: The Return Of Sieges To Modern Warfare

Ruins of the northern Syrian city of Aleppo, which was besieged by Syrian government forces from July 2012 to December 2016. [Getty Images]

U.S. Army Major Amos Fox has published a very intriguing analysis in the Association of the U.S. Army’s Institute of Land Warfare Landpower Essay series, titled “The Reemergence of the Siege: An Assessment of Trends in Modern Land Warfare.” Building upon some of his previous work (here and here), Fox makes a case that sieges have again become a salient feature in modern warfare: “a brief survey of history illustrates that the siege is a defining feature of the late 20th and early 21st centuries; perhaps today is the siege’s golden era.”

Noting that neither U.S. Army nor joint doctrine currently addresses sieges, Fox adopts the dictionary definition: “A military blockade of a city or fortified place to compel it to surrender, or a persistent or serious attack.” He also draws a distinction between a siege and siege warfare; “siege warfare implies a way of battle, whereas a siege implies one tool of many in the kitbag of warfare.” [original emphasis]

He characterizes modern sieges as follows:

The contemporary siege is a blending of the traditional definition with concentric attacks. The modern siege is not necessarily characterized by a blockade, but more by an isolation of an adversary through encirclement while maintaining sufficient firepower against the besieged to ensure steady pressure. The modern siege can be terrain-focused, enemy-focused or a blending of the two, depending on the action of the besieged and the goal of the attacker. The goal of the siege is either to achieve a decision, whether politically or militarily, or to slowly destroy the besieged.

He cites the siege of Sarajevo (1992-1996) as the first example of the modern phenomenon. Other cases include Grozny (1999-2000); Aleppo, Ghouta, Kobani, Raqqa, and Deir Ezzor in Syria (2012 to 2018); Mosul (2016-2017); and Ilovaisk, Second Donetsk Airport, and Debal’tseve in Ukraine (2014-present).

Fox notes that employing sieges carries significant risk. Most occur in urban areas. The restrictive nature of this terrain serves as a combat multiplier for inferior forces, allowing them to defend effectively against a much larger adversary. This can raise the potential military costs of conducting a siege beyond what an attacker is willing or able to afford.

Modern sieges also risk incurring significant political costs through collateral civilian deaths or infrastructure damage that could lead to a loss of international credibility or domestic support for governments that attempt them.

However, Fox identifies a powerful incentive that can override these disadvantages: when skillfully executed, a siege affords an opportunity for an attacker to contain and tie down defending forces, which can then be methodically destroyed. Despite the risks, he believes the apparent battlefield decisiveness of recent sieges means they will remain part of modern warfare.

Given modern sieges’ destructiveness and sharp impact on the populations on which they are waged, almost all actors (to include the United States) demonstrate a clear willingness—politically and militarily—to flatten cities and inflict massive suffering on besieged populations in order to capitalize on the opportunities associated with having their adversaries centralized.

Fox argues that sieges will be a primary tactic employed by proxy military forces, which are currently being used effectively by a variety of state actors in Eastern Europe and the Middle East. “[A]s long as intermediaries are doing the majority of fighting and dying within a siege—or holding the line for the siege—it is a tactic that will continue to populate current and future battlefields.”

This is an excellent analysis. Go check it out.