Tag: theory

Why Do Americans Hate Military Theory?

B.A. Friedman, On Tactics: A Theory of Victory in Battle (U.S. Naval Institute Press, 2017)

In his new book, On Tactics: A Theory of Victory in Battle, Brett Friedman wrote:

[The] lack of strategic education has produced a United States military adrift. A cottage industry of shallow military thought attached itself to the Department of Defense like a parasite, selling “new” concepts that ranged from the specious (such as the RMA and effects-based operations), to the banal (like “hybrid” and “asymmetric” warfare), to the nonsensical (like 4th Generation Warfare and Gray Zone/Wars). An American officer corps, bereft of a solid understanding of strategic theory, seizes on concept after concept, seeking the next shiny silver bullet that it can fire to kill the specter of strategic disarray.

The U.S. military establishment’s general disregard for and disinterest in theorizing about war and warfare is not new. Trevor Dupuy was also critical of the American approach to thinking about theory, especially its superficial appreciation of the value of military history. As he wrote in Understanding War: History and Theory of Combat (1987):

In general, and with only a few significant exceptions, until very recently American military theorists have shown little interest in the concept of a comprehensive theory or science of combat. While most Americans who think about such things are strong believers in the application of science to war, they seem not to believe, paradoxically, that waging war can be scientific, but that it is an art rather than a science. Even scientists concerned with and involved in military affairs, who perhaps overemphasize the role of science in war, also tend to believe that war is a random process conducted by unpredictable human beings, and thus not capable of being fitted into a scientific theoretical structure. [p. 51]

Like Friedman, Dupuy placed a good deal of the blame for this on the way U.S. military officers are instructed. He saw a distinct difference between the approach taken in the U.S. and the way military history was studied and used in the (then) Soviet Union. In a 1989 conference paper, he contended that:

The United States Armed Forces pay lip service to the importance of military history. Officers are urged to read military history, but given little guidance on how military history can be really useful to them. The fundamental difference between the Soviet approach and the American approach, as I see it, is that the American officer is invited (but not really encouraged) to be a military history dilettante. The Soviets seriously study, and use military history. Figure 1 summarizes the differences in approaches of the U.S. and the Soviet armed forces to military history analysis.

Dupuy devoted an entire chapter of Understanding War to the Soviet scientific approach to the study and application of warfare. There was a time when the mention of Soviet/Russian military theory would have produced patronizing smirks from American commentators. In truth, Russian military theorizing has a long and robust tradition, one much deeper than its American counterpart. Given the recent success Russia has had in leveraging its national security capabilities to influence favorable geopolitical outcomes, it might be that those theories are useful after all. One need not subscribe to the Soviet scientific approach to warfare to acknowledge the value of a scientific approach to studying warfare.

Multi-Domain Battle And The Maneuver Warfare Debate

The recent commitment by the U.S. Army and Marine Corps to developing the concept of multi-domain battle led me to wonder: is this going to re-ignite the currently-dormant-but-unresolved debate over maneuver vs. attrition in American land warfare thinking? Will long-range precision fires and cross-domain targeting change the relationship between fire and maneuver in modern combat tactics? With an emphasis on fires of the kinetic and non-kinetic variety as the remedy to the challenge of anti-access/area denial capabilities and strategies, are multi-domain warfare theorists swinging the pendulum to the side of attrition?

What Is The Role of Maneuver In Multi-Domain Battle?

Consider this description of the Army’s conception of multi-domain battle offered by General David G. Perkins, Commander, United States Army Training and Doctrine Command:

[F]uture multifunctional Army fires units will provide the joint task force with a single unit combining surface-to-surface (land and maritime), surface-to-air, electromagnetic, and cyberspace cross-domain fires. These fires formations integrate with emerging Navy, Air Force, Marine and special operations forces capabilities to provide the commander multiple resilient options for striking the enemy and covering joint force maneuver.

At the same time, ground forces with improved maneuver and close combat capabilities allow the joint force to overwhelm or infiltrate dispersed enemy formations concealed from joint targeting and fires. A joint force containing effective ground forces requires the enemy to expose their dispersed forces to defeat in ground combat, face destruction from joint fires if they concentrate, or the loss of key terrain if they displace.

Future Army and Marine tactical ground maneuver units will combine sufficient cross-domain fires capability to enable decentralized ground maneuver and the creation of durable domain windows for the joint force with the mobility, lethality and protection to close with and destroy enemy ground forces in close combat. With combined arms pushed to the lowest practical level, these units will be flexible and resilient with the ability to operate in degraded conditions and with sufficient endurance to sustain losses and continue operations for extended periods and across wide areas.

The Army clearly sees maneuver as an integral part of multi-domain battle, with an emphasis on closing with enemy forces to engage in close combat. However, it seems to me that the same technological changes that are prompting consideration of the new concept raise some questions:

  • What does close combat mean when ground maneuver elements can be brought under devastating surprise long-range precision fire barrages enabled by drone reconnaissance and cyber and information operations long before they close with enemy combat forces?
  • If even infantry squads are equipped with stand-off weapons, what is the future of close quarters combat?
  • Is the ability to take and hold ground an anachronism in anti-access/area-denial environments?
  • Will the purpose of maneuver be to force enemy ground maneuver elements to expose themselves to targeting by long-range precision fires? Or will maneuver mean movement to advantageous long-range precision firing positions, particularly if targeting across domains?
  • Is an emphasis on technological determinism reducing the capabilities of land combat units to just what they shoot?

The Maneuver Warfare Debate

Such questions seem sure to renew debates regarding the relationship between fire and maneuver in U.S. land warfare doctrine. The contemporary concept of maneuver warfare emerged in the early 1980s, as military and civilian practitioners and thinkers in the U.S. and the NATO countries came to grips with the challenges posed by Soviet military power in Europe. Inspired by the tactical and operational successes of the German Army during World War II, William Lind, John Boyd, Robert Leonhard, and Richard Simpkin, among others, drew upon a variety of American, British, German, and even Soviet sources to fashion a concept that established maneuver and attrition as distinct forms of warfare. In this telling, the First World War had been dominated by an overemphasis on the attritional effects of firepower, which yielded only bloody positional stalemate. In response, the Germans innovated new tactics to restore maneuver to the battlefield, which, when combined with tanks and aircraft, led to their spectacular “blitzkrieg” victories in World War II. Their adversaries learned and adapted in turn, and developed maneuver doctrines of their own that helped defeat the Germans.

Maneuver warfare theories informed development of the U.S. Army’s AirLand Battle concept and operational doctrine of the late 1980s. The U.S. Marine Corps also integrated maneuver warfare into its doctrine in the 1997 edition of its capstone manual, MCDP-1 Warfighting. The idea of a maneuver style of warfare had plenty of critics, however. By the early 1990s, the Army had settled for a balance between maneuver and firepower in its combat doctrine. Debates and discussions about deep operations persisted into the late 1990s, but were preempted in large measure by the shift to irregular warfare and counterinsurgency after September 11, 2001. U.S. land warfare doctrine did get a brief test during the invasion of Iraq in 2003, but the woefully outclassed Iraqi Army was quickly and decisively overwhelmed by American combat power, yielding few insights into future warfare against peer or near-peer opponents.

The last notable public exchange on this topic occurred in 2008 in Small Wars Journal. British defense writer and analyst William F. Owen argued that a distinction between maneuver and attrition “styles” of warfare was artificial and lacked intellectual rigor and historical support. Eric Walter, a contributor to U.S. Marine Corps doctrinal publications, conceded that existing maneuver warfare theorizing was “fuzzy” in some respects, but countered that the intellectual thinking behind it had nevertheless stimulated the U.S. military to sharpen its conception and conduct of warfare. The ensuing discussion thread fleshed out the respective perspectives, and the debate continues.

Despite the official enthusiasm of the Army and Marine Corps, there are many aspects of the concept of multi-domain warfare that will need to be worked out if it is to become a viable combat doctrine and not simply justification for development of new weapons. One task will be to overcome the suspicions of the sister services that it is merely a gambit in the ongoing interservice budget battles. (Similar skepticism dogs the associated Third Offset Strategy.) Developing a better sense of exactly how long-range precision fires, cyber and information operations, and other innovative technologies might affect ground combat would be a good place to start.

Military Effectiveness and Cheese-Eating Surrender Monkeys

The International Security Studies Forum (ISSF) has posted a roundtable review on H-Diplo of Jasen J. Castillo’s Endurance and War: The National Sources of Military Cohesion (Stanford, CA: Stanford University Press, 2014). As the introduction by Alexander B. Downes of The George Washington University lays out, there is a considerable political science literature that addresses the question of military effectiveness, or why some militaries are more effective combatants than others. Castillo focused on why some armies fight hard, even when faced with heavy casualties and the prospect of defeat, and why some become ineffective or simply collapse. The example most often cited in this context – as Downes and Castillo do – is the French Army. Why were the French routed so quickly in 1940 when they had fought so much harder and incurred far higher casualties in 1914? (Is this characterization of the French entirely fair? I’ll take a look at that question below.)

According to Downes, for his analysis Castillo defined military cohesion as staying power and battlefield performance. He identified two primary factors determining military cohesion: the persuasiveness of a regime’s ideology combined with its coercive powers, and the military’s ability to train its troops free from political interference. From this, Castillo drew two conclusions, one counterintuitive, the other in line with prevailing professional military thought.

  • “First, regimes that exert high levels of control over society—through a combination of an ideology that demands ‘unconditional loyalty’ (such as nationalism, communism, or fascism) and the power to compel recalcitrant individuals to conform—will field militaries with greater staying power than states with low levels of societal control.”
  • “Second, states that provide their military establishments with the autonomy necessary to engage in rigorous and realistic training will generate armies that fight in a determined yet flexible fashion.”

Based on his analysis, Castillo defines four military archetypes:

  • “Messianic militaries are the most fearsome of the lot. Produced by countries with high levels of regime control that give their militaries the autonomy to train, such as Nazi Germany, messianic militaries possess great staying power and superior battlefield performance.”
  • “Authoritarian militaries are also generated by nations with strong regime control over society, but are a notch below their messianic cousins because the regime systematically interferes in the military’s affairs. These militaries have strong staying power but are less nimble on the battlefield. The Red Army under Joseph Stalin is a good example.”
  • “Countries with low regime control but high military autonomy produce professional militaries. These militaries—such as the U.S. military in Vietnam—perform well in battle but gradually lose the will to fight as victory recedes into the distance.”
  • “Apathetic militaries, finally, are characteristic of states with both low regime control and low military autonomy, like France in 1940. These militaries fall apart quickly when faced with adversity.”

The discussion panel – Brendan Rittenhouse Green, (University of Cincinnati); Phil Haun (Yale University); Austin Long (Columbia University); and Caitlin Talmadge (The George Washington University) – reviewed Castillo’s work favorably. Their discussion and Castillo’s response are well worth the time to read.

Now, to the matter of France’s alleged “apathetic military.” The performance of the French Army in 1940 has earned the country the infamous reputation of being “cheese-eating surrender monkeys.” Is this really fair? Well, if measured in terms of France’s perseverance in post-World War II counterinsurgency conflicts, the answer is most definitely no.

As detailed in Chris Lawrence’s book America’s Modern Wars, TDI looked at the relationship between national cost of foreign interventions and the outcome of insurgencies. One method used to measure national burden was the willingness of intervening states to sustain casualties. TDI found a strong correlation between high levels of casualties to intervening states and the failure of counterinsurgency efforts.
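The casualty-burden measure itself is simple arithmetic: soldiers killed in a given year divided by national population, expressed per 100,000 people. A minimal sketch in Python, using purely hypothetical figures rather than TDI's historical data:

```python
# Hedged sketch of the casualty-burden metric discussed above: soldiers killed
# in a year per 100,000 of the intervening state's population. The numbers
# below are hypothetical placeholders, not figures from TDI's database.

def casualty_burden(killed_in_year: int, national_population: int) -> float:
    """Killed per 100,000 population for a single year of a conflict."""
    return killed_in_year / (national_population / 100_000)

# e.g., 2,000 soldiers killed in one year by a country of 45 million people
print(round(casualty_burden(2_000, 45_000_000), 2))  # -> 4.44 per 100,000
```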

Among the cases in TDI’s database of post-World War II insurgencies, interventions, and peace-keeping operations, the French were the most willing, by far, to sustain the burden of casualties waging counterinsurgencies. In all but one of 17 years of continuous post-World War II conflict in Indochina and Algeria, democratic France’s “apathetic” military lost from 1 to 8 soldiers killed per 100,000 of its population.

In comparison, the U.S. suffered a similar casualty burden in Vietnam for only five years, incurring losses of 1.99 to 7.07 killed per 100,000 population between 1966 and 1970, which led to “Vietnamization” and withdrawal by 1973. The United Kingdom was even more sensitive to casualties. It waged multiple post-World War II insurgencies. Of the two that it won, Malaya produced a casualty burden of 0.09 British killed per 100,000 during its 13 years, while Northern Ireland (1968–1998) never got above 0.19 British soldiers killed per 100,000 during its 31 years and for 20 of those years was below 0.025 per 100,000. The British also lost several counterinsurgencies with far lower casualty burdens than those of the French. Of those, the bloodiest was Palestine, where British losses peaked at 0.28 killed per 100,000 in 1948, which is also the year they withdrew.

Of the allegedly fearsome “authoritarian militaries,” only Portugal rivaled the staying power of the French. Portugal’s dictatorial Estado Novo government waged three losing counterinsurgencies in Africa over 14 years, suffering from 1 to 3.5 soldiers killed per 100,000 in each of those years, and between 2.5 and 3.5 killed per 100,000 in nine of them. The failure of these wars also contributed to the overthrow of Portugal’s dictatorship.

The Soviet Union’s authoritarian military had a casualty burden between 0.22 and 0.75 soldiers killed per 100,000 in Afghanistan from 1980 through 1988. It withdrew after losing 14,571 dead (the U.S. suffered 58,000 killed in Vietnam), and the conflict is often cited as a factor in the collapse of the Soviet government in 1991.

Castillo’s analysis and analytical framework, which I have not yet read, appear intriguing and have received critical praise. Like much analysis of military history, however, they seem to explain the exceptions – the brilliant victories and unexpected defeats – rather than the far more prevalent cases of indecisive or muddled outcomes.

War Stories – The Podcast for Military History Nerds

The Angry Staff Officer says, “Safety is paramount in all things. Terrorists fear safety glasses that are also reflective.”

If you like military history and podcasts, then I would like to recommend that you give War Stories a listen. It is a new production from two members of the Military Writers Guild, Adin Dobkin and a serving Army National Guardsman who posts publicly under the nom de guerre Angry Staff Officer. Seeking to bridge the gap between military history that focuses on individual engagements or battles and broad, sweeping analysis, Dobkin and ASO tell stories that link specific instances with a broader narrative arc. In doing so, they hope to “engage the human interest angle while also tracing broader trends in warfare, through balancing narrative and dialogue.”

The first season of the podcast is tracing the development of modern tank warfare from its dawn on the battlefields of France in the First World War, through the present day. The result is Basil Liddell Hart meets This American Life. Dobkin and ASO both have engaging personalities and military history nerd-wit in abundance. They bring a youthful perspective leavened by recent military experience and the perceptive eye of today’s well-trained and highly educated military officer corps.

The first four episodes range from doomed British cavalry charges on the Somme battlefield in 1916, to George Patton’s first combat experiences at the St. Mihiel salient in 1918, to the clash of Russian and German inter-war tanks in the Spanish Civil War, to the baptism of fire of the American tank destroyer force in Tunisia in 1943.

The results are both informative and quite entertaining. My only (minor) quibble is that some maps and photographs to accompany the narrative would help pin down places, faces, and tank silhouettes. If you appreciate, as Dobkin and ASO do, that the Soviet T-34 tank owes its existence to the American engineer J. Walter Christie, this is the podcast for which you have been searching.

What Is The Relationship Between Rate of Fire and Military Effectiveness?

Over at his Best Defense blog, Tom Ricks recently posed an interesting question: Is rate of fire no longer a key metric in assessing military effectiveness?

Rate of fire doesn’t seem to be important in today’s militaries. I mean, everyone can go “full auto.” Rather, the problem seems to me firing too much and running out of ammunition.

I wonder if this affects how contemporary military historians look at the tactical level of war. Throughout most of history, the problem, it seems to me, was how many rocks, spears, arrows or bullets you could get off. Hence the importance of drill, which was designed to increase the volume of infantry fire (and to reduce people walking off the battlefield when they moved back to reload).

There are several ways to address this question from a historical perspective, but one place to start is to look at how rate of fire relates historically to combat.

Rate of fire is one of several measures of a weapon’s ability to inflict damage, i.e. its lethality. In the early 1960s, Trevor Dupuy and his associates at the Historical Evaluation Research Organization (HERO) assessed whether historical trends in increasing weapon lethality were changing the nature of combat. To measure this, they developed a methodology for scoring the inherent lethality of a given weapon, the Theoretical Lethality Index (TLI). TLI is the product of five factors:

  • rate of fire
  • targets per strike
  • range factor
  • accuracy
  • reliability

In the TLI methodology, rate of fire is defined as the number of effective strikes a weapon can deliver under ideal conditions in increments of one hour, and assumes no logistical limitation.
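As a minimal sketch of how such a multiplicative index works, the following Python snippet computes a TLI-style score as the product of the five factors listed above. The factor values are hypothetical placeholders; Dupuy's actual methodology derives each factor from detailed scoring rules for the weapon in question.

```python
# Sketch of a Theoretical Lethality Index (TLI)-style score: the product of
# the five weapon factors described above. All values here are hypothetical
# placeholders, not scores from Dupuy's published tables.

def theoretical_lethality_index(rate_of_fire: float, targets_per_strike: float,
                                range_factor: float, accuracy: float,
                                reliability: float) -> float:
    """Return the lethality index as the product of the five factors."""
    return rate_of_fire * targets_per_strike * range_factor * accuracy * reliability

# Hypothetical comparison: with other factors held roughly equal, a tenfold
# higher rate of fire drives a much higher index despite lower accuracy and
# reliability.
rifle = theoretical_lethality_index(100, 1, 1.0, 0.6, 0.95)          # 57.0
machine_gun = theoretical_lethality_index(1000, 1, 1.0, 0.4, 0.85)   # 340.0
print(rifle, machine_gun, machine_gun / rifle)
```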

As measured by TLI, increased rates of fire do indeed increase weapon lethality. The TLI of an early 20th century semi-automatic rifle is nearly five times higher than that of a mid-19th century muzzle-loading rifle due to its higher rate of fire. Despite having lower accuracy and reliability, a World War II-era machine gun has 10 times the TLI of a semi-automatic rifle due to its rate of fire. The rate of fire of small arms has not increased since the early-to-mid 20th century, and the assault rifle, adopted by modern armies following World War II, remains the standard infantry weapon in the early 21st century.

[Figure: attrition-fig-11]

Rate of fire is just one of many factors that can influence a weapon’s lethality, however. Artillery has much higher TLI values than small arms despite lower rates of fire, for the obvious reasons that artillery has far greater range than small arms and that each round of ammunition can hit multiple targets per strike.

There are other methods for scoring weapon lethality but the TLI provides a logical and consistent methodology for comparing weapons to each other. Through the TLI, Dupuy substantiated the observation that indeed, weapons have become more lethal over time, particularly in the last century.

But if weapons have become more lethal, has combat become bloodier? No. Dupuy and his colleagues also discovered that, counterintuitively, the average casualty rates in land combat have been declining since the 17th century. Combat casualty rates did climb in the early and mid-19th century, but fell again precipitously from the later 19th century through the end of the 20th.

[Figure: attrition-fig-13]

The reason, Dupuy determined, was because armies have historically adapted to increases in weapon lethality by dispersing in greater depth on the battlefield, decentralizing tactical decision-making and enhancing mobility, and placing a greater emphasis on combined arms tactics. The area occupied by 100,000 soldiers increased 4,000 times between antiquity and the late 20th century. Average ground force dispersion increased by a third between World War II and the 1973 Yom Kippur War, and he estimated it had increased by another quarter by 1990.

[Figure: attrition-fig-14]

Simply put, even as weapons become more deadly, there are fewer targets on the battlefield for them to hit. Through the mid-19th century, the combination of low rates of fire and relatively shorter range required the massing of infantry fires in order to achieve lethal effect. Before 1850, artillery caused more battlefield casualties than infantry small arms. This ratio changed due to the increased rates of fire and range of the rifled and breech-loading weapons introduced in the 1850s and 1860s. The majority of combat casualties in conflicts of the mid-to-late 19th century were inflicted by infantry small arms.

[Figure: attrition-fig-19]

The lethality of modern small arms combined with machine guns led to further dispersion and the decentralization of tactical decision-making in early 20th century warfare. The increased destructiveness of artillery, due to improved range and more powerful ammunition, coupled with the invention of the field telephone and indirect fire techniques during World War I, restored the long arm to its role as king of the battlefield.

[Figure: attrition-fig-35]

Dupuy represented this historical relationship between lethality and dispersion on the battlefield by applying a dispersion factor to TLI values to obtain what he termed the Operational Lethality Index (OLI). By accounting for these effects, OLI values are a good theoretical approximation of relative weapon effectiveness.
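A hedged sketch of that adjustment follows, assuming the simplest form of the relationship (an OLI obtained by dividing the TLI by a period-specific dispersion factor); Dupuy's full methodology applies additional weapon-specific adjustments, and the dispersion values below are illustrative placeholders rather than his published figures.

```python
# Sketch: an Operational Lethality Index (OLI)-style value obtained by applying
# a dispersion factor to a TLI score. Dispersion factors grow over time as
# armies spread out; the values and the TLI input below are illustrative
# placeholders, not Dupuy's published numbers.

DISPERSION_FACTOR = {
    "napoleonic": 20,
    "world_war_i": 250,
    "world_war_ii": 3000,
    "late_20th_century": 4000,
}

def operational_lethality_index(tli: float, era: str) -> float:
    """Approximate OLI by dividing the TLI by the era's dispersion factor."""
    return tli / DISPERSION_FACTOR[era]

# The same theoretical lethality buys far less effect on a dispersed modern
# battlefield than it would against the dense formations of earlier eras.
print(operational_lethality_index(12000, "napoleonic"))         # 600.0
print(operational_lethality_index(12000, "late_20th_century"))  # 3.0
```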

[Figure: npw-fig-2-5]

Although little empirical research has been done on this question, it seems logical that the trend toward greater use of precision-guided weapons is at least a partial response to the so-called “empty battlefield.” The developers of the Third Offset Strategy postulated that the emphasis on developing precision weaponry by the U.S. in the 1970s was a calculated response to offset the Soviet emphasis on mass firepower (i.e. the “second offset”). The goal of modern precision weapons is “one shot, one kill,” where a reduced rate of fire is compensated for by greater range and accuracy. Such weapons have become sufficiently lethal that the best way to survive on a modern battlefield is to not be seen.

At least, that was the conventional wisdom until recently. The U.S. Army in particular is watching how the Ukrainian separatist forces and their Russian enablers are making use of new artillery weapons, drone and information technology, and tactics to engage targets with mass fires. Some critics have alleged that the U.S. artillery arm has atrophied during the Global War on Terror and may no longer be capable of overmatching potential adversaries. It is not yet clear whether there will be a real competition between mass and precision fires on the battlefields of the near future, but it is possible that this signals yet another shift in the historical relationship between lethality, mobility, and dispersion in combat.

SOURCES

Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995)

_____, Understanding War: History and Theory of Combat (New York: Paragon House, 1987)

_____, The Evolution of Weapons and Warfare (Indianapolis, IN: The Bobbs-Merrill Company, Inc., 1980)

_____, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979)

Are Long-Range Fires Changing The Character of Land Warfare?

Raytheon’s new Long-Range Precision Fires missile is deployed from a mobile launcher in this artist’s rendering. The new missile will allow the Army to fire two munitions from a single weapons pod, making it cost-effective and doubling the existing capacity. (Raytheon)

Has U.S. land warfighting capability been compromised by potential adversaries’ advances in long-range artillery? Michael Jacobson and Robert H. Scales argue that this is the case in an article on War on the Rocks.

While the U.S. Army has made major advances by incorporating precision into artillery, the ability and opportunity to employ precision are premised on a world of low-intensity conflict. In high-intensity conflict defined by combined-arms maneuver, the employment of artillery based on a precise point on the ground becomes a much more difficult proposition, especially when the enemy commands large formations of moving, armored vehicles, as Russia does. The U.S. joint force has recognized this dilemma and compensates for it by employing superior air forces and deep-strike fires. But Russia has undertaken a comprehensive upgrade of not just its military technology but its doctrine. We should not be surprised that Russia’s goal in this endeavor is to offset U.S. advantages in air superiority and double-down on its traditional advantages in artillery and rocket mass, range, and destructive power.

Jacobson and Scales provide a list of relatively quick fixes they assert would restore U.S. superiority in long-range fires: change policy on the use of cluster munitions; upgrade the U.S. self-propelled howitzer inventory from short-barreled 39-caliber guns to long-barreled 52-caliber guns and incorporate improved propellants and rocket assistance to double their existing range; reevaluate restrictions on the forthcoming Long Range Precision Fires rocket system in light of Russian attitudes toward the Intermediate Range Nuclear Forces treaty; and rebuild divisional and field artillery units atrophied by a decade of counterinsurgency warfare.

Their assessment echoes similar comments made earlier this year by Lieutenant General H. R. McMaster, director of the U.S. Army’s Capabilities Integration Center. Another option for countering enemy artillery capabilities, McMaster suggested, was the employment of “cross-domain fires.” As he explained, “When an Army fires unit arrives somewhere, it should be able to do surface-to-air, surface-to-surface, and shore-to-ship capabilities.”

The notion of land-based fire elements engaging more than just other land or counter-air targets has given rise to a concept being called “multi-domain battle.” Its proponents, Dr. Albert Palazzo of the Australian Army’s War Research Centre and Lieutenant Colonel David P. McLain III, Chief, Integration and Operations Branch in the Joint and Army Concepts Division of the Army Capabilities Integration Center, argue (also at War on the Rocks) that

While Western forces have embraced jointness, traditional boundaries between land, sea, and air have still defined which service and which capability is tasked with a given mission. Multi-domain battle breaks down the traditional environmental boundaries between domains that have previously limited who does what where. The theater of operations, in this view, is a unitary whole. The most useful capability needs to get the mission no matter what domain it technically comes from. Newly emerging technologies will enable the land force to operate in ways that, in the past, have been limited by the boundaries of its domain. These technologies will give the land force the ability to dominate not just the land but also project power into and across the other domains.

Palazzo and McLain contend that future land warfare forces

…must be designed, equipped, and trained to gain and maintain advantage across all domains and to understand and respond to the requirements of the future operating environment… Multi-domain battle will create options and opportunities for the joint force, while imposing multiple dilemmas on the adversary. Through land-to-sea, land-to-air, land-to-land, land-to-space, and land-to-cyberspace fires and effects, land forces can deter, deny, and defeat the adversary. This will allow the joint commander to seize, retain, and exploit the initiative.

As an example of their concept, Palazzo and McLain cite a combined, joint operation from the Pacific Theater in World War II:

Just after dawn on September 4, 1943, Australian soldiers of the 9th Division came ashore near Lae, Papua in the Australian Army’s first major amphibious operation since Gallipoli. Supporting them were U.S. naval forces from VII Amphibious Force. The next day, the 503rd U.S. Parachute Regiment seized the airfield at Nadzab to the West of Lae, which allowed the follow-on landing of the 7th Australian Division.  The Japanese defenders offered some resistance on the land, token resistance in the air, and no resistance at sea. Terrain was the main obstacle to Lae’s capture.

From the beginning, the allied plan for Lae was a joint one. The allies were able to get their forces across the approaches to the enemy’s position, establish secure points of entry, build up strength, and defeat the enemy because they dominated the three domains of war relevant at the time — land, sea, and air.

The concept of multi-domain warfare seems like a logical conceptualization for integrating land-based weapons of increased range and effect into the sorts of near-term future conflicts envisioned by U.S. policy-makers and defense analysts. It comports fairly seamlessly with the precepts of the Third Offset Strategy.

However, as has been observed with the Third Offset Strategy, this raises questions about the role of long-range fires in conflicts that do not involve near-peer adversaries, such as counterinsurgencies. Is an emphasis on technological determinism reducing the capabilities of land combat units to just what they shoot? Is the ability to take and hold ground an anachronism in anti-access/area-denial environments? Do long-range fires obviate the relationship between fire and maneuver in modern combat tactics? If even infantry squads are equipped with stand-off weapons, what is the future of close quarters combat?

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what these are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge backing these combat theories and concepts was identified as the “base of sand” problem and was originally pointed out by Trevor Dupuy, among others, a long time ago. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but it persists to this day.
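For readers who have not encountered them, the Lanchester equations mentioned above are a pair of coupled differential equations in which each side's loss rate is proportional to the other side's surviving strength (the "square law" form). A minimal numerical sketch, with hypothetical coefficients and a simple Euler integration, illustrates why the model rewards numerical superiority so heavily:

```python
# Minimal sketch of Lanchester's square law: dA/dt = -b*B, dB/dt = -a*A,
# integrated with a simple Euler step. Starting strengths and effectiveness
# coefficients are hypothetical; real combat models layer many more factors
# (and assumptions) on top of equations like these.

def lanchester_square(a, b, a_eff, b_eff, dt=0.01, steps=100_000):
    """Attrit both sides until one is exhausted; return final strengths."""
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        a_losses = b_eff * b * dt  # A's losses are driven by B's strength
        b_losses = a_eff * a * dt  # B's losses are driven by A's strength
        a -= a_losses
        b -= b_losses
    return max(a, 0.0), max(b, 0.0)

# With equal per-soldier effectiveness, a 2:1 numerical edge is overwhelming
# under the square law, which is one reason critics question whether such
# equations reflect historical combat outcomes.
print(lanchester_square(2000, 1000, a_eff=0.05, b_eff=0.05))
```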

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Uncongenial Lessons of Past Conflicts

Williamson Murray, professor emeritus of history at Ohio State University, on the notion that military failures can be traced to an overemphasis on the lessons of the last war:

It is a myth that military organizations tend to do badly in each new war because they have studied too closely the last one; nothing could be farther from the truth. The fact is that military organizations, for the most part, study what makes them feel comfortable about themselves, not the uncongenial lessons of past conflicts. The result is that more often than not, militaries have to relearn in combat—and usually at a heavy cost—lessons that were readily apparent at the end of the last conflict.

[Williamson Murray, “Thinking About Innovation,” Naval War College Review, Spring 2001, 122-123. This passage was cited in a recent essay by LTG H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review, March-April 2015. I recommend reading both.]

The U.S. Army’s Theory of Warfare…?

[Image: TP 525-3-1]

Last week, I touched on the ongoing effort by the U.S. Army to assess the nature of Russian advances in military technology and how they might affect the nature of combat on future battlefields. In a previous post, I highlighted that the Army’s preliminary conclusions about changes in near-future ground combat were being challenged by the other armed services in the context of debates over the next fiscal year’s U.S. military budget.

According to the recently confirmed Secretary of the Army, Eric Fanning, in order to persuade its critics, the Army needs to do a better job of explaining the role it plays. “What I would have to do first of all is… tell the Army story… and the reason to do that is to make sure that the Army is properly resourced.”

Nadia Schadlow, a senior program officer at the Smith Richardson Foundation, pushed back against the idea that the Army needs a better narrative. She contends that the Army has already developed a theory of warfare that spells out how it believes near and medium-term wars will be fought and that it is now up to the critics to explain what aspects of this theory they object to and why.

Schadlow sketched out the U.S. Army’s current theory of warfare as it has been explained by senior Army leaders and in doctrinal publications.

The Army view is that conflicts in the future, like those in the past, will ultimately be resolved on land. Army forces will be essential components of joint operations to create sustainable political outcomes while defeating enemies and adversaries who challenge U.S. advantages in all domains: land, air, maritime, space, and cyberspace. Army contributions to joint operations provide multiple options to civilian and military leaders. These capabilities include tailorable and scalable combinations of special operations and conventional forces, regionally aligned and globally responsive combined arms teams, and foundational theater capabilities to enable joint operations.

The notion of a military service defining its own theory of warfare—as opposed to adopting a general theory of warfare—is an interesting one. [Schadlow drew the paragraph above from TRADOC Pamphlet 525-3-1 The U.S. Army Operating Concept 2020-2040: Win in a Complex World (2014)] Schadlow referenced a recent article by U.S. Army Major Robert Chamberlain that assessed the German Army’s theory of warfare in the context of its military defeat at Verdun in 1916. Chamberlain defines a theory of warfare as

[A] description of how a military intends to produce strategic outcomes. In making a decision to apply a military remedy to a strategic problem, one employs a theory of warfare to determine how and if the proposed solution will work. In the modern world, the development of grand strategy often receives theories of warfare as a given. Due to the time and expense required to develop and train a modern military, the strategic decision-makers are bound by the military capabilities and doctrine that exist when they assume power.

He spelled out what a theory of warfare does for a military organization.

A theory of warfare provides the ordering principles of a military whether made explicit or not. It is a description of the strategic environment, of what the military is, and how it applies itself against an adversary. Everything else that a military does—how it dresses, organizes itself, procures equipment, imposes discipline, generates force, sees terrain, treats captured enemies, deals with civilians, and so forth—is largely a function of how it defines and achieves success in war.

Chamberlain’s definition of a theory of warfare is idiosyncratic, and he does not make reference to the very large body of existing scholarship on warfare theory. It sounds a good deal more like an operating concept than a general theory of warfare. Schadlow’s definition is also problematic in that it seems like a self-referential description of how the U.S. conceptualizes the contemporary operating environment and the tasks the Army carries out as part of the overall joint force responsibilities. She twice cites the Army’s contention that future conflicts will ultimately be decided on land, but does not explain why. An Army theory of warfare would be more compelling if it also explained warfare in the other domains, not just on the ground.

Nevertheless, theories and theorizing are useful exercises in critical thinking. Even if Chamberlain’s concept does not rise to the level of a theory of warfare, it does show that effort is being made within the U.S. military to break down these ideas into their constituent parts and rethink how they work together. This is a subject I plan to return to in the near future.

Trevor Dupuy and the 3-1 Rule

Dr. Reina Pennington, a professor of history at Norwich University, recently published an analysis of the Eastern Front during World War II which made the case that the Soviet superiority in manpower over Germany was not as large as is often claimed. In support of her argument, Pennington provided a table comparing the total number of Soviet and German combat forces and force ratios at different times during the conflict. She pointed out that for much of the war, Soviet forces were either outnumbered, or achieved modest numerical superiorities that did not exceed 3 to 1 until late in 1944. “A 2:1 advantage is significant,” Pennington argued, “but falls short of the 3:1 force ratio that is generally regarded as necessary for attacking forces, and it’s a long way from the double-digit advantage that is often claimed.”

To support her assertion of the relevance of the 3-1 force ratio, Pennington linked to an article by Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” published in the summer 1989 edition of International Security. The problem with citing Dupuy is that his assessment of the 3-1 rule contradicts her assertion of it.

Dupuy criticized the 3-1 rule on empirical grounds. The so-called “3-1 rule” is a military aphorism which holds that attacking forces require a 3 to 1 advantage over defending forces in order to succeed. Although this rule has become widely known and widely held, especially in Western militaries, its origin is unknown and unattributed. It is not clear what exactly it refers to, and there is no known original statement of the rule that can be consulted for clarification.

Dupuy questioned the ambiguity of the rule, which in turn undermined its relevance.

[W]hat is the force ratio to be used with the 3:1 force ratio planning factor? Is it numbers of men, or weapons? Is it firepower? Is it some other calculation of a combat power ratio? In any event, it is clear that neither numbers nor firepower tells us much unless we know the circumstances under which these numbers face each other and the manner in which the firepower is applied.[1]

In 1984, Dupuy’s Historical Evaluation Research Organization (HERO) compiled a database of battles from 1600 to 1973 for the U.S. Army Concepts Analysis Agency (CAA; now known as the U.S. Army Center for Army Analysis). CAA’s examination of the numerical force ratios in the database showed that attackers with advantages of 3-1 or more in manpower succeeded 74% of the time. It also showed that attackers won between 58% and 63% of the time when attacking with anywhere from a 1.5-1 numerical disadvantage up to less than a 3-1 advantage. Attackers also managed to obtain a manpower advantage of 3-1 or greater in just 106 of the 598 cases (17.7%) examined.[2]

[Figure: CAA, Battle Outcome vs. Force Ratio]
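As a rough illustration of the kind of tabulation involved, the sketch below bins a handful of made-up battle records by attacker-to-defender force ratio and tallies attacker success rates. The records and bin boundaries are hypothetical placeholders, not entries from the HERO database or the CAA study.

```python
# Hedged sketch of binning battles by attacker:defender manpower ratio and
# tallying how often the attacker won in each bin, in the spirit of the CAA
# tabulation described above. All battle records here are made-up placeholders.
from collections import defaultdict

battles = [
    # (attacker_strength, defender_strength, attacker_won)
    (30_000, 10_000, True),
    (15_000, 10_000, True),
    (9_000, 10_000, False),
    (12_000, 10_000, True),
    (40_000, 12_000, True),
]

def ratio_bin(ratio: float) -> str:
    """Coarse force-ratio bins; the actual analysis used finer increments."""
    if ratio >= 3.0:
        return ">= 3:1"
    if ratio >= 1.0:
        return "1:1 to <3:1"
    return "< 1:1"

tally = defaultdict(lambda: [0, 0])  # bin -> [attacker wins, total battles]
for attacker, defender, won in battles:
    label = ratio_bin(attacker / defender)
    tally[label][0] += int(won)
    tally[label][1] += 1

for label, (wins, total) in tally.items():
    print(f"{label}: attacker won {wins}/{total} ({100 * wins / total:.0f}%)")
```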

Dupuy concurred that a 3-1 ratio based on a simple numerical total of troop numbers had limited use as a general rule-of-thumb guide for military planning, but asserted that it was useless for analytical purposes. Simply put, while there are many historical cases where an attacking force with a 3-1 numerical advantage succeeded, there are also many cases where attackers won with less than a 3-1 advantage, and even with a numerical inferiority. On the Eastern Front during World War II, for example, the German Army regularly conducted successful attacks against numerically superior Soviet forces.

Dupuy was so certain of the validity of the data on this that he made it an aphorism of his own: In the average modern battle, the attacker’s numerical strength is about double the defender’s.

This is because the attacker has the initiative and can initiate combat at a time and place of his choosing and in the manner of his choosing. The attacker can mass his forces at critical points on the battlefield to gain the advantage in strength which he believes necessary to assure the success of the attack.

A battle usually does not take place unless each side believes it has some chance for success. Otherwise, the attacker would avoid taking the initiative. The defender, if he could not avoid battle by withdrawal, would make every possible effort to reinforce the prospective battle area sufficiently to have a chance for successful defense. One circumstance in which a battle occurs without the tacit agreement or acceptance of the defender, is when the attacker achieves surprise. Alternatively, surprise by a defender (for instance, by ambush) may result in a battle taking place before the prospective attacker is ready.

Most military men are aware of the rule of thumb that an attacker can count on success if he has a three-to-one numerical superiority, while a defender can expect success if his inferiority is not less than one-to-two. But the side achieving surprise can count on the effects of surprise multiplying its force strength by a factor ranging between 1.5 and 2.5 (or even more in some cases). Thus, an attacker expecting to achieve surprise would be willing to attack with less than a three-to-one superiority.

Another factor which can influence an attacker to seek battle with less than a three-to-one superiority is confidence in the superior quality of his troops. This accounts for many instances in which the Germans attacked in World War II with less than the desirable numerical superiority, and for the similar instances of Israeli attacks in the Arab-Israeli wars without great numerical superiority.[3]
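A back-of-the-envelope illustration of the rule of thumb Dupuy describes in the passage above: if surprise multiplies the attacker's effective strength by the 1.5 to 2.5 factor he cites, an attacker with only a 2:1 numerical edge can reach or exceed an effective 3:1 ratio. The strengths and the specific multiplier used below are hypothetical.

```python
# Sketch of the surprise rule of thumb quoted above: the attacker's numerical
# ratio adjusted by a multiplier on its effective strength. The strengths and
# the 1.6 multiplier are hypothetical; Dupuy's cited range is roughly 1.5-2.5.

def effective_ratio(attacker: float, defender: float,
                    surprise_multiplier: float = 1.0) -> float:
    """Attacker-to-defender ratio with the attacker's strength scaled for surprise."""
    return (attacker * surprise_multiplier) / defender

print(effective_ratio(20_000, 10_000))                           # 2.0 without surprise
print(effective_ratio(20_000, 10_000, surprise_multiplier=1.6))  # 3.2 with surprise
```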

Dr. Pennington is on fairly firm ground in rejecting the idea that numerical superiority was the sole reason the Red Army defeated the German Army in World War II, but numbers did play an extremely important role in the Soviet success. The lack of a 3-1 manpower advantage did not preclude the Soviets from battlefield success; 2-1 was sufficient. By the time the Soviets achieved a 3-1 advantage, success was well in hand and the end in sight.

NOTES

[1] Trevor N. Dupuy. Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles. Indianapolis; New York: The Bobbs-Merrill Co., 1979, p. 13

[2] Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security, Spring 1988, p. 156; Robert Helmbold and Aqeel A. Khan. “Combat History Analysis Study Effort (CHASE): Progress Report for the Period August 1984-June 1985,” Bethesda, MD: U.S. Army Concepts Analysis Agency, August 1986

[3] Trevor N. Dupuy. Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War. Falls Church (VA): Nova Publications, 1995, pp. 98-99