
Human Factors In Warfare: Defensive Posture

U.S. Army troops shelter in defensive trenches at the Battle of Anzio, Italy, 1944. [U.S. Army Center of Military History]

As with dispersion on the battlefield, Trevor Dupuy believed that the strength of fighting on the defensive derived from the effects of the human element in combat.

When men believe that their chances of survival in a combat situation become less than some value (which is probably quantifiable, and is unquestionably related to a strength ratio or a power ratio), they cannot and will not advance. They take cover so as to obtain some protection, and by so doing they redress the strength or power imbalance. A force with strength y (a strength less than opponent’s strength x) has its strength multiplied by the effect of defensive posture (let’s give it the symbol p) to a greater power value, so that power py approaches, equals, or exceeds x, the unenhanced power value of the force with the greater strength x. It was because of this that [Carl von] Clausewitz–who considered that battle outcome was the result of a mathematical equation[1]–wrote that “defense is a stronger form of fighting than attack.”[2] There is no question that he considered that defensive posture was a combat multiplier in this equation. It is obvious that the phenomenon of the strengthening effect of defensive posture is a combination of physical and human factors.

Dupuy elaborated on his understanding of Clausewitz’s comparison of the impact of the defensive and offensive posture in combat in his book Understanding War.

The statement [that the defensive is the stronger form of combat] implies a comparison of relative strength. It is essentially scalar and thus ultimately quantitative. Clausewitz did not attempt to define the scale of his comparison. However, by following his conceptual approach it is possible to establish quantities for this comparison. Depending upon the extent to which the defender has had the time and capability to prepare for defensive combat, and depending also upon such considerations as the nature of the terrain which he is able to utilize for defense, my research tells me that the comparative strength of defense to offense can range from a factor with a minimum value of about 1.3 to maximum value of more than 3.0.[3]
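Dupuy’s verbal formulation above reduces to a simple comparison: a defender of strength y, multiplied by a posture factor p, holds when p*y approaches or exceeds the attacker’s strength x. Here is a minimal sketch in Python, using illustrative values of p drawn from the 1.3 to 3.0 range Dupuy cites (the posture labels and specific values are our assumptions for illustration, not Dupuy’s published table values):

```python
# A minimal sketch of the defensive posture multiplier described above.
# The posture labels and specific p values are illustrative assumptions
# drawn from the 1.3-3.0 range Dupuy cites, not his published table values.

def defender_power(y: float, p: float) -> float:
    """Effective power of a defender of strength y with posture multiplier p."""
    return p * y

x, y = 100.0, 60.0  # attacker strength x, weaker defender strength y

for p, posture in [(1.3, "hasty defense"),
                   (2.0, "prepared defense"),
                   (3.0, "fortified, favorable terrain")]:
    py = defender_power(y, p)
    verdict = "defender holds" if py >= x else "attacker retains the advantage"
    print(f"p = {p:.1f} ({posture}): p*y = {py:5.1f} vs. x = {x:.0f} -> {verdict}")
```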

NOTES

[1] Dupuy believed Clausewitz articulated a fundamental law for combat theory, which Dupuy termed the “Law of Numbers.” One should bear in mind that this concept of a theory of combat is something different from a fundamental law of war or warfare. Dupuy’s interpretation of Clausewitz’s work can be found in Understanding War: History and Theory of Combat (New York: Paragon House, 1987), 21-30.

[2] Carl von Clausewitz, On War, translation by Colonel James John Graham (London: N. Trübner, 1873), Book One, Chapter One, Section 17.

[3] Dupuy, Understanding War, 26.

Osipov

Back in 1915, a Russian named M. Osipov published a Lanchester-like paper in a Tsarist military journal: http://www.dtic.mil/dtic/tr/fulltext/u2/a241534.pdf

He actually tested his equations against historical data, which are presented in his paper. He ended up with something similar to the Lanchester equations, except that instead of a square law, he obtained a similar effect by raising strengths to the 3/2 power.
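For readers who want the math made concrete, here is a minimal numerical sketch of the contrast. Lanchester’s “square law” equations couple each side’s attrition rate to the other side’s remaining strength, which holds the quantity a*A^2 - b*B^2 constant; an exponent of 3/2 in the analogous state equation yields a weaker, but still super-linear, advantage for the larger force. The 3/2 comparison below is our illustrative gloss on Osipov’s exponent, not a transcription of his 1915 equations:

```python
# Euler integration of Lanchester's "aimed fire" equations
#   dA/dt = -b*B,  dB/dt = -a*A
# followed by a comparison of the square-law and 3/2-power state quantities.
# The 3/2-power line is an illustrative reading of Osipov's exponent, not a
# reproduction of his actual equations.

def lanchester(A, B, a, b, dt=0.01, steps=20000):
    for _ in range(steps):
        A, B = max(A - b * B * dt, 0.0), max(B - a * A * dt, 0.0)
        if A == 0.0 or B == 0.0:
            break
    return A, B

a = b = 0.01              # equal per-man effectiveness
A0, B0 = 1000.0, 700.0    # side A outnumbers side B

A, B = lanchester(A0, B0, a, b)
print(f"simulated survivors: A = {A:.0f}, B = {B:.0f}")

# Square law: A's survivors when B is annihilated = sqrt(A0^2 - B0^2), ~714
print("square-law prediction: ", (A0**2 - B0**2) ** (1 / 2))

# 3/2-power state law: a smaller, but still super-linear, edge for numbers
print("3/2-power prediction:  ", (A0**1.5 - B0**1.5) ** (1 / 1.5))  # ~556
```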

As far as we know, because of when it was published (June-October 1915), his work was done without any awareness of the work of the far more famous Frederick Lanchester (who was famous for a lot more than just his modeling equations). Lanchester first published his work in the fall of 1914, after the Great War had already started. It is possible that Osipov was aware of it, but he does not mention Lanchester and was probably not aware of his work. He appears to have independently come up with the use of differential equations to describe combat attrition. This was also the case with Rear Admiral J. V. Chase, who wrote a classified staff paper for the U.S. Navy in 1902 that was not revealed until 1972.

Osipov may have served in World War I, which was already underway when his paper was published. Between the war, the Russian revolutions, the civil war that followed, and the subsequent repressions by the Cheka and later Stalin, we do not know what happened to M. Osipov. When CAA asked me whether our Russian research team knew about him, I passed the question to Col. Sverdlov and Col. Vainer, and they were not aware of him. It is probably possible to chase him down, but it would take some effort. Perhaps some industrious researcher will find out more about him.

It does not appear that Osipov had any influence on Soviet operations research or military analysis; he seems to have been ignored or forgotten. His article was re-published in the September 1988 issue of the Soviet Military-Historical Journal with the propaganda-influenced statement that they also had their own “Lanchester.” Of course, this “Soviet Lanchester” had published in a Tsarist military journal, hardly a demonstration of the strength of the Soviet system.

 

Human Factors In Warfare: Dispersion

Photo of Union soldiers on the Antietam battlefield by Alexander Gardner.

As I have written about before, Trevor Dupuy’s theories on combat were founded on an initial study, conducted in 1964, of the relationship between weapon lethality, casualty rates, and dispersion on the battlefield. The historical trend toward greater dispersion was a response to continual increases in the lethality of weapons.

While this relationship might appear primarily technological in nature, Dupuy considered it the result of the human factor of fear on the battlefield. He put it in more human terms in a symposium paper from 1989:

There is one basic reason for the dispersal of troops on modern battlefields: to mitigate the lethal effects of firepower upon troops. As Lewis Richardson wrote in The Statistics of Deadly Quarrels, there is a limit to the amount of punishment human beings can sustain. Dispersion was resorted to as a tactical response to firepower mostly because—as weapons became more lethal in the 17th Century—soldiers were already beginning to disperse without official sanction. This was because they sensed that on the bloody battlefields of that century they were approaching the limit of the punishment men can stand.

Attrition In Future Land Combat

Soldiers with Battery C, 1st Battalion, 82nd Field Artillery Regiment, 1st Brigade Combat Team, 1st Cavalry Division maneuver their Paladins through Hohenfels Training Area, Oct. 26. Photo Credit: Capt. John Farmer, 1st Brigade Combat Team, 1st Cav

Last autumn, U.S. Army Chief of Staff General Mark Milley asserted that “we are on the cusp of a fundamental change in the character of warfare, and specifically ground warfare. It will be highly lethal, very highly lethal, unlike anything our Army has experienced, at least since World War II.” He made these comments while describing the Army’s evolving Multi-Domain Battle concept for waging future combat against peer or near-peer adversaries.

How lethal will combat on future battlefields be? Forecasting the future is, of course, an undertaking fraught with uncertainties. Milley’s comments undoubtedly reflect the Army’s best guesses about the likely impact of new weapons systems of greater lethality and accuracy, as well as improved capabilities for acquiring targets. Many observers have been closely watching the use of such weapons on the battlefield in the Ukraine. The spectacular success of the Zelenopillya rocket strike in 2014 was a convincing display of the lethality of long-range precision strike capabilities.

Ground combat attrition between peer or near-peer combatants in the future may be comparable to the U.S. experience in World War II (although there were considerable differences between the experiences of the various belligerents). Combat losses could be heavier. It certainly seems likely that they would be higher than those experienced by U.S. forces in recent counterinsurgency operations.

Unfortunately, the U.S. Defense Department has demonstrated a tenuous understanding of the phenomenon of combat attrition. Despite wildly inaccurate estimates for combat losses in the 1991 Gulf War, only modest effort has been made since then to improve understanding of the relationship between combat and casualties. The U.S. Army currently does not have either an approved tool or a formal methodology for casualty estimation.

Historical Trends in Combat Attrition

Trevor Dupuy did a great deal of historical research on attrition in combat. He found several trends that had strong enough empirical backing that he deemed them to be verities. He detailed his conclusions in Understanding War: History and Theory of Combat (1987) and Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (1995).

Dupuy documented a clear relationship over time between increasing weapon lethality, greater battlefield dispersion, and declining casualty rates in conventional combat. Even as weapons became more lethal, greater dispersal in frontage and depth among ground forces led daily personnel loss rates in battle to decrease.

The average daily battle casualty rate in combat has been declining since 1600 as a consequence. Since battlefield weapons continue to increase in lethality and troops continue to disperse in response, it seems logical to presume that the trend of declining loss rates continues, although this may not necessarily be the case. There were two instances in the 19th century when daily battle casualty rates increased—during the Napoleonic Wars and the American Civil War—before declining again. Dupuy noted that combat casualty rates in the 1973 Arab-Israeli War remained roughly the same as those in World War II (1939-45), almost thirty years earlier. Further research is needed to determine whether average daily personnel loss rates have indeed continued to decrease into the 21st century.

Dupuy also discovered that, as with battle outcomes, casualty rates are influenced by the circumstantial variables of combat. Posture, weather, terrain, season, time of day, surprise, fatigue, level of fortification, and “all out” efforts affect loss rates. (The combat loss rates of armored vehicles, artillery, and other weapons systems are directly related to personnel loss rates, and are affected by many of the same factors.) Consequently, yet counterintuitively, he could find no direct relationship between numerical force ratios and combat casualty rates. Combat power ratios, which take into account the circumstances of combat, do affect casualty rates; forces with greater combat power inflict higher rates of casualties than less powerful forces do.

Winning forces suffer lower rates of combat losses than losing forces do, whether attacking or defending. (It should be noted that there is a difference between combat loss rates and numbers of losses. Depending on the circumstances, Dupuy found that the numerical losses of the winning and losing forces may often be similar, even if the winner’s casualty rate is lower.)

Dupuy’s research confirmed that the combat loss rates of smaller forces are higher than those of larger forces. This is partly because smaller forces have a larger proportion of their troops exposed to enemy weapons; combat casualties tend to be concentrated in the forward-deployed combat and combat support elements. Dupuy also surmised that the Prussian military theorist Carl von Clausewitz’s concept of friction plays a role in this. The complexity of interactions between increasing numbers of troops and weapons simply diminishes the lethal effects of weapons systems on real-world battlefields.

Somewhat unsurprisingly, higher quality forces (those that better manage the ambient effects of friction in combat) inflict casualties at higher rates than less effective forces do. This can be seen clearly in the disparities in casualties between German and Soviet forces during World War II, between Israeli and Arab combatants in 1973, and between U.S. and coalition forces and the Iraqis in 1991 and 2003.

Combat Loss Rates on Future Battlefields

What do Dupuy’s combat attrition verities imply about casualties in future battles? As a baseline, he found that the average daily combat casualty rate in Western Europe during World War II for divisional-level engagements was 1-2% for winning forces and 2-3% for losing ones. For a divisional slice of 15,000 personnel, this meant daily combat losses of 150-450 troops, concentrated in the maneuver battalions. (The ratio of wounded to killed in modern combat has been found to be consistently about 4:1: roughly 20% of battle casualties are killed in action, while the other 80% include the mortally wounded and wounded in action, the missing, and the captured.)

It seems reasonable to conclude that future battlefields will be less densely occupied. Brigades, battalions, and companies will be fighting in spaces formerly filled with armies, corps, and divisions. Fewer troops mean fewer overall casualties, but the daily casualty rates of individual smaller units may well exceed those of WWII divisions. Smaller forces experience significant variation in daily casualties, but Dupuy established average daily rates for them as shown below.

For example, based on Dupuy’s methodology, the average daily loss rate unmodified by combat variables would be 1.8% for brigade combat teams, 8% for battalions, and 21% for companies. For a brigade of 4,500 troops, that would mean 81 battle casualties per day; a battalion of 800 would suffer 64; and a company of 120 would lose about 25. These rates would then be modified by the circumstances of each particular engagement.
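The arithmetic behind these figures, including the World War II divisional baseline above, is simple multiplication. A quick check in Python, using the unit strengths and unmodified rates given in the text, with the 4:1 wounded-to-killed split applied to the totals:

```python
# Daily battle casualties from Dupuy's unmodified average loss rates, using
# the strengths and rates stated in the text. The 20%/80% split follows the
# ~4:1 wounded-to-killed ratio cited above for modern combat.

units = {  # name: (strength, average daily loss rate)
    "division slice (losing force)": (15000, 0.03),
    "brigade combat team":           (4500,  0.018),
    "battalion":                     (800,   0.08),
    "company":                       (120,   0.21),
}

for name, (strength, rate) in units.items():
    losses = strength * rate
    print(f"{name:30s} ~{losses:3.0f}/day "
          f"(~{0.2 * losses:.0f} killed, ~{0.8 * losses:.0f} wounded/missing/captured)")
```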

Several factors could push daily casualty rates down. Milley envisions that U.S. units engaged in an anti-access/area denial environment will be constantly moving. A low density, highly mobile battlefield with fluid lines would be expected to reduce casualty rates for all sides. High mobility might also limit opportunities for infantry assaults and close quarters combat. The high operational tempo will be exhausting, according to Milley. This could also lower loss rates, as the casualty-inflicting capabilities of combat units decline with each successive day in battle.

It is not immediately clear how cyberwarfare and information operations might influence casualty rates. One combat variable they might directly affect is surprise, which Dupuy identified as one of the most potent combat power multipliers. A surprised force suffers a higher casualty rate, while the surprisers enjoy lower loss rates. Russian combat doctrine emphasizes using cyber and information operations to achieve surprise, and forces with degraded situational awareness are highly susceptible to it. As Zelenopillya demonstrated, surprise attacks with modern weapons can be devastating.

Some factors could push combat loss rates up. Long-range precision weapons could expose greater numbers of troops to enemy fires, driving casualties up among combat support and combat service support elements. Casualty rates have historically dropped during nighttime hours, but modern night-vision technology and persistent drone reconnaissance will likely enable continuous battle night and day, which could result in higher losses.

Drawing solid conclusions is difficult, but the question of future battlefield attrition is far too important not to be studied with greater urgency. Current policy debates over whether the draft should be reinstated, and over the proper size and distribution of manpower between the active and reserve components of the Army, hinge on getting this right. The trend away from mass on the battlefield means that there may not be a large margin of error should future combat forces suffer higher combat casualties than expected.

Insurgencies, Civil Conflicts, And Messy Endings

[© Reuters/Navesh Chitrakar]

The question of how insurgencies end is crucially important. There is no consensus on how to effectively wage counterinsurgency, much less how to end one on favorable terms. Even successful counterinsurgencies may not end decisively. In the Dupuy Insurgency Spread Sheets (DISS) database of 83 post-World War II insurgencies, interventions, and stabilization operations, 42 were counterinsurgent successes and 11 had indeterminate conclusions. Of the counterinsurgent successes, about one-third failed to bring about stability or long-term success.

George Frederick Willcoxon, an economist with the United Nations, recently looked into the question of why up to half of countries that suffer civil conflict relapse into violence between the same belligerents within a decade. He identified risk factors for reversion to war by examining the end of civil conflict and post-war recovery in 109 cases since 1970, drawing upon data from the Uppsala Conflict Data Program, the Peace Research Institute Oslo, the Polity IV project and the World Bank.

His conclusions were quite interesting:

Long-standing international conventional wisdom prioritizes economic reforms, transitional justice mechanisms or institutional continuity in post-war settings. However, my statistical analyses found that political institutions and military factors were actually the primary drivers of post-war risk. In particular, post-war states with more representative and competitive political systems as well as larger armed forces were better able to avoid war relapse.

These findings challenge a growing reluctance to consider early elections and political liberalization as critical steps for reestablishing authoritative, legitimate and sustainable political order after major armed conflict.

The non-results are perhaps as interesting as the results. With one exception discussed below, there is no evidence that the economic characteristics of post-war countries strongly influence the likelihood they will return to war. Income per capita, development assistance per capita, oil rents as a percent of GDP, overall unemployment rates and youth unemployment rates are not associated with civil war relapse.

Equally significant is there is no evidence that the culture, religion or geopolitics of the Middle East and North Africa will impede post-war recovery. I introduced into the statistical models measures for Islam, Arab culture and location in the region. None of these variables showed statistically significant correlations with the risk of war relapse since 1970, holding everything else constant, suggesting that such factors should not distinctively handicap post-war stabilization, recovery and transition in Iraq, Libya, Syria or Yemen.

Willcoxon’s research suggested a correlation between numbers of security forces and successfully preventing new violence.

Perhaps not surprisingly, larger security sectors reduce the risk of war relapse. For every additional soldier in the national armed forces per 1,000 people, the risk of relapse is about seven percent lower. Larger militaries are better able to deter renewed rebel activity, as well as prevent or reduce other forms of conflict such as terrorism, organized crime and communal violence.
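If each additional soldier per 1,000 population lowers relapse risk by about seven percent, the effect compounds multiplicatively, as in a proportional-hazards model. Here is a minimal sketch of that compounding; the baseline risk and force levels are hypothetical, chosen only to illustrate the shape of the effect, and are not Willcoxon’s figures:

```python
# Compounding a ~7% risk reduction per soldier per 1,000 population,
# proportional-hazards style. The 40% baseline relapse risk and the force
# levels below are hypothetical illustrations, not Willcoxon's figures.

BASELINE_RISK = 0.40   # hypothetical relapse risk with a minimal army
REDUCTION = 0.07       # ~7% lower risk per soldier per 1,000 people

def relapse_risk(soldiers_per_1000: float) -> float:
    return BASELINE_RISK * (1.0 - REDUCTION) ** soldiers_per_1000

for n in (0, 2, 5, 10):
    print(f"{n:2d} soldiers per 1,000 people -> relapse risk ~{relapse_risk(n):.0%}")
```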

He also found that the types of security forces had an influence as well.

The presence of outside troops also has significant influence on risk. The analysis lends support to a well-established finding in the political science literature that the presence of United Nations peacekeepers lowers the risk of conflict relapse. However, the presence of non-U.N. foreign troops almost triples the risk of relapsing back into civil war. There are at least two potential interpretations on this latter finding: Foreign troops may intervene in especially difficult circumstances, and therefore their presence indicates the post-war episodes most likely to fail; or foreign troops, particularly occupying armies, generate their own conflict risk.

These findings are strikingly consistent with TDI’s research, which suggests that higher force ratios of counterinsurgent troops to insurgents correlate with counterinsurgent success. You can check Willcoxon’s paper out here.

Predictions

We do like to claim we have predicted the casualty rates correctly in three wars (operations): 1) the 1991 Gulf War, 2) the 1995 Bosnia intervention, and 3) the Iraq insurgency. Furthermore, these were predictions made for three very different types of operations: a conventional war, an “operation other than war” (OOTW), and an insurgency.

The Gulf War prediction was made in public testimony by Trevor Dupuy to Congress and published in his book If War Comes: How to Defeat Saddam Hussein. It is discussed in my book America’s Modern Wars (AMW) pages 51-52 and in some blog posts here.

The Bosnia intervention prediction is discussed in Appendix II of AMW and the Iraq casualty estimate is Chapter 1 and Appendix I.

We like to claim that we are three for three on these predictions. What does that really mean? If the odds of making a correct prediction are 50/50 (the same as a coin toss), then the odds of getting three correct predictions in a row are 12.5%. We may not be particularly clever, just a little lucky.

On the other hand, some might argue that these predictions were not that hard to make, and that knowledgeable experts would certainly predict correctly at least two-thirds of the time. In that case the odds of getting three correct predictions in a row are more like 30%.

Still, one notes that there were a lot of predictions concerning the Gulf War that were higher than Trevor Dupuy’s. In the case of Bosnia, the Joint Staff was informed by a senior OR (Operations Research) office in the Army that there was no methodology for predicting losses in an “operation other than war” (AMW, page 309). In the case of the Iraq casualty estimate, we were informed by a director of an OR organization that our estimate was too high, and that the U.S. would suffer fewer than 2,000 killed and be withdrawn in a couple of years (Shawn was at that meeting). I think I left that out of my book in its more neutered final draft… my first draft was more detailed and maybe a little too “angry.” So maybe predicting casualties in military operations is a little tricky. If the odds of a correct prediction were only one-in-three, then the odds of getting three correct predictions in a row are only 4%. For marketing purposes, we like this argument better 😉
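The arithmetic here is just that of independent trials: the chance of three correct calls in a row is the per-prediction success rate cubed. A quick check of the figures quoted above:

```python
# Probability of three correct predictions in a row, assuming independent
# trials, for the per-prediction success rates discussed above.

for label, p in [("coin toss", 1 / 2),
                 ("knowledgeable expert", 2 / 3),
                 ("tricky problem", 1 / 3)]:
    print(f"p = {p:.2f} ({label}): three in a row = {p ** 3:.1%}")
```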

It is hard to say what the odds of making a correct prediction are. The only war with multiple public predictions (and, of course, several private and classified ones) was the 1991 Gulf War. A number of predictions were made, and we believe most were pretty high. We are aware of no other predictions for Bosnia in 1995, other than the “it could turn into another Vietnam” ones. We are aware of no other predictions for Iraq in 2004, although lots of people were expressing opinions on the subject. So it is hard to say how difficult it is to make a correct prediction in these cases.

P.S.: Yes, this post was inspired by my previous post on the Stanley Cup play-offs.

 

Logistics in Trevor Dupuy’s Combat Models

Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), p. 79

Mystics & Statistics reader Stiltzkin posed two interesting questions in response to my recent post on the new blog, Logistics in War:

Is there actually a reliable way of calculating logistical demand in correlation to “standing” ration strength/combat/daily strength army size?

Did Dupuy ever focus on logistics in any of his work?

The answer to his first question is yes, there is. In fact, this has been a standard military staff function since before there were military staffs (Martin van Creveld’s book, Supplying War: Logistics from Wallenstein to Patton (2nd ed.), is an excellent general introduction). Staff officers’ guides and field manuals from various armies from the 19th century to the present are full of useful information on field supply allotments and consumption estimates intended to guide battlefield sustainment. The records of modern armies also contain reams of bureaucratic records documenting logistical functions as they actually occurred. Logistics and supply is a woefully under-studied aspect of warfare, but not because there are no sources upon which to draw.
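By way of illustration, a staff-style estimate of daily logistical demand is essentially ration strength multiplied by per-capita planning factors for each class of supply. Here is a minimal sketch; the kilogram-per-soldier-per-day figures are hypothetical placeholders, not values from any particular staff manual:

```python
# Daily supply demand as strength x per-capita planning factors. The
# planning factors are hypothetical placeholders; real values vary widely
# by army, era, and intensity of operations.

PLANNING_FACTORS_KG = {  # kg per soldier per day (illustrative only)
    "rations and water": 5.0,
    "fuel":              15.0,
    "ammunition":        20.0,
    "other supplies":    5.0,
}

def daily_demand_tons(ration_strength: int) -> float:
    """Total daily demand in metric tons for a force of the given strength."""
    return ration_strength * sum(PLANNING_FACTORS_KG.values()) / 1000.0

print(f"division slice of 15,000: ~{daily_demand_tons(15000):,.0f} tons/day")
print(f"brigade of 4,500:         ~{daily_demand_tons(4500):,.0f} tons/day")
```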

As to his second question, the answer is also yes. Dupuy addressed logistics in his work in a couple of ways. He included two logistics multipliers in his combat models: one in the calculation of the battlefield effects of weapons, the Operational Lethality Index (OLI), and the other as an element of the value for combat effectiveness, which is itself a multiplier in his combat power formula.

Dupuy considered the impact of logistics on combat to be intangible, however. From his historical study of combat, Dupuy understood that logistics impacted both weapons and combat effectiveness, but in the absence of empirical data, he relied on subject matter expertise to assign it a specific value in his model.

Logistics or supply capability is basic in its importance to combat effectiveness. Yet, as in the case of the leadership, training, and morale factors, it is almost impossible to arrive at an objective numerical assessment of the absolute effectiveness of a military supply system. Consequently, this factor also can be applied only when solid historical data provides a basis for objective evaluation of the relative effectiveness of the opposing supply capabilities.[1]

His approach to this stands in contrast to other philosophies of combat model design, which hold that if a factor cannot be empirically measured, it should not be included in a model. (It is up to the reader to decide if this is a valid approach to modeling real-world phenomena or not.)

Yet, as with many aspects of the historical study of combat, Dupuy and his colleagues at the Historical Evaluation and Research Organization (HERO) had taken an initial cut at empirical research on the subject. In the late 1960s and early 1970s, Dupuy and HERO conducted a series of studies for the U.S. Air Force on the historical use of air power in support of ground warfare. One line of inquiry looked at the effects of air interdiction on supply, specifically at Operation STRANGLE, an effort by the U.S. and British air forces to completely block the lines of communication and supply of the German ground forces defending Rome in 1944.

Dupuy and HERO dug deeply into Allied and German primary source documentation to extract extensive data on combat strengths and losses, logistical capabilities and capacities, supply requirements, and aircraft sorties and bombing totals. Dupuy proceeded from a historically-based assumption that combat units, using expedients, experience, and training, could operate unimpaired while only receiving up to 65% of their normal supply requirements. If the level of supply dipped below 65%, the deficiency would begin impinging on combat power at a rate proportional to the percentage of loss (i.e., a 60% supply rate would impose a 5% decline, represented as a combat effectiveness multiplier of .95, and so on).
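That rule translates directly into a piecewise effectiveness multiplier. Here is a minimal sketch, with the 65% threshold and the one-for-one decline below it taken from the description above (applying the rule point-blank to a single supply level is a simplification; the study’s 6.8% figure, discussed below, is an average across German forces):

```python
# Dupuy's supply-shortfall rule as a piecewise combat effectiveness
# multiplier: full effectiveness at or above 65% of normal supply, then a
# decline proportional to the shortfall below that threshold.

THRESHOLD = 0.65

def supply_multiplier(supply_fraction: float) -> float:
    """Combat effectiveness multiplier for a given fraction of normal supply."""
    if supply_fraction >= THRESHOLD:
        return 1.0
    return 1.0 - (THRESHOLD - supply_fraction)

print(supply_multiplier(0.80))   # 1.0   -- no penalty above the threshold
print(supply_multiplier(0.60))   # 0.95  -- the 5% decline cited in the text
print(supply_multiplier(0.418))  # 0.768 -- a unit held at STRANGLE's 41.8% level
```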

Using this as a baseline, Dupuy and HERO calculated the amount of aerial combat power the Allies needed to apply to impact German combat effectiveness. They determined that Operation STRANGLE was able to reduce German supply capacity to about 41.8% of normal, which yielded a reduction in the combat power of German ground combat forces by an average of 6.8%.

He cautioned that these calculations were “directly relatable only to the German situation as it existed in Italy in late March and early April 1944.” As detailed as the analysis was, Dupuy stated that it “may be an oversimplification of a most complex combination of elements, including road and railway nets, supply levels, distribution of targets, and tonnage on targets. This requires much further exhaustive analysis in order to achieve confidence in this relatively simple relationship of interdiction effort to supply capability.”[2]

The historical work done by Dupuy and HERO on logistics and combat appears unique, but it seems highly relevant. There is no lack of detailed data from which to conduct further inquiries. The only impediment appears to be lack of interest.

NOTES

 [1] Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), p. 38.

[2] Ibid., pp. 78-94.

[NOTE: This post was edited to clarify the effect of supply reduction through aerial interdiction in the Operation STRANGLE study.]

Trevor Dupuy and Historical Trends Related to Weapon Lethality

There appears to be renewed interest in U.S. Army circles in Trevor Dupuy’s theory of a historical relationship between increasing weapon lethality, declining casualty rates, and greater dispersion on the battlefield. A recent article by Army officer and strategist Aaron Bazin, “Seven Charts That Help Explain American War” at The Strategy Bridge, used a composite version of two of Dupuy’s charts to explain the American military’s attraction to technology. (The graphic in Bazin’s article originated in a 2009 Australian Army doctrinal white paper, “Army’s Future Land Operating Concept,” which evidently did not cite Dupuy as the original source for the charts or the associated concepts.)

John McRea, like Bazin a U.S. Army officer and a founding member of The Military Writer’s Guild, reposted Dupuy’s graphic in a blog post entitled “Outrageous Fortune: Spears and Arrows,” examining tactical and economic considerations in the use of asymmetrical technologies in warfare.

Dr. Conrad Crane, Chief of Historical Services for the U.S. Army Heritage and Education Center at the Army War College, also referenced Dupuy’s concepts in his look at human performance requirements, “The Future Soldier: Alone in a Crowd,” at War on the Rocks.

Dupuy originally developed his theory based on research and analysis undertaken by the Historical Evaluation and Research Organization (HERO) in 1964, for a study he directed, “Historical Trends Related to Weapon Lethality.” (Annex I, Annex II, Annex III). HERO had been contracted by the Advanced Tactics Project (AVTAC) of the U.S. Army Combat Developments Command, to provide unclassified support for Project OREGON TRAIL, a series of 45 classified studies of tactical nuclear weapons, tactics, and organization, which took 18 months to complete.

AVTAC asked HERO “to identify and analyze critical relationships and the cause-effect aspects of major advances in the lethality of weapons and associated changes in tactics and organization” from the Roman Era to the present. HERO’s study itself was a group project, incorporating 58 case studies from 21 authors, including such scholars as Gunther E. Rothenberg, Samuel P. Huntington, S.L.A. Marshall, R. Ernest Dupuy, Grace P. Hayes, Louis Morton, Peter Paret, Stefan T. Possony, and Theodore Ropp.

Dupuy synthesized and analyzed these case studies for the HERO study’s final report. He described what he was seeking to establish in his 1979 book, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles.

If the numbers of military history mean anything, it appears self-evident that there must be some kind of relationship between the quantities of weapons employed by opposing forces in combat, and the number of casualties suffered by each side. It also seems fairly obvious that some weapons are likely to cause more casualties than others, and that the effectiveness of weapons will depend upon their ability to reach their targets. So it becomes clear that the relationship of weapons to casualties is not quite the simple matter of comparing numbers to numbers. To compare weapons to casualties it is necessary to know not only the numbers of weapons, but also how many there are of each different type, and how effective or lethal each of these is.
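The quote above implies a weighting scheme: to compare weapons to casualties, multiply the count of each weapon type by a measure of its lethality and sum the results. Here is a minimal sketch of such a lethality-weighted comparison; the weapon mix and index values are hypothetical placeholders, not actual OLI figures from Dupuy’s work:

```python
# Lethality-weighted force comparison in the spirit of Dupuy's Operational
# Lethality Index (OLI). Weapon names, counts, and index values below are
# hypothetical placeholders, not values from Dupuy's published tables.

inventory = {  # weapon type: (count, lethality index per weapon)
    "rifles":       (3000, 1.0),
    "machine guns": (200,  15.0),
    "field guns":   (50,   100.0),
}

force_value = sum(count * index for count, index in inventory.values())
print(f"lethality-weighted force value: {force_value:,.0f}")  # 11,000
```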

The historical relationship between lethality, casualties, and dispersion that Dupuy deduced in this study provided the basis for his subsequent quest to establish an empirically-based, overarching theory of combat, which he articulated through his Quantified Judgment Model. Dupuy refined and updated the analysis from the 1964 HERO study in his 1980 book, The Evolution of Weapons and Warfare.

Military Effectiveness and Cheese-Eating Surrender Monkeys

The International Security Studies Forum (ISSF) has posted a roundtable review on H-Diplo of Jasen J. Castillo’s Endurance and War: The National Sources of Military Cohesion (Stanford, CA: Stanford University Press, 2014). As the introduction by Alexander B. Downes of The George Washington University lays out, there is a considerable political science literature that addresses the question of military effectiveness, or why some militaries are more effective combatants than others. Castillo focused on why some armies fight hard, even when faced with heavy casualties and the prospect of defeat, and why some become ineffective or simply collapse. The example most often cited in this context – as Downes and Castillo do – is the French Army. Why were the French routed so quickly in 1940 when they had fought so much harder and incurred far higher casualties in 1914? (Is this characterization of the French entirely fair? I’ll take a look at that question below.)

According to Downes, Castillo defined military cohesion for his analysis as a combination of staying power and battlefield performance. He identified two primary factors determining military cohesion: the persuasiveness of a regime’s ideology and its coercive powers, and the military’s ability to train its troops free from political interference. From this, Castillo drew two conclusions, one counterintuitive, the other in line with prevailing professional military thought.

  • “First, regimes that exert high levels of control over society—through a combination of an ideology that demands ‘unconditional loyalty’ (such as nationalism, communism, or fascism) and the power to compel recalcitrant individuals to conform—will field militaries with greater staying power than states with low levels of societal control.”
  • “Second, states that provide their military establishments with the autonomy necessary to engage in rigorous and realistic training will generate armies that fight in a determined yet flexible fashion.”

Based on his analysis, Castillo defines four military archetypes:

  • “Messianic militaries are the most fearsome of the lot. Produced by countries with high levels of regime control that give their militaries the autonomy to train, such as Nazi Germany, messianic militaries possess great staying power and superior battlefield performance.”
  • “Authoritarian militaries are also generated by nations with strong regime control over society, but are a notch below their messianic cousins because the regime systematically interferes in the military’s affairs. These militaries have strong staying power but are less nimble on the battlefield. The Red Army under Joseph Stalin is a good example.”
  • “Countries with low regime control but high military autonomy produce professional militaries. These militaries—such as the U.S. military in Vietnam—perform well in battle but gradually lose the will to fight as victory recedes into the distance.”
  • “Apathetic militaries, finally, are characteristic of states with both low regime control and low military autonomy, like France in 1940. These militaries fall apart quickly when faced with adversity.”

The discussion panel – Brendan Rittenhouse Green, (University of Cincinnati); Phil Haun (Yale University); Austin Long (Columbia University); and Caitlin Talmadge (The George Washington University) – reviewed Castillo’s work favorably. Their discussion and Castillo’s response are well worth the time to read.

Now, to the matter of France’s alleged “apathetic military.” The performance of the French Army in 1940 has earned the country the infamous reputation of being “cheese-eating surrender monkeys.” Is this really fair? Well, if measured in terms of France’s perseverance in post-World War II counterinsurgency conflicts, the answer is most definitely no.

As detailed in Chris Lawrence’s book America’s Modern Wars, TDI looked at the relationship between national cost of foreign interventions and the outcome of insurgencies. One method used to measure national burden was the willingness of intervening states to sustain casualties. TDI found a strong correlation between high levels of casualties to intervening states and the failure of counterinsurgency efforts.

Among the cases in TDI’s database of post-World War II insurgencies, interventions, and peace-keeping operations, the French were the most willing, by far, to sustain the burden of casualties waging counterinsurgencies. In all but one of 17 years of continuous post-World War II conflict in Indochina and Algeria, democratic France’s “apathetic military” lost from 1 to 8 soldiers killed per 100,000 of its population.
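The burden metric used throughout this comparison is straightforward: soldiers killed in a given year per 100,000 of the intervening state’s national population. A one-line illustration with hypothetical round numbers:

```python
# Casualty burden: soldiers killed per year per 100,000 of the intervening
# state's population. The inputs below are hypothetical round numbers.

def casualty_burden(killed_in_year: int, home_population: int) -> float:
    return killed_in_year / (home_population / 100_000)

# e.g., 2,000 soldiers killed in a year by a country of 50 million people:
print(f"{casualty_burden(2000, 50_000_000):.1f} killed per 100,000")  # 4.0
```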

In comparison, the U.S. suffered a similar casualty burden in Vietnam for only five years, incurring losses of 1.99 to 7.07 killed per 100,000 population between 1966 and 1970, which led to “Vietnamization” and withdrawal by 1973. The United Kingdom was even more sensitive to casualties. It waged multiple post-World War II insurgencies. Two that it won carried far lighter burdens: Malaya produced a casualty burden of no more than 0.09 British killed per 100,000 during its 13 years, while Northern Ireland (1968–1998) never got above 0.19 British soldiers killed per 100,000 during its 31 years, staying below 0.025 per 100,000 for 20 of those years. The British also lost several counterinsurgencies with far lower casualty burdens than those of the French. Of those, the bloodiest was Palestine, where British losses peaked at 0.28 killed per 100,000 in 1948, which is also the year they withdrew.

Of the allegedly fearsome “authoritarian militaries,” only Portugal rivaled the staying power of the French. Portugal’s dictatorial Estado Novo government waged three losing counterinsurgencies in Africa over 14 years, suffering from 1 to 3.5 soldiers killed per 100,000 each year, and between 2.5 and 3.5 killed per 100,000 in nine of those years. The failure of these wars also contributed to the overthrow of Portugal’s dictatorship.

The Soviet Union’s authoritarian military sustained a casualty burden of between 0.22 and 0.75 soldiers killed per 100,000 in Afghanistan from 1980 through 1988. It withdrew after losing 14,571 dead (the U.S. suffered 58,000 killed in Vietnam), and the conflict is often cited as a factor in the collapse of the Soviet government in 1991.

Castillo’s analysis and analytical framework, which I have not yet read, appear intriguing and have received critical praise. Like much analysis of military history, however, his work seems to explain the exceptions — the brilliant victories and unexpected defeats — rather than the far more prevalent cases of indecisive or muddled outcomes.

Concrete and COIN

A U.S. soldier of 1-6 battalion, 2nd brigade, 1st Army Division, patrols near the wall in the Shiite enclave of Sadr City, Baghdad, Iraq, on Monday, June 9, 2008. The 12-foot concrete barrier has been built along a main street dividing southern Sadr City from the north and is about 5 kilometers (3.1 miles) long. (AP Photo/Petros Giannakouris)

U.S. Army Major John Spencer, an instructor at the Modern War Institute at West Point, has written an insightful piece about the utility of the ubiquitous concrete barrier in counterinsurgency warfare. Spencer’s ode is rooted in his personal experiences in Iraq in 2008.

When I deployed to Iraq as an infantry soldier in 2008 I never imagined I would become a pseudo-expert in concrete. But that is what happened—from small concrete barriers used for traffic control points to giant ones to protect against deadly threats like improvised explosive devices (IEDs) and indirect fire from rockets and mortars. Miniature concrete barriers were given out by senior leaders as gifts to represent entire tours. By the end of my deployment, I could tell you how much each concrete barrier weighed. How much each barrier cost. What crane was needed to lift different types. How many could be emplaced in a single night. How many could be moved with a military vehicle before its hydraulics failed.

He goes on to explain how concrete barriers were used by U.S. forces for force protection in everything from combat outposts to forward operating bases; to interdict terrain from checkpoints to entire neighborhoods in Baghdad; and as fortified walls during the 2008 Battle for Sadr City. His piece is a testament to both the ingenuity of soldiers in the field and non-kinetic solutions to battlefield problems.

[NOTE: The post has been edited.]