
Quantifying the Holocaust

Odilo Globocnik, SS and Police Leader in the Lublin district of the General Government territory in German-occupied Poland, was placed in charge of Operation Reinhardt by Reichsführer-SS Heinrich Himmler. [Wikipedia]

The devastation and horror of the Holocaust make it difficult to truly wrap one’s head around its immense scale. Six million murdered Jews is a number so large that it is hard to comprehend, much less understand in detail. While there are many accounts of individual experiences, the Nazis’ wholesale destruction of the documentation of their genocide has made it difficult to gauge the dynamics of their activities.

However, in a new study, Lewi Stone, Professor of Biomathematics at RMIT University in Australia, has used an obscure railroad dataset to reconstruct the size and scale of a specific action by the Germans in eastern Poland and western Ukraine in 1942. “Quantifying the Holocaust: Hyperintense kill rates during the Nazi genocide” (Not paywalled. Yet.), published on 2 January in the journal Science Advances, uses train schedule data published in 1987 by historian Yitzhak Arad to track the geographical and temporal dimensions of some 1.7 million Jews transported to the Treblinka, Belzec and Sobibor death camps in the late summer and early autumn of 1942.

This action, known as Operation Reinhardt, originated at the Wannsee Conference in January 1942 as the plan to carry out Hitler’s Final Solution to exterminate Europe’s Jews. In July, Hitler “ordered all action speeded up,” which led to a frenzy of roundups by SS (Schutzstaffel) groups from over 400 Jewish communities in Poland and Ukraine, and transport via 500 trains to the three camps along the Polish-Soviet border. In just 100 days, 1.7 million people were relocated and almost 1.5 million of them were murdered (“special treatment,” or Sonderbehandlung), most upon arrival at the camps. This phase of Reinhardt came to an end in November 1942 because the Nazis had run out of people to kill.

This three-month period was by far the most intensely murderous phase of the Holocaust, carried out simultaneously with the German summer military offensive that culminated in disastrous battlefield defeat at the hands of the Soviets at Stalingrad at year’s end. Jews were killed at a rate of 500,000 per month, an average of 15,000 per day. Even parsed from the overall totals, these numbers remain hard to grasp.

Stone’s research is innovative and sobering. His article can currently be downloaded in PDF format. His piece in The Conversation includes interactive online charts. He also produced a video that presents his findings chronologically and spatially.

Dupuy’s Verities: Fortification

The Maginot Line was a 900-mile-long network of underground bunkers, tunnels, and concrete retractable gun batteries. Its heaviest defenses were located along the 280-mile-long border with Germany. [WikiCommons]

The sixth of Trevor Dupuy’s Timeless Verities of Combat is:

Defenders’ chances of success are directly proportional to fortification strength.

From Understanding War (1987):

To some modern military thinkers this is a truism needing no explanation or justification. Others have asserted that prepared defenses are attractive traps to be avoided at all costs. Such assertions, however, either ignore or misread historical examples. History is so fickle that it is dangerous for historians to use such words as “always” or “never.” Nevertheless I offer a bold counter-assertion: never in history has a defense been weakened by the availability of fortifications; defensive works always enhance combat strength. At the very least, fortifications will delay an attacker and add to his casualties; at best, fortifications will enable the defender to defeat the attacker.

Anyone who suggests that breakthroughs of defensive positions in recent history demonstrate the bankruptcy of defensive posture and/or fortifications is seriously deceiving himself and is misinterpreting modern history. One can cite as historical examples the overcoming of the Maginot Line, the Mannerheim Line, the Siegfried Line, and the Bar Lev Line, and from these examples conclude that these fortifications failed. Such a conclusion is absolutely wrong. It is true that all of these fortifications were overcome, but only because a powerful enemy was willing to make a massive and costly effort. (Of course, the Maginot Line was not attacked frontally in 1940; the Germans were so impressed by its defensive strength that they bypassed it, and were threatening its rear when France surrendered.) All of these fortifications afforded time for the defenders to make new dispositions, to bring up reserves, or to mobilize. All were intended to obstruct, to permit the defenders to punish the attackers and, above all to delay; all were successful in these respects. The Bar Lev Line, furthermore, saved Israel from disastrous defeat, and became the base for a successful offensive.[p. 4]

Will field fortifications continue to enhance the combat power of land forces on future battlefields? This is an interesting question. While the character of existing types of fortifications—trenches, strongpoints, and bunkers—might change, seeking cover and concealment from the earth might become even more important.

Dr. Alexander Kott, Chief Scientist at the U.S. Army Research Laboratory, provided one perspective in a recently published paper titled “Ground Warfare in 2050: How It Might Look.” In it, Kott speculated about “tactical ground warfighting circa 2050, in a major conflict between technologically advanced peer competitors.”

Kott noted that on future battlefields dominated by sensor saturation and long-range precision fires, “Conventional entrenchments and other fortifications will become less effective when teams of intelligent munitions can maneuver into and within a trench or a bunker.” Light dismounted forces “will have limited, if any, protection either from antimissiles or armor (although they may be provided a degree of protection by armor deployed by their robotic helpers…). Instead, they will use cluttered ground terrain to obtain cover and concealment. In addition, they will attempt to distract and deceive…by use of decoys.”

Heavy forces “capable of producing strong lethal effects—substantial in size and mounted on vehicles—will be unlikely to avoid detection, observation, and fires.” To mitigate continuous incoming precision fires, Kott envisions that heavy ground forces will employ a combination of cover and concealment, maneuver, dispersion, decoys, vigorous counter-ISR (intelligence, surveillance, and reconnaissance) attacks, and armor, but will rely primarily “on extensive use of intelligent antimissiles (evolutions of today’s Active Protection Systems [APSs], Counter Rocket, Artillery, and Mortar [C-RAM], Iron Dome, etc.).”

Conversely, Kott does not foresee underground cover and concealment disappearing from future battlefields. “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.” Not only will “large static assets such as supply dumps or munitions repair and manufacturing shops” be moved underground, but maneuver forces and field headquarters might conceivably dig themselves rapidly into below-ground fighting positions between operational bounds.

Comparing Force Ratios to Casualty Exchange Ratios

“American Marines in Belleau Wood (1918)” by Georges Scott [Wikipedia]

Comparing Force Ratios to Casualty Exchange Ratios
Christopher A. Lawrence

[The article below is reprinted from the Summer 2009 edition of The International TNDM Newsletter.]

There are three versions of rules relating force ratios to casualty exchange ratios, such as the three-to-one rule (3-to-1 rule) as it applies to casualties. The earliest version of the rule as it relates to casualties that we have been able to find comes from the 1958 version of the U.S. Army Maneuver Control manual, which states: “When opposing forces are in contact, casualties are assessed in inverse ratio to combat power. For friendly forces advancing with a combat power superiority of 5 to 1, losses to friendly forces will be about 1/5 of those suffered by the opposing force.”[1]

The RAND version of the rule (1992) states: “the famous ‘3:1 rule,’ according to which the attacker and defender suffer equal fractional loss rates at a 3:1 force ratio if the battle is in mixed terrain and the defender enjoys ‘prepared’ defenses…”[2]

Finally, there is a version of the rule, dating from the 1967 Maneuver Control manual, that applies only to armor and shows:

As the RAND construct also applies to equipment losses, this formulation is directly comparable to it.

Therefore, we have three basic versions of the 3-to-1 rule as it applies to casualties and/or equipment losses. First, there is a rule that states that there is an even fractional loss ratio at 3-to-1 (the RAND version). Second, there is a rule that states that at 3-to-1, the attacker will suffer one-third the losses of the defender. And third, there is a rule that states that at 3-to-1, the attacker will suffer the same losses as the defender. These versions are mutually contradictory: depending on which is used, at 3-to-1 the attacker suffers three times the losses of the defender, the same losses as the defender, or one-third the losses of the defender.
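To see how sharply these versions diverge, here is a minimal sketch in Python (my illustration, not part of the original newsletter article) that computes the attacker-to-defender casualty exchange ratio each version predicts at a given force ratio. The 1967 armor pattern is generalized from the three data points discussed below (3-to-1 giving 1-to-1, 4-to-1 giving 1-to-2, 5-to-1 giving 1-to-3); that generalization is an assumption.

```python
def rand_rule(force_ratio):
    # RAND version: equal fractional loss rates, so attacker casualties /
    # defender casualties equals the force ratio itself.
    return force_ratio

def fm_105_5_1958(force_ratio):
    # 1958 FM 105-5: casualties assessed in inverse ratio to combat power,
    # so the exchange ratio is the reciprocal of the force ratio.
    return 1.0 / force_ratio

def fm_105_5_1967_armor(force_ratio):
    # Pattern inferred from the cited points (3:1 -> 1:1, 4:1 -> 1:2,
    # 5:1 -> 1:3); valid for force ratios of 3 and above.
    return 1.0 / (force_ratio - 2.0)

for r in (3, 4, 5):
    print(f"{r}-to-1 attack: RAND {rand_rule(r):.2f}, "
          f"1958 rule {fm_105_5_1958(r):.2f}, "
          f"1967 armor rule {fm_105_5_1967_armor(r):.2f}")
```

At 3-to-1 the three versions predict exchange ratios of 3.00, 0.33, and 1.00 respectively, which is the contradiction described above.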

Therefore, what we will examine here is the relationship between force ratios and exchange ratios. In this case, we will first look at The Dupuy Institute’s Battles Database (BaDB), which covers 243 battles from 1600 to 1900. We will chart on the y-axis the force ratio, as measured by a count of the number of people on each side of the forces deployed for battle. The force ratio is the number of attackers divided by the number of defenders. On the x-axis is the exchange ratio, which is measured by a count of the number of people on each side who were killed, wounded, missing, or captured during that battle. It does not include disease and non-battle injuries. Again, it is calculated by dividing the total attacker casualties by the total defender casualties. The results are provided below:
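The charts themselves are images in the original post. As a sketch of how such a scattergram can be constructed, assuming the battle data were exported to a CSV with hypothetical column names (the BaDB itself is a proprietary TDI database):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("badb_battles.csv")  # hypothetical export of the BaDB

df["force_ratio"] = df["attacker_strength"] / df["defender_strength"]
df["exchange_ratio"] = df["attacker_casualties"] / df["defender_casualties"]

# Truncate to look inside the clump of data, as described in the text.
subset = df[(df["force_ratio"] <= 20) & (df["exchange_ratio"] <= 20)]

# The text places force ratio on the y-axis and exchange ratio on the x-axis.
plt.scatter(subset["exchange_ratio"], subset["force_ratio"], s=12)
plt.xlabel("Casualty exchange ratio (attacker/defender)")
plt.ylabel("Force ratio (attacker/defender)")
plt.title("BaDB battles, 1600-1900 (truncated at 20-to-1)")
plt.show()
```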

As can be seen, there are a few extreme outliers among these 243 data points. The most extreme, the Battle of Tippermuir (1 September 1644), in which a Royalist force under Montrose routed an attack by Scottish Covenanter militia, causing about 3,000 casualties to the Covenanters in exchange for a single (allegedly self-inflicted) casualty to the Royalists, was removed from the chart. This 3,000-to-1 loss ratio was deemed too great an outlier to be of value in the analysis.

As it is, the vast majority of cases are clumped down into the corner of the graph with only a few scattered data points outside of that clumping. If one did try to establish some form of curvilinear relationship, one would end up drawing a hyperbola. It is worthwhile to look inside that clump of data to see what it shows. Therefore, we will look at the graph truncated so as to show only force ratios at or below 20-to-1 and exchange ratios at or below 20-to-1.

Again, the data remains clustered in one corner with the outlying data points again pointing to a hyperbola as the only real fitting curvilinear relationship. Let’s look a little deeper into the data by truncating it at 6-to-1 for both force ratios and exchange ratios. As can be seen, if the RAND version of the 3-to-1 rule is correct, then the data should show at a 3-to-1 force ratio a 3-to-1 casualty exchange ratio. There is only one data point that comes close to this out of the 243 points we examined.

If the FM 105-5 version of the rule as it applies to armor is correct, then the data should show that at a 3-to-1 force ratio there is a 1-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 1-to-2 casualty exchange ratio, and at a 5-to-1 force ratio a 1-to-3 casualty exchange ratio. Of course, there is no armor in these pre-WW I engagements, but again, no such exchange pattern appears.

If the 1958 version of the FM 105-5 rule as it applies to casualties is correct, then the data should show that at a 3-to-1 force ratio there is a 0.33-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 0.25-to-1 casualty exchange ratio, and at a 5-to-1 force ratio a 0.20-to-1 casualty exchange ratio. As can be seen, there is not much indication of this pattern, or for that matter any of the three patterns.

Still, such a construct may not be relevant to data before 1900. For example, Lanchester claimed in 1914, in Chapter V (“The Principle of Concentration”) of his book Aircraft in Warfare, that there is greater advantage to be gained in modern warfare from concentration of fire.[3] Therefore, we will tap our more modern Division-Level Engagement Database (DLEDB) of 675 engagements, of which 628 have force ratios and exchange ratios calculated for them. These 628 cases are then placed on a scattergram to see if we can detect any similar patterns.

Even though this data covers the period from 1904 to 1991, with the vast majority of the data coming from engagements after 1940, one again sees the same pattern as with the data from 1600-1900. If there is a curvilinear relationship, it is again a hyperbola. As before, it is useful to look into the mass of data clustered into the corner by truncating the force and exchange ratios at 20-to-1. This produces the following:

Again, one sees the data clustered in the corner, with any curvilinear relationship again being a hyperbola. A look at the data further truncated to a 10-to-1 force or exchange ratio does not yield anything more revealing.

And, if this data is truncated to show only 5-to-1 force ratio and exchange ratios, one again sees:

Again, this data appears to be mostly just noise, with no clear patterns here that support any of the three constructs. In the case of the RAND version of the 3-to-1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional loss rate) that RAND postulates. In fact, it almost looks like the data conspires to make sure it leaves a noticeable “hole” at that point. The other postulated versions of the 3-to-1 rule are also given no support in these charts.

Also of note is that the relationship between force ratios and exchange ratios does not appear to change significantly for combat during 1600-1900 when compared to the data from combat during 1904-1991. This does not provide much support for the intellectual construct developed by Lanchester to argue for his N-square law.
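For reference, Lanchester’s square law formalizes the concentration argument: each side’s loss rate is proportional to the number of enemy shooters, dA/dt = -b*B and dB/dt = -a*A, which implies the state equation a(A0^2 - A^2) = b(B0^2 - B^2) and makes numerical superiority pay off quadratically. A minimal simulation sketch, with arbitrary illustrative numbers:

```python
def lanchester_square(a, b, a_eff=1.0, b_eff=1.0, dt=0.001):
    # Euler integration of dA/dt = -b_eff * B, dB/dt = -a_eff * A,
    # run until one side is annihilated.
    while a > 0 and b > 0:
        a, b = a - b_eff * b * dt, b - a_eff * a * dt
    return max(a, 0.0), max(b, 0.0)

# With equal effectiveness, a 2-to-1 force should finish with about
# sqrt(2000**2 - 1000**2) ~ 1732 survivors, i.e. only ~13% losses.
print(lanchester_square(2000.0, 1000.0))
```

If combat data obeyed the square law, force ratio and exchange ratio would be strongly and systematically related; the scattergrams above show no such relationship in either period.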

While we can attempt to torture the data to find a better fit, or can try to argue that the patterns are obscured by various factors that have not been considered, we do not believe that such a clear pattern and relationship exists. More advanced mathematical methods may show such a pattern, but to date such attempts have not ferreted out these alleged patterns. For example, we refer the reader to Janice Fain’s article on Lanchester equations, The Dupuy Institute’s Capture Rate Study, Phase I & II, or any number of other studies that have looked at Lanchester.[4]

The fundamental problem is that there does not appear to be a direct cause and effect between force ratios and exchange ratios. It appears to be an indirect relationship in the sense that force ratios are one of several independent variables that determine the outcome of an engagement, and the nature of that outcome helps determine the casualties. As such, there is a more complex set of interrelationships that have not yet been fully explored in any study that we know of, although it is briefly addressed in our Capture Rate Study, Phase I & II.

NOTES

[1] FM 105-5, Maneuver Control (1958), 80.

[2] Patrick Allen, “Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models,” (N-3423-NA, The RAND Corporation, Santa Monica, CA, 1992), 20.

[3] F. W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm (Lanchester Press Incorporated, Sunnyvale, Calif., 1995), 46-60. One notes that Lanchester provided no data to support these claims, but relied upon an intellectual argument based upon a gross misunderstanding of ancient warfare.

[4] In particular, see page 73 of Janice B. Fain, “The Lanchester Equations and Historical Warfare: An Analysis of Sixty World War II Land Engagements,” Combat Data Subscription Service (HERO, Arlington, Va., Spring 1975).

Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he did believe technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in a work published posthumously in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Based on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties.
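A toy calculation can illustrate the logic. The numbers below are purely hypothetical placeholders, not Dupuy’s published indices; the point is only that if dispersion grows faster than weapon lethality, the daily casualty rate falls even as weapons become vastly more lethal.

```python
# Hypothetical illustration only; not Dupuy's published figures.
eras = {
    # era: (relative weapon lethality, relative dispersion)
    "Antiquity":    (1,       1),
    "Napoleonic":   (50,      100),
    "World War I":  (5_000,   25_000),
    "World War II": (100_000, 1_000_000),
}

for era, (lethality, dispersion) in eras.items():
    # If exposure to fire falls in proportion to dispersion, casualty
    # potential per day scales roughly as lethality / dispersion.
    print(f"{era:>12}: relative daily casualty rate {lethality / dispersion:.2f}")
```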

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has been exemplified more often by a discontinuity between weapons and tactical systems than by effective continuity.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and materiel. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

The Great 3-1 Rule Debate

[This piece was originally posted on 13 July 2016.]

Trevor Dupuy’s article cited in my previous post, “Combat Data and the 3:1 Rule,” was the final salvo in a roaring, multi-year debate between two highly regarded members of the U.S. strategic and security studies academic communities, political scientist John Mearsheimer and military analyst/polymath Joshua Epstein. Carried out primarily in the pages of the academic journal International Security, Epstein and Mearsheimer debated the validity of the 3-1 rule and other analytical models with respect to the NATO/Warsaw Pact military balance in Europe in the 1980s. Epstein cited Dupuy’s empirical research in support of his criticism of Mearsheimer’s reliance on the 3-1 rule. In turn, Mearsheimer questioned Dupuy’s data and conclusions to refute Epstein. Dupuy’s article defended his research and pointed out the errors in Mearsheimer’s assertions. With the publication of Dupuy’s rebuttal, the International Security editors called a time out on the debate thread.

The Epstein/Mearsheimer debate was itself part of a larger political debate over U.S. policy toward the Soviet Union during the administration of Ronald Reagan. This interdisciplinary argument, which has since become legendary in security and strategic studies circles, drew in some of the biggest names in these fields, including Eliot Cohen, Barry Posen, the late Samuel Huntington, and Stephen Biddle. As Jeffery Friedman observed,

These debates played a prominent role in the “renaissance of security studies” because they brought together scholars with different theoretical, methodological, and professional backgrounds to push forward a cohesive line of research that had clear implications for the conduct of contemporary defense policy. Just as importantly, the debate forced scholars to engage broader, fundamental issues. Is “military power” something that can be studied using static measures like force ratios, or does it require a more dynamic analysis? How should analysts evaluate the role of doctrine, or politics, or military strategy in determining the appropriate “balance”? What role should formal modeling play in formulating defense policy? What is the place for empirical analysis, and what are the strengths and limitations of existing data?[1]

It is well worth the time to revisit the contributions to the 1980s debate. I have included a bibliography below that is not exhaustive, but is a place to start. The collapse of the Soviet Union and the end of the Cold War diminished the intensity of the debates, which simmered through the 1990s and then were obscured during the counterterrorism/counterinsurgency conflicts of the post-9/11 era. It is possible that the challenges posed by China and Russia amidst the ongoing “hybrid” conflict in Syria and Iraq may revive interest in interrogating the bases of military analyses in the U.S. and the West. It is a discussion that is long overdue and potentially quite illuminating.

NOTES

[1] Jeffery A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies 20 (2011)

BIBLIOGRAPHY

(Note: Some of these are behind paywalls, but some are available in PDF format. Mearsheimer has made many of his publications freely available here.)

John J. Mearsheimer, “Why the Soviets Can’t Win Quickly in Central Europe,” International Security 7, no. 1 (Summer 1982)

Samuel P. Huntington, “Conventional Deterrence and Conventional Retaliation in Europe,” International Security 8, no. 3 (Winter 1983/84)

Joshua Epstein, Strategy and Force Planning (Washington, DC: Brookings, 1987)

Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security 12, no. 4 (Spring 1988)

John J. Mearsheimer, “Numbers, Strategy, and the European Balance,” International Security 12, no. 4 (Spring 1988)

Stephen Biddle, “The European Conventional Balance,” Survival 30, no. 2 (March/April 1988)

Eliot A. Cohen, “Toward Better Net Assessment: Rethinking the European Conventional Balance,” International Security 13, no. 1 (Summer 1988)

Joshua M. Epstein, “The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security Studies,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, “Assessing the Conventional Balance,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, Barry R. Posen, and Eliot A. Cohen, “Correspondence: Reassessing Net Assessment,” International Security 13, no. 4 (Spring 1989)

Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” International Security 14, no. 1 (Summer 1989)

Stephen Biddle et al., Defense at Low Force Levels (Alexandria, VA: Institute for Defense Analyses, 1991)

Force Ratios in Conventional Combat

American soldiers of the 117th Infantry Regiment, Tennessee National Guard, part of the 30th Infantry Division, move past a destroyed American M5A1 “Stuart” tank on their march to recapture the town of St. Vith during the Battle of the Bulge, January 1945. [Wikipedia]
[This piece was originally posted on 16 May 2017.]

This post is a partial response to questions from one of our readers (Stilzkin). On the subject of force ratios in conventional combat… I know of no detailed discussion of the phenomenon published to date. It was, however, clearly addressed by Clausewitz. For example:

At Leuthen Frederick the Great, with about 30,000 men, defeated 80,000 Austrians; at Rossbach he defeated 50,000 allies with 25,000 men. These however are the only examples of victories over an opponent two or even nearly three times as strong. Charles XII at the battle of Narva is not in the same category. The Russians at that time could hardly be considered Europeans; moreover, we know too little about the main features of that battle. Bonaparte commanded 120,000 men at Dresden against 220,000—not quite half. At Kolin, Frederick the Great’s 30,000 men could not defeat 50,000 Austrians; similarly, victory eluded Bonaparte at the desperate battle of Leipzig, though with his 160,000 men against 280,000, his opponent was far from being twice as strong.

These examples may show that in modern Europe even the most talented general will find it very difficult to defeat an opponent twice his strength. When we observe that the skill of the greatest commanders may be counterbalanced by a two-to-one ratio in the fighting forces, we cannot doubt that superiority in numbers (it does not have to be more than double) will suffice to assure victory, however adverse the other circumstances.

and:

If we thus strip the engagement of all the variables arising from its purpose and circumstance, and disregard the fighting value of the troops involved (which is a given quantity), we are left with the bare concept of the engagement, a shapeless battle in which the only distinguishing factor is the number of troops on either side.

These numbers, therefore, will determine victory. It is, of course, evident from the mass of abstractions I have made to reach this point that superiority of numbers in a given engagement is only one of the factors that determines victory. Superior numbers, far from contributing everything, or even a substantial part, to victory, may actually be contributing very little, depending on the circumstances.

But superiority varies in degree. It can be two to one, or three or four to one, and so on; it can obviously reach the point where it is overwhelming.

In this sense superiority of numbers admittedly is the most important factor in the outcome of an engagement, as long as it is great enough to counterbalance all other contributing circumstance. It thus follows that as many troops as possible should be brought into the engagement at the decisive point.

And, in relation to making a combat model:

Numerical superiority was a material factor. It was chosen from all elements that make up victory because, by using combinations of time and space, it could be fitted into a mathematical system of laws. It was thought that all other factors could be ignored if they were assumed to be equal on both sides and thus cancelled one another out. That might have been acceptable as a temporary device for the study of the characteristics of this single factor; but to make the device permanent, to accept superiority of numbers as the one and only rule, and to reduce the whole secret of the art of war to a formula of numerical superiority at a certain time and a certain place was an oversimplification that would not have stood up for a moment against the realities of life.

Force ratios were discussed in various versions of FM 105-5 Maneuver Control, but as far as I can tell, this material was not analytically developed. It was a set of rules, pulled together by a group of anonymous writers for the sake of being able to adjudicate wargames.

The only detailed quantification of force ratios was provided in Numbers, Predictions and War by Trevor Dupuy. Again, these were modeling constructs, not something that was analytically developed (although there was significant background research done and the model was validated multiple times). He then discusses the subject in his book Understanding War, which I consider the most significant of the 90+ books that he wrote or co-authored.

The only analytically based discussion of force ratios that I am aware of (or at least can think of at this moment) is my discussion in my upcoming book War by Numbers: Understanding Conventional Combat. It is the second chapter of the book: https://dupuyinstitute.dreamhosters.com/2016/02/17/war-by-numbers-iii/

In this book, I assembled the force ratios required to win a battle based upon a large number of cases from World War II division-level combat. For example (page 18 of the manuscript):

I did this for the ETO, for the battles of Kharkov and Kursk (Eastern Front 1943, divided between the periods when the Germans were attacking and when the Soviets were attacking), and for the PTO (Manila and Okinawa 1945).
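As a sketch of the kind of tabulation this involves, assuming a hypothetical CSV export with attacker_strength, defender_strength, and a 0/1 attacker_won column (the underlying engagement databases are not public):

```python
import pandas as pd

df = pd.read_csv("division_level_engagements.csv")  # hypothetical export
df["force_ratio"] = df["attacker_strength"] / df["defender_strength"]

# Band the engagements by force ratio and compute the attacker's win rate.
bins = [0, 1.0, 1.5, 2.0, 3.0, 5.0, float("inf")]
labels = ["<1:1", "1-1.5:1", "1.5-2:1", "2-3:1", "3-5:1", ">5:1"]
df["band"] = pd.cut(df["force_ratio"], bins=bins, labels=labels)

summary = df.groupby("band", observed=True)["attacker_won"].agg(
    win_rate="mean", engagements="count")
print(summary)
```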

There is more that can be done on this, and we do have the data assembled to do it, but as always, I have not gotten around to it. This is why I am already considering a War by Numbers II, as I am already thinking about all the subjects I did not cover in sufficient depth in my first book.

Dupuy’s Verities: Initiative

German Army soldiers advance during the Third Battle of Kharkov in early 1943. This was the culmination of a counteroffensive by German Field Marshal Erich von Manstein that blunted the Soviet offensive drive that followed the encirclement of the German 6th Army at Stalingrad in late 1942. [Photo: KonchitsyaLeto/Reddit]

The fifth of Trevor Dupuy’s Timeless Verities of Combat is:

Initiative permits application of preponderant combat power.

From Understanding War (1987):

The importance of seizing and maintaining the initiative has not declined in our times, nor will it in the future. This has been the secret of success of all of the great captains of history. It was as true of MacArthur as it was of Alexander the Great, Grant or Napoleon. Some modern Soviet theorists have suggested that this is even more important now in an era of high technology than formerly. They may be right. This has certainly been a major factor in the Israeli victories over the Arabs in all of their wars.

Given the prominent role initiative has played in warfare historically, it is curious that it is not a principle of war in its own right. However, it could be argued that it is sufficiently embedded in the principles of the offensive and maneuver that it does not need to be articulated separately. After all, the traditional means of seizing the initiative on the battlefield is through a combination of the offensive and maneuver.

Initiative is a fundamental aspect of current U.S. Army doctrine, as stated in ADP 3-0 Operations (2017):

The central idea of operations is that, as part of a joint force, Army forces seize, retain, and exploit the initiative to gain and maintain a position of relative advantage in sustained land operations to prevent conflict, shape the operational environment, and win our Nation’s wars as part of unified action.

For Dupuy, the specific connection between initiative and combat power is likely why he chose to include it as a verity in its own right. Combat power was the central concept in his theory of combat, and initiative was not just the basic means of achieving a preponderance of combat power through superior force strength (i.e., numbers), but also of harnessing the effects of the circumstantial variables of combat that multiply combat power (i.e., surprise, mobility, vulnerability, combat effectiveness). It was precisely the exploitation of this relationship between initiative and combat power that allowed numerically inferior German and Israeli combat forces to succeed time and again against superior numbers of Soviet and Arab opponents.

Using initiative to apply preponderant combat power in battle is the primary way the effects of maneuver (to “gain and maintain a position of relative advantage”) are abstracted in Dupuy’s Quantified Judgement Model (QJM)/Tactical Numerical Deterministic Model (TNDM). The QJM/TNDM itself is primarily a combat attrition adjudicator that determines combat outcomes through calculations of relative combat power. The numerical force strengths of the opposing forces engaged as determined by maneuver can easily be input into the QJM/TNDM and then modified by the applicable circumstantial variables of combat related to maneuver to obtain a calculation of relative combat power. As another of Dupuy’s verities states, “superior combat power always wins.”
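By way of illustration, here is a heavily simplified sketch of that comparison; it is not the actual QJM/TNDM calculation, and all factor values below are hypothetical. It treats combat power as force strength multiplied by a combat effectiveness value (CEV) and the applicable circumstantial variables:

```python
def combat_power(strength, cev, factors):
    # Toy stand-in: P = strength * CEV * product of circumstantial variables.
    p = strength * cev
    for value in factors.values():
        p *= value
    return p

# A numerically inferior attacker exploiting initiative (hypothetical values).
p_attacker = combat_power(30_000, cev=1.3,
                          factors={"surprise": 1.4, "mobility": 1.1})
p_defender = combat_power(45_000, cev=1.0,
                          factors={"defensive_posture": 1.3})

print(f"P_attacker / P_defender = {p_attacker / p_defender:.2f}")  # ~1.03
```

Even outnumbered 1-to-1.5, the attacker’s combat power edges past the defender’s once surprise, mobility, and superior effectiveness are factored in, which is the mechanism the verity describes.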

Human Factors In Warfare: Fear In A Lethal Environment

Chaplain (Capt.) Emil Kapaun (right) and Capt. Jerome A. Dolan, a medical officer with the 8th Cavalry Regiment, 1st Cavalry Division, carry an exhausted Soldier off the battlefield in Korea, early in the war. Kapaun was famous for exposing himself to enemy fire. When his battalion was overrun by a Chinese force in November 1950, rather than take an opportunity to escape, Kapaun voluntarily remained behind to minister to the wounded. In 2013, Kapaun posthumously received the Medal of Honor for his actions in the battle and later in a prisoner of war camp, where he died in May 1951. [Photo Credit: Courtesy of the U.S. Army Center of Military History]

[This piece was originally published on 27 June 2017.]

Trevor Dupuy’s theories about warfare were sometimes criticized by those who thought his scientific approach neglected the influence of the human element and chance, and amounted to an attempt to reduce war to mathematical equations. Anyone who has read Dupuy’s work knows this is not, in fact, the case.

Moral and behavioral (i.e., human) factors were central to Dupuy’s research and theorizing about combat. He wrote about them in detail in his books. In 1989, he presented a paper titled “The Fundamental Information Base for Modeling Human Behavior in Combat” at a symposium on combat modeling that provided a clear, succinct summary of his thinking on the topic.

He began by concurring with Carl von Clausewitz’s assertion that

[P]assion, emotion, and fear [are] the fundamental characteristics of combat… No one who has participated in combat can disagree with this Clausewitzean emphasis on passion, emotion, and fear. Without doubt, the single most distinctive and pervasive characteristic of combat is fear: fear in a lethal environment.

Despite the ubiquity of fear on the battlefield, Dupuy pointed out that there is no way to study its impact except through the historical record of combat in the real world.

We cannot replicate fear in laboratory experiments. We cannot introduce fear into field tests. We cannot create an environment of fear in training or in field exercises.

So, to study human reaction in a battlefield environment we have no choice but to go to the battlefield, not the laboratory, not the proving ground, not the training reservation. But, because of the nature of the very characteristics of combat which we want to study, we can’t study them during the battle. We can only do so retrospectively.

We have no choice but to rely on military history. This is why military history has been called the laboratory of the soldier.

He also pointed out that using military history analytically has its own pitfalls and must be handled carefully lest it be used to draw misleading or inaccurate conclusions.

I must also make clear my recognition that military history data is far from perfect, and that–even at best—it reflects the actions and interactions of unpredictable human beings. Extreme caution must be exercised when using or analyzing military history. A single historical example can be misleading for either of two reasons: (a) The data is inaccurate, or (b) The example may be true, but also be untypical.

But, when a number of respectable examples from history show consistent patterns of human behavior, then we can have confidence that behavior in accordance with the pattern is typical, and that behavior inconsistent with the pattern is either untypical, or is inaccurately represented.

He then stated very concisely the scientific basis for his method.

My approach to historical analysis is actuarial. We cannot predict the future in any single instance. But, on the basis of a large set of reliable experience data, we can predict what is likely to occur under a given set of circumstances.

Dupuy listed ten combat phenomena that he believed were directly or indirectly related to human behavior. He considered the list comprehensive, if not exhaustive.

I shall look at Dupuy’s treatment of each of these in future posts.

Artillery Effectiveness vs. Armor (Part 5-Summary)

U.S. Army 155mm field howitzer in Normandy. [padresteve.com]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

Table IX shows the distribution of cause of loss by type of armored vehicle. From the distribution it might be inferred that better-protected armored vehicles may be less vulnerable to artillery attack. Nevertheless, the heavily armored vehicles still suffered a minimum loss of 5.6 percent due to artillery. Unfortunately, the sample size for heavy tanks was very small, 18 of 980 cases, or only 1.8 percent of the total.

The data are limited at this time to the seven cases.[6] Further research is necessary to expand the data sample so as to permit proper statistical analysis of the effectiveness of artillery versus tanks.

NOTES

[18] Heavy armor includes the KV-1, KV-2, Tiger, and Tiger II.

[19] Medium armor includes the T-34, Grant, Panther, and Panzer IV.

[20] Light armor includes the T-60, T-70, Stuart, armored cars, and armored personnel carriers.

Artillery Effectiveness vs. Armor (Part 4-Ardennes)

Knocked-out Panthers in Krinkelt, Belgium, Battle of the Bulge, 17 December 1944. [worldwarphotos.info]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

NOTES

[14] From ORS Joint Report No. 1. An estimated total of 300 German armored vehicles were found following the battle.

[15] Data from 38th Infantry After Action Report (including “Sketch showing enemy vehicles destroyed by 38th Inf Regt. and attached units 17-20 Dec. 1944”), from 12th SS PzD strength report dated 8 December 1944, and from strengths indicated on the OKW briefing maps for 17 December (1st [circa 0600 hours], 2d [circa 1200 hours], and 3d [circa 1800 hours] situation), 18 December (1st and 2d situation), 19 December (2d situation), 20 December (3d situation), and 21 December (2d and 3d situation).

[16] Losses include confirmed and probable losses.

[17] Data from Combat Interview “26th Infantry Regiment at Dom Bütgenbach” and from 12th SS PzD, ibid.