
TDI Friday Read: Principles Of War & Verities Of Combat


Trevor Dupuy distilled his research and analysis on combat into a series of verities, or what he believed were empirically-derived principles. He intended for his verities to complement the classic principles of war, a slightly variable list of maxims of unknown derivation and provenance, which describe the essence of warfare largely from the perspective of Western societies. These are summarized below.

What Is The Best List Of The Principles Of War?

The Timeless Verities of Combat

Trevor N. Dupuy’s Combat Attrition Verities

Trevor Dupuy’s Combat Advance Rate Verities

Military History and Validation of Combat Models

Soldiers from Britain’s Royal Artillery train in a “virtual world” during Exercise Steel Sabre, 2015 [Sgt Si Longworth RLC (Phot)/MOD]

Military History and Validation of Combat Models

A Presentation at MORS Mini-Symposium on Validation, 16 Oct 1990

By Trevor N. Dupuy

In the operations research community there is some confusion as to the respective meanings of the words “validation” and “verification.” My definition of validation is as follows:

“To confirm or prove that the output or outputs of a model are consistent with the real-world functioning or operation of the process, procedure, or activity which the model is intended to represent or replicate.”

In this paper the word “validation” with respect to combat models is assumed to mean assurance that a model realistically and reliably represents the real world of combat. Or, in other words, given a set of inputs which reflect the anticipated forces and weapons in a combat encounter between two opponents under a given set of circumstances, the model is validated if we can demonstrate that its outputs are likely to represent what would actually happen in a real-world encounter between these forces under those circumstances.

Thus, in this paper, the word “validation” has nothing to do with the correctness of computer code, or the apparent internal consistency or logic of relationships of model components, or with the soundness of the mathematical relationships or algorithms, or with satisfying the military judgment or experience of one individual.

True validation of combat models is not possible without testing them against modern historical combat experience. And so, in my opinion, a model is validated only when it will consistently replicate a number of military history battle outcomes in terms of: (a) Success-failure; (b) Attrition rates; and (c) Advance rates.

“Why,” you may ask, “use imprecise, doubtful, and outdated history to validate a modern, scientific process? Field tests, experiments, and field exercises can provide data that is often instrumented, and certainly more reliable than any historical data.”

I recognize that military history is imprecise; it is only an approximate, often biased and/or distorted, and frequently inconsistent reflection of what actually happened on historical battlefields. Records are contradictory. I also recognize that there is an element of chance or randomness in human combat which can produce different results in otherwise apparently identical circumstances. I further recognize that history is retrospective, telling us only what has happened in the past. It cannot predict, if only because combat in the future will be fought with different weapons and equipment than were used in historical combat.

Despite these undoubted problems, military history provides more, and more accurate, information about the real world of combat, and how human beings behave and perform under varying circumstances of combat, than is possible to derive or compile from any other source. Despite some discrepancies, patterns are unmistakable and consistent. There is always a logical explanation for any individual deviations from the patterns. Historical examples that are inconsistent, or that are counter-intuitive, must be viewed with suspicion as possibly being poor or false history.

Of course absolute prediction of a future event is practically impossible, although not necessarily so theoretically. Any speculations which we make from tests or experiments must have some basis in terms of projections from past experience.

Training or demonstration exercises, proving ground tests, and field experiments all lack the one most pervasive and most important component of combat: fear in a lethal environment. There is no way in peacetime, or non-battlefield, exercises, tests, or experiments to be sure that the results are consistent with what would have been the behavior or performance of individuals or units or formations facing hostile firepower on a real battlefield.

We know from the writings of the ancients (for instance Sun Tze—pronounced Sun Dzuh—and Thucydides) that have survived to this day that human nature has not changed since the dawn of history. The human factor, the way in which humans respond to stimuli or circumstances, is the most important basis for speculation and prediction. What about the “scientific” approach of those who insist that we can have no confidence in the accuracy or reliability of historical data, that it is therefore unscientific, and therefore that it should be ignored? These people insist that only “scientific” data should be used in modeling.

In fact, every model is based upon fundamental assumptions that are intuitive and unprovable. The first step in the creation of a model is a step away from scientific reality in seeking a basis for an unreal representation of a real phenomenon. I have shown that the unreality is perpetuated when we use other imitations of reality as the basis for representing reality. History is less than perfect, but to ignore it, and to use only data that is bound to be wrong, assures that we will not be able to represent human behavior in real combat.

At the risk of repetition, and even of protesting too much, let me assure you that I am well aware of the shortcomings of military history:

The record which is available to us, which is history, only approximately reflects what actually happened. It is incomplete. It is often biased, it is often distorted. Even when it is accurate, it may be reflecting chance rather than normal processes. It is neither precise nor consistent. But, it provides more, and more accurate, information on the real world of battle than is available from the most thoroughly documented field exercises, proving ground tests, or laboratory or field experiments.

Military history is imperfect. At best it reflects the actions and interactions of unpredictable human beings. We must always realize that a single historical example can be misleading for either of two reasons: (1) The data may be inaccurate, or (2) The data may be accurate, but untypical.

Nevertheless, history is indispensable. I repeat that the most pervasive characteristic of combat is fear in a lethal environment. For all of its imperfections, military history and only military history represents what happens under the environmental condition of fear.

Unfortunately, and somewhat unfairly, the reported findings of S.L.A. Marshall about human behavior in combat, which he reported in Men Against Fire, have been recently discounted by revisionist historians who assert that he never could have physically performed the research on which the book’s findings were supposedly based. This has raised doubts about Marshall’s assertion that 85% of infantry soldiers didn’t fire their weapons in combat in World War II. That dramatic and surprising assertion was first challenged in a New Zealand study which found, on the basis of painstaking interviews, that most New Zealanders fired their weapons in combat. Thus, either Americans were different from New Zealanders, or Marshall was wrong. And now American historians have demonstrated that Marshall had had neither the time nor the opportunity to conduct the battlefield interviews which he claimed were the basis for his findings.

I knew Marshall moderately well. I was fully as aware of his weaknesses as of his strengths. He was not a historian. I deplored the imprecision and lack of documentation in Men Against Fire. But the revisionist historians have underestimated the shrewd journalistic assessment capability of “SLAM” Marshall. His observations may not have been scientifically precise, but they were generally sound, and his assessment has been shared by many American infantry officers whose judgments I also respect. As to the New Zealand study, how many people will, after the war, admit that they didn’t fire their weapons?

Perhaps most important, however, in judging the assessments of SLAM Marshall, is a recent study by a highly-respected British operations research analyst, David Rowland. Using impeccable OR methods Rowland has demonstrated that Marshall’s assessment of the inefficient performance, or non-performance, of most soldiers in combat was essentially correct. An unclassified version of Rowland’s study, “Assessments of Combat Degradation,” appeared in the June 1986 issue of the Royal United Services Institution Journal.

Rowland was led to his investigations by the fact that soldier performance in field training exercises, using the British version of MILES technology, was not consistent with historical experience. Even after allowances for degradation from the theoretical proving ground capability of weapons, defensive rifle fire almost invariably stopped any attack in these field trials. But history showed that attacks were, in fact, usually successful. He therefore began a study in which he made both imaginative and scientific use of historical data from over 100 small unit battles in the Boer War and the two World Wars. He demonstrated that when troops are under fire in actual combat, there is an additional degradation of performance by a factor ranging between 7 and 10. A degradation virtually of an order of magnitude! And this, mind you, on top of a comparable built-in degradation to allow for the difference between field conditions and proving ground conditions.
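Rowland's two-stage degradation stacks multiplicatively, and the arithmetic is easy to sketch. In the illustration below, only the roughly order-of-magnitude combat degradation factor comes from the text; the proving ground hit rate and the field degradation factor are assumed round numbers:

```python
def effective_hit_rate(proving_ground_rate, field_degradation, combat_degradation):
    """Chain multiplicative degradation factors onto a theoretical hit rate."""
    return proving_ground_rate / (field_degradation * combat_degradation)

# Suppose a rifle achieves a 50% hit rate on the proving ground (assumed),
# field conditions degrade that by a factor of 5 (assumed), and actual
# combat degrades it by a further factor of ~10 (Rowland's finding).
rate = effective_hit_rate(0.50, 5.0, 10.0)
print(f"Effective combat hit rate: {rate:.1%}")  # 1.0%
```

The point of the sketch is simply that the combat factor sits on top of, not instead of, the field-conditions factor, which is why battlefield performance falls so far below proving ground figures.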

Not only does Rowland’s study corroborate SLAM Marshall’s observations, it also shows conclusively that field exercises, training competitions, and demonstrations give results so different from real battlefield performance as to render them useless for validation purposes.

Which brings us back to military history. For all of the imprecision, internal contradictions, and inaccuracies inherent in historical data, at worst the deviations are generally far less than a factor of 2.0. This is at least four times more reliable than field test or exercise results.

I do not believe that history can ever repeat itself. The conditions of an event at one time can never be precisely duplicated later. But, bolstered by the Rowland study, I am confident that history paraphrases itself.

If large bodies of historical data are compiled, the patterns are clear and unmistakable, even if slightly fuzzy around the edges. Behavior in accordance with this pattern is therefore typical. As we have already agreed, sometimes behavior can be different from the pattern, but we know that it is untypical, and we can then seek the reason, which invariably can be discovered.

This permits what I call an actuarial approach to data analysis. We can never predict precisely what will happen under any circumstances. But the actuarial approach, with ample data, provides confidence that the patterns reveal what is likely to happen under those circumstances, even if the actual results in individual instances vary to some extent from this “norm” (to use the Soviet military historical expression).

It is relatively easy to take into account the differences in performance resulting from new weapons and equipment. The characteristics of the historical weapons and the current (or projected) weapons can be readily compared, and adjustments made accordingly in the validation procedure.

In the early 1960s an effort was made at SHAPE Headquarters to test the ATLAS Model against World War II data for the German invasion of Western Europe in May 1940. The first excursion had the Allies ending up on the Rhine River. This was apparently quite reasonable: the Allies substantially outnumbered the Germans, they had more tanks, and their tanks were better. However, despite these Allied advantages, the actual events in 1940 had not matched what ATLAS was now predicting. So the analysts did a little “fine tuning” (a splendid term for fudging). After the so-called adjustments, they tried again, and ran another excursion. This time the model had the Allies ending up in Berlin. The analysts (may the Lord forgive them!) were quite satisfied with the ability of ATLAS to represent modern combat. (Or at least they said so.) Their official conclusion was that the historical example was worthless, since weapons and equipment had changed so much in the preceding 20 years!

As I demonstrated in my book, Options of Command, the problem was that the model was unable to represent the German strategy, or to reflect the relative combat effectiveness of the opponents. The analysts should have reached a different conclusion. ATLAS had failed validation because a model that cannot with reasonable faithfulness and consistency replicate historical combat experience, certainly will be unable validly to reflect current or future combat.

How, then, do we account for what I have said about the fuzziness of patterns, and the fact that individual historical examples may not fit the patterns? I will give you my rules of thumb:

  1. The battle outcome should reflect historical success-failure experience about four times out of five.
  2. For attrition rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
  3. For the advance rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
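These rules of thumb lend themselves to a simple acceptance test. The sketch below is a minimal illustration, not Dupuy's procedure; the function name, record layout, and engagement data are all hypothetical, and only the thresholds (agreement about four times out of five, averages within a factor of about 1.5) come from the text:

```python
def validates(model_runs, history):
    """Each record: (attacker_won: bool, attrition_rate, advance_rate)."""
    n = len(history)
    # Rule 1: success/failure agreement about four times out of five.
    agree = sum(m[0] == h[0] for m, h in zip(model_runs, history))
    rule1 = agree / n >= 0.8

    # Rules 2 and 3: model averages within a factor of about 1.5 of history.
    def within_factor(model_avg, hist_avg, factor=1.5):
        return hist_avg / factor <= model_avg <= hist_avg * factor

    avg = lambda rows, i: sum(r[i] for r in rows) / n
    rule2 = within_factor(avg(model_runs, 1), avg(history, 1))
    rule3 = within_factor(avg(model_runs, 2), avg(history, 2))
    return rule1 and rule2 and rule3

# Five hypothetical scenarios: (success, % casualties/day, km advanced/day).
history = [(True, 2.0, 10.0), (False, 3.0, 0.0), (True, 1.5, 8.0),
           (True, 2.5, 12.0), (False, 4.0, 1.0)]
model   = [(True, 2.2, 11.0), (False, 2.8, 0.5), (True, 1.4, 9.0),
           (True, 3.0, 10.0), (True, 3.5, 2.0)]  # one success/failure miss
print(validates(model, history))  # True: 4/5 outcomes, averages within 1.5x
```

Note that the averages are compared in the aggregate, not engagement by engagement, which matches the actuarial spirit of the argument: individual deviations are tolerated so long as the pattern holds.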

Just as the heavens are the laboratory of the astronomer, so military history is the laboratory of the soldier and the military operations research analyst. The scientific basis for both astronomy and military science is the recording of the movements and relationships of bodies, and then analysis of those movements. (In the one case the bodies are heavenly, in the other they are very terrestrial.)

I repeat: Military history is the laboratory of the soldier. Failure of the analyst to use this laboratory will doom him to live with the scientific equivalent of Ptolemaic astronomy, whereas he could use the evidence available in his laboratory to progress to the military science equivalent of Copernican astronomy.

Human Factors In Warfare: Combat Effectiveness

An Israeli tank unit crosses the Sinai, heading for the Suez Canal, during the 1973 Arab-Israeli War [Israeli Government Press Office/HistoryNet]

It has been noted throughout the history of human conflict that some armies have consistently fought more effectively on the battlefield than others. The armies of Sparta in ancient Greece, for example, have come to epitomize the warrior ideal in Western societies. Rome’s legions have acquired a similar legendary reputation. Within armies, too, some units are known to have been better combatants than others. The U.S. 1st Infantry Division, the British Expeditionary Force of 1914, Japan’s Special Naval Landing Forces, the U.S. Marine Corps, the German 7th Panzer Division, and the Soviet Guards divisions are among the many superior fighting forces from history.

Trevor Dupuy found empirical substantiation of this in his analysis of historical combat data. He discovered that in 1943-1944 during World War II, after accounting for environmental and operational factors, the German Army consistently performed more effectively in ground combat than the U.S. and British armies. This advantage—measured in terms of casualty exchanges, terrain held or lost, and mission accomplishment—manifested whether the Germans were attacking or defending, or winning or losing. Dupuy observed that the Germans demonstrated an even more marked effectiveness in battle against the Soviet Army throughout the war.

He found the same disparity in battlefield effectiveness in combat data on the 1967 and 1973 Arab-Israeli wars. The Israeli Army performed uniformly better in ground combat against all of the Arab armies it faced in both conflicts, regardless of posture or outcome.

The clear and consistent patterns in the historical data led Dupuy to conclude that superior combat effectiveness on the battlefield was attributable to moral and behavioral (i.e. human) factors. The factors he believed to be the most important contributors to combat effectiveness were:

  • Leadership
  • Training or Experience
  • Morale, which may or may not include
  • Cohesion

Although the influence of human factors on combat effectiveness was identifiable and measurable in the aggregate, Dupuy was skeptical whether all of the individual moral and behavioral intangibles could be discretely quantified. He thought this particularly true for a set of factors that also contributed to combat effectiveness, but were a blend of human and operational factors. These include:

  • Logistical effectiveness
  • Time and Space
  • Momentum
  • Technical Command, Control, Communications
  • Intelligence
  • Initiative
  • Chance

Dupuy grouped all of these intangibles together into a composite factor he designated as relative combat effectiveness value, or CEV. The CEV, along with environmental and operational factors (Vf), comprises the Circumstantial Variables of Combat, which, when multiplied by force strength (S), determine the combat power (P) of a military force in Dupuy’s formulation.

P = S x Vf x CEV

Dupuy did not believe that CEVs were static values. As with human behavior, they vary somewhat from engagement to engagement. He did think that human factors were the most substantial of the combat variables. Therefore any model or theory of combat that failed to account for them would invariably be inaccurate.
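As a toy illustration of the formula, the sketch below computes P for two opposing forces. The numbers are entirely hypothetical; only the relationship P = S x Vf x CEV comes from Dupuy:

```python
def combat_power(strength, vf, cev):
    """Dupuy's combat power: force strength S times the Circumstantial
    Variables of Combat (environmental/operational factors Vf and the
    relative combat effectiveness value CEV)."""
    return strength * vf * cev

# Hypothetical: a defender with fewer troops, but favorable terrain and
# posture (folded into Vf) and a higher CEV, can match a larger attacker.
attacker = combat_power(strength=30000, vf=1.0, cev=1.0)
defender = combat_power(strength=20000, vf=1.3, cev=1.2)
print(attacker > defender)  # False: the smaller force has more combat power
```

The point of the formulation is exactly this: raw strength S is only one multiplicand, so a numerical comparison alone can rank two forces incorrectly.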

NOTES

This post is drawn from Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), Chapters 5, 7 and 9; Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), Chapters 8 and 10; and Trevor N. Dupuy, “The Fundamental Information Base for Modeling Human Behavior in Combat,” presented at the Military Operations Research Society (MORS) Mini-Symposium, “Human Behavior and Performance as Essential Ingredients in Realistic Modeling of Combat – MORIMOC II,” 22-24 February 1989, Center for Naval Analyses, Alexandria, Virginia.

Human Factors In Warfare: Interaction Of Variable Factors

The Second Battle of Ypres, 22 April to 25 May 1915 by Richard Jack [Canadian War Museum]

Trevor Dupuy thought that it was possible to identify and quantify the effects of some individual moral and behavioral (i.e. human) factors on combat. He also believed that many of these factors interacted with each other and with environmental and operational (i.e. physical) variables in combat as well, although parsing and quantifying these effects was a good deal more difficult. Among the combat phenomena he considered to be the result of interaction with human factors were:

Dupuy was critical of combat models and simulations that failed to address these relationships. The prevailing approach to the design of combat modeling used by the U.S. Department of Defense is known as the aggregated, hierarchical, or “bottom-up” construct. Bottom-up models generally use the Lanchester equations, or some variation on them, to calculate combat outcomes between individual soldiers, tanks, airplanes, and ships. These results are then used as inputs for models representing warfare at the brigade/division level, the outputs of which are then fed into theater-level simulations. Many in the American military operations research community believe bottom-up models to be the most realistic method of modeling combat.

Dupuy criticized this approach for many reasons (including the inability of the Lanchester equations to accurately replicate real-world combat outcomes), but mainly because it failed to represent human factors and their interactions with other combat variables.
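For readers unfamiliar with the Lanchester equations, the sketch below numerically integrates the classic "square law" variant (dA/dt = -bB, dB/dt = -aA), which is representative of the attrition calculations bottom-up models elaborate on; the kill-rate values here are assumed for illustration:

```python
def lanchester_square(A, B, a, b, dt=0.01, steps=10000):
    """Integrate Lanchester square-law attrition: each side's losses are
    proportional to the *other* side's surviving strength."""
    for _ in range(steps):
        if A <= 0 or B <= 0:
            break
        A, B = A - b * B * dt, B - a * A * dt  # simultaneous update
    return max(A, 0.0), max(B, 0.0)

# The square law conserves a*A^2 - b*B^2, so with equal per-unit
# effectiveness a force of 1000 annihilates 500 while retaining
# roughly sqrt(1000**2 - 500**2) ~ 866 survivors.
A_left, B_left = lanchester_square(A=1000, B=500, a=0.05, b=0.05)
print(round(A_left), round(B_left))
```

Note what the equations do not contain: no terms for leadership, training, morale, suppression, or fear. That omission is precisely the gap Dupuy's criticism targets.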

It is almost undeniable that there must be some interaction among and within the effects of physical as well as behavioral variable factors. I know of no way of measuring this. One thing that is reasonably certain is that the use of the bottom-up approach to model design and development cannot capture such interactions. (Most models in use today are bottom-up models, built up from one-on-one weapons interactions to many-on-many.) Presumably these interactions are captured in a top-down model derived from historical experience, of which there is at least one in existence [by which, Dupuy meant his own].

Dupuy was convinced that any model of combat that failed to incorporate human factors would invariably be inaccurate, which put him at odds with much of the American operations research community.

War does not consist merely of a number of duels. Duels, in fact, are only a very small—though integral—part of combat. Combat is a complex process involving interaction over time of many men and numerous weapons combined in a great number of different, and differently organized, units. This process cannot be understood completely by considering the theoretical interactions of individual men and weapons. Complete understanding requires knowing how to structure such interactions and fit them together. Learning how to structure these interactions must be based on scientific analysis of real combat data.[1]

While this unresolved debate went dormant some time ago, bottom-up models became the simulations of choice in Defense Department campaign planning and analysis. It should be noted, however, that the Defense Department disbanded its campaign-level modeling capabilities in 2011 because the use of the simulations in strategic analysis was criticized as “slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty.”

NOTES

[1] Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), p. 195.

Human Factors In Warfare: Diminishing Returns In Combat

[Jan Spousta; Wikimedia Commons]

One of the basic problems facing military commanders at all levels is deciding how to allocate available forces to accomplish desired objectives. A guiding concept in this sort of decision-making is economy of force, one of the fundamental and enduring principles of war. As defined in the 1954 edition of U.S. Army Field Manual FM 100-5, Field Service Regulations, Operations (which Trevor Dupuy believed contained the best listing of the principles):

Economy of Force

Minimum essential means must be employed at points other than that of decision. To devote means to unnecessary secondary efforts or to employ excessive means on required secondary efforts is to violate the principle of both mass and the objective. Limited attacks, the defensive, deception, or even retrograde action are used in noncritical areas to achieve mass in the critical area.

How do leaders determine the appropriate means for accomplishing a particular mission? The risk of assigning too few forces to a critical task is self-evident, but is it possible to allocate too many? Determining the appropriate means in battle has historically involved subjective calculations by commanders and their staff advisors of the relative combat power of friendly and enemy forces. Most often, it entails a rudimentary numerical comparison of numbers of troops and weapons and estimates of the influence of environmental and operational factors. An exemplar of this is the so-called “3-1 rule,” which holds that an attacking force must achieve a three to one superiority in order to defeat a defending force.

Through detailed analysis of combat data from World War II and the 1967 and 1973 Arab-Israeli wars, Dupuy determined that combat appears subject to a law of diminishing returns and that it is indeed possible to over-allocate forces to a mission.[1] By comparing the theoretical outcomes of combat engagements with the actual results, Dupuy discovered that a force with a combat power advantage greater than double that of its adversary seldom achieved proportionally better results than a 2-1 advantage. A combat power superiority of 3 or 4 to 1 rarely yielded additional benefit when measured in terms of casualty rates, ground gained or lost, and mission accomplishment.

Dupuy also found that attackers sometimes gained marginal benefits from combat power advantages greater than 2-1, though less proportionally and economically than the numbers of forces would suggest. Defenders, however, received no benefit at all from a combat power advantage beyond 2-1.
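Dupuy's finding translates naturally into a planning heuristic. The sketch below is a hypothetical allocator, not anything Dupuy himself proposed: it caps the combat power committed against each enemy concentration at roughly twice the enemy's local combat power, treating whatever is left over as the economy-of-force dividend available for other tasks. Point names and values are invented:

```python
def economize(total_power, enemy_points, cap=2.0):
    """Allocate friendly combat power against enemy concentrations,
    capping each allocation at `cap` times the enemy's local power
    (beyond which, per Dupuy, extra superiority buys little)."""
    allocations = {}
    remaining = total_power
    # Serve the largest enemy concentration first.
    for point, enemy_power in sorted(enemy_points.items(), key=lambda kv: -kv[1]):
        alloc = min(remaining, cap * enemy_power)
        allocations[point] = alloc
        remaining -= alloc
    return allocations, remaining  # surplus = economy-of-force dividend

alloc, surplus = economize(120.0, {"main effort": 30.0, "flank": 15.0, "rear": 5.0})
print(alloc, surplus)  # 60/30/10 committed, 20 units of power freed
```

Note the caveat in Dupuy's own summary below: the cap applies to combat power, not to raw troop numbers, so using it presupposes a reliable way to calculate relative combat power in the first place.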

Two human factors contributed to this apparent force limitation, Dupuy believed: Clausewitzian friction and breakpoints. As described in a previous post, friction accumulates on the battlefield through the innumerable human interactions between soldiers, degrading combat performance. This phenomenon increases as the number of soldiers increases.

A breakpoint represents a change of combat posture by a unit on the battlefield, for example, from attack to defense, or from defense to withdrawal. A voluntary breakpoint occurs due to mission accomplishment or a commander’s order. An involuntary breakpoint happens when a unit spontaneously ceases an attack, withdraws without orders, or breaks and routs. Involuntary breakpoints occur for a variety of reasons (though contrary to popular wisdom, seldom due to casualties). Soldiers are not automatons and will rarely fight to the death.

As Dupuy summarized,

It is obvious that the law of diminishing returns applies to combat. The old military adage that the greater the superiority the better, is not necessarily true. In the interests of economy of force, it appears to be unnecessary, and not really cost-effective, to build up a combat power superiority greater than two-to-one. (Note that this is not the same as a numerical superiority of two-to-one.)[2] Of course, to take advantage of this phenomenon, it is essential that a commander be satisfied that he has a reliable basis for calculating relative combat power. This requires an ability to understand and use “combat multipliers” with greater precision than permitted by U.S. Army doctrine today.[3] [Emphasis added.]

NOTES

[1] This section is drawn from Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), Chapter 11.

[2] This relates to Dupuy’s foundational conception of combat power, which is clearly defined and explained in Understanding War, Chapter 8.

[3] Dupuy, Understanding War, p. 139.

Human Factors In Warfare: Friction

The Prussian military philosopher Carl von Clausewitz identified the concept of friction in warfare in his book On War, published in 1832.

Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war… Countless minor incidents—the kind you can never really foresee—combine to lower the general level of performance, so that one always falls far short of the intended goal… Friction is the only concept that more or less corresponds to the factors that distinguish real war from war on paper… None of [the military machine’s] components is of one piece: each part is composed of individuals, every one of whom retains his potential of friction [and] the least important of whom may chance to delay things or somehow make them go wrong…

[Carl von Clausewitz, On War, Edited and translated by Michael Howard and Peter Paret (Princeton, NJ: Princeton University Press, 1984). Book One, Chapter 7, 119-120.]

While recognizing this hugely significant intangible element, Clausewitz also asserted that “[F]riction…brings about effects that cannot be measured, just because they are largely due to chance.” Nevertheless, the clearly self-evident nature of friction in warfare subsequently led to the assimilation of the concept into the thinking of most military theorists and practitioners.

Flash forward 140 years or so. While listening to a lecture on combat simulation, Trevor Dupuy had a flash of insight that led him to conclude that it was indeed possible to measure the effects of friction.[1] Based on his work with historical combat data, Dupuy knew that smaller-sized combat forces suffer higher casualty rates than do larger-sized forces. As the diagram at the top demonstrates, this is partly explained by the fact that small units have a much higher proportion of their front line troops exposed to hostile fire than large units.

However, this relationship can account for only a fraction of friction’s total effect. The average exposure of a company of 200 soldiers is about seven times greater than that of an army group of 100,000. Yet, casualty rates for a company in intensive combat can be up to 70 times greater than those of an army group. This discrepancy clearly shows the influence of another factor at work.

Dupuy hypothesized that this reflected the apparent influence of the relationship between dispersion, deployment, and friction on combat. As friction in combat accumulates through the aggregation of soldiers into larger-sized units, its effects degrade the lethal effects of weapons from their theoretical maximum. Dupuy calculated that friction affects a force of 100,000 ten times more than it does a unit of 200. Being an ambient, human factor on the battlefield, higher quality forces do a better job of managing friction’s effects than do lower quality ones.
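The inference can be checked with the arithmetic the text implies: if exposure accounts for a factor of about 7 between the company and the army group, but the observed casualty-rate gap can reach a factor of about 70, a residual factor of about 10 remains, which Dupuy attributed to friction:

```python
# Figures from the text: a company of 200 vs. an army group of 100,000.
exposure_ratio = 7.0             # company front-line exposure vs. army group
observed_casualty_ratio = 70.0   # company vs. army group, intensive combat

# Whatever exposure cannot explain is the residual attributed to friction.
friction_factor = observed_casualty_ratio / exposure_ratio
print(friction_factor)  # 10.0
```

This is the same order-of-magnitude factor by which, in Dupuy's calculation, friction degrades a force of 100,000 relative to a unit of 200.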

After looking at World War II combat casualty data to calculate the effect of friction on combat, Dupuy looked at casualty rates from earlier eras and found a steady correlation, which he believed further validated his hypothesis.

Despite the consistent fit of the data, Dupuy felt that his work was only the beginning of a proper investigation into the phenomenon.

During the periods of actual combat, the lower the level, the closer the loss rates will approach the theoretical lethalities of the weapons in the hands of the opposing combatants. But there will never be a very close relationship of such rates with the theoretical lethalities. War does not consist merely of a number of duels. Duels, in fact, are only a very small—though integral—part of combat. Combat is a complex process involving interaction over time of many men and numerous weapons combined in a great number of different, and differently organized, units. This process cannot be understood completely by considering the theoretical interactions of individual men and weapons. Complete understanding requires knowing how to structure such interactions and fit them together. Learning how to structure these interactions must be based on scientific analysis of real combat data.

NOTES

[1] This post is based on Trevor N. Dupuy, Understanding War: History and Theory of Combat (New York: Paragon House, 1987), Chapter 14.

War By Numbers Published

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017) 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

Human Factors In Warfare: Fatigue

Tom Lea, “The 2,000 Yard Stare” 1944 [Oil on canvas, 36 x 28 Life Collection of Art WWII, U.S. Army Center of Military History, Fort Belvoir, Virginia]

The idea that fatigue is a human factor in combat seems relatively uncontroversial. Military history is replete with examples of how the limits of human physical and mental endurance have affected the character of fighting and the outcome of battles. Perhaps the most salient aspect of military training is preparing soldiers to deal with the rigors of warfare.

Trevor Dupuy was aware that fatigue degrades the effectiveness of troops in combat, but he was never able to study the topic specifically himself. He was, however, aware of other examinations of historical experience relevant to the issue.

The effectiveness of a military force declines steadily every day that it is engaged in sustained combat. This is an indication that fear has a physical effect on human beings equivalent to severe exertion. S.L.A. Marshall documented this extremely well in a report that he wrote a few years before he died. I shall shortly have more to say about S.L.A. Marshall…

An approximate value for the daily effect of fatigue upon the effectiveness of weapons employment emerged from a HERO study several years ago. There is no question that fatigue has a comparable degrading effect upon the ability of a force to advance. I know of no research to ascertain that effect. Until such research is performed, I have arbitrarily assumed that the degrading effect of fatigue upon advance rates is the same as its degrading effect upon weapons effectiveness. To those who might be shocked at such an assumption, my response is: We know there is an effect; it is better to use a crude approximation of that effect than to ignore it…

During World War II when Colonel S.L.A. Marshall was the Chief Historian of the US European Theater of Operations, he undertook a number of interviews of units just after they had been in combat. After the war, in his book Men Against Fire, Marshall asserted that his interviews revealed that only 15% of US infantry soldiers fired their small arms weapons in combat. This revelation created something of a sensation at the time.

It has since been demonstrated that Marshall did not really have solid, scientific data for his assertion. But those who criticize Marshall for unscholarly, unscientific work should realize that in private life he was an exceptionally good newspaper reporter. His conclusions, based upon his observations, may have been largely intuitive, but I am convinced that they were generally, if not specifically, sound…

One of the few examples of the use of military history in the West in recent years was an important study done at the British Defence Operational Analysis Establishment (DOAE) by David Rowland. An unclassified condensation of that study was published in the June 1986 issue of the Journal of the Royal United Services Institution (RUSI). The article, “Assessments of Combat Degradation,” demonstrates conclusively that, in historical combat, small arms weapons have had only one-seventh to one-tenth of their theoretical effectiveness. Rowland does not attempt to say why this is so, but it is interesting that his value of one-seventh is very close to the S. L. A. Marshall 15% figure. Both values translate into casualty effects very similar to those that have emerged from my own research.

The intent of this post is not to rehash the debate on Marshall. As Dupuy noted above, even if Marshall’s conclusions were not based on empirical evidence, his observations on combat were nevertheless on to something important. (Details on the Marshall debate can be easily found with a Google search. A brief discussion took place on the old TDI Forum in 2007.)

David Rowland also presented a paper on the same topic Dupuy referenced above at the Military Operations Research Society (MORS) MORIMOC II conference in 1989, “Assessment of Combat Performance With Small Arms.” He later published a book detailing his research on the subject in 2006, The Stress of Battle: Quantifying Human Performance in Combat, which is very much worth tracking down and reading.

Dupuy provided a basic version of his theoretical combat exhaustion methodology on pages 223-224 in Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979).

Rules For Exhaustion Rates, 20th Century*

  1. The exhaustion factor (ex) of a fresh unit is 1.0; this is the maximum ex value.
  2. At the conclusion of an engagement, a new ex factor will be calculated for each side.
  3. A unit in normal offensive or defensive combat has its ex factor reduced by 0.05 for each consecutive day of combat; the ex factor cannot be less than 0.5.
  4. An attacking unit opposed by delaying tactics has its ex factor reduced by 0.05 per day.
  5. A defending unit in delay posture neither loses nor gains in its ex factor.
  6. A withdrawing unit, not seriously engaged, has its ex factor augmented at the rate of 0.05 per day.
  7. An advancing unit in pursuit, and not seriously delayed, neither loses nor gains in its ex factor.
  8. For a unit in reserve, or in non-active posture, an exhaustion factor of less than 1.0 is augmented at the rate of 0.1 per day.
  9. When a unit in combat, or recently in combat, is reinforced by a unit at least half of its size (in numbers of men), it adopts the ex factor of the reinforcing unit or—if the ex factor of the reinforcing unit is the same or lower than that of the reinforced—both adopt an ex factor 0.1 higher than that of the reinforced unit at the time of reinforcement, save that an ex factor cannot be greater than 1.0.
  10. When a unit in combat, or recently in combat, is reinforced by a unit less than half its size, but not less than one quarter its size, augmentations or modifications of ex factors will be 0.5 times those provided for in paragraph 9, above. When the reinforcing unit is less than one-quarter the size of the reinforced unit, but not less than one-tenth its size, augmentations or modifications of ex factors will be 0.25 times those provided for in paragraph 9, above.

* Approximate reflection of preliminary QJM assessment of effects of casualty and fatigue, WWII engagements. These rates are for division or smaller size; for corps and larger units exhaustion rates are calculated for component divisions and smaller separate units.

EXAMPLES OF APPLICATION

  1. A division in continuous offensive combat for five days stays in the line in inactive posture for two days, then resumes the offensive:
    1. Combat exhaustion effect: 1 – (5 x .05) = 0.75;
    2. Recuperation effect: 0.75 + (2 x .1) = 0.95.
  2. A division in defensive posture for fifteen days is ordered to undertake a counterattack:
    1. Combat exhaustion effect: 1 – (15 x .05) = 0.25; this is below the minimum ex factor, which therefore applies: 0.5;
    2. Recuperation effect: None; ex factor is 0.5.
  3. A division in offensive posture for three days is reinforced by two fresh brigades:
    1. Combat exhaustion effect: 1 – (3 x .05) = 0.85;
    2. Reinforcement effect: Augmentation from 0.85 to 1.0.
  4. A division in offensive posture for three days is reinforced by one fresh brigade:
    1. Combat exhaustion effect: 1 – (3 x .05) = 0.85;
    2. Reinforcement effect: 0.5 x augmentation from 0.85 to 1.0 = 0.93.
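Dupuy's exhaustion rules and the worked examples above translate readily into code. The following is a minimal Python sketch, not Dupuy's own implementation; the function names are mine, and the handling of a reinforcing unit below one-tenth the reinforced unit's size (for which the rules give no value) is my own assumption.

```python
EX_MAX = 1.0  # a fresh unit's exhaustion factor (rule 1)
EX_MIN = 0.5  # floor on combat exhaustion (rule 3)

def exhaust(ex, days_in_combat):
    """Rule 3: -0.05 per consecutive day of normal combat, floor 0.5."""
    return max(EX_MIN, ex - 0.05 * days_in_combat)

def recuperate(ex, days_inactive):
    """Rule 8: +0.1 per day in reserve or non-active posture, cap 1.0."""
    return min(EX_MAX, ex + 0.1 * days_inactive)

def reinforce(ex, reinforcing_ex, size_ratio):
    """Rules 9-10: the effect scales with the reinforcing unit's relative size."""
    if size_ratio >= 0.5:
        scale = 1.0
    elif size_ratio >= 0.25:
        scale = 0.5
    elif size_ratio >= 0.10:
        scale = 0.25
    else:
        return ex  # below one-tenth: no effect stated in the rules (my assumption)
    if reinforcing_ex > ex:
        target = reinforcing_ex          # adopt the reinforcing unit's factor
    else:
        target = min(EX_MAX, ex + 0.1)   # both rise 0.1 above the reinforced unit
    return ex + scale * (target - ex)
```

Run against the four examples above, the sketch reproduces Dupuy's figures: five days of offensive combat takes a fresh division to 0.75, two inactive days restore it to 0.95, fifteen days of defense bottoms out at the 0.5 floor, and a division at 0.85 reinforced by one fresh brigade (one-third its size) rises to roughly 0.93.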

Human Factors In Warfare: Combat Intensity

Battle of Spotsylvania by Thure de Thulstrup (1886) [Library of Congress]

Trevor Dupuy considered intensity to be another combat phenomenon influenced by human factors. The variation in the intensity of combat is an aspect of battle that is widely acknowledged but little studied.

No one who has paid any attention at all to historical combat statistics can have failed to notice that some battles have been very bloody and hard-fought, while others—often under circumstances superficially similar—have reached a conclusion with relatively light casualties on one or both sides. I don’t believe that it is terribly important to find a quantitative reason for such differences, mainly because I don’t think there is any quantitative reason. The differences are usually due to such things as the general circumstances existing when the battles are fought, the personalities of the commanders, and the natures of the missions or objectives of one or both of the hostile forces, and the interactions of these personalities and missions.

From my standpoint the principal reason for trying to quantify the intensity of a battle is for purposes of comparative analysis. Just because casualties are relatively low on one or both sides does not necessarily mean that the battle was not intensive. And if the casualty rates are misinterpreted, then the analysis of the outcome can be distorted. For instance, a battle fought on a flat plain between two military forces will almost invariably have higher casualty rates for both sides than will a battle between those same two forces in mountainous terrain. A battle between those two forces in a heavy downpour, or in cold, wintry weather, will have lower casualties than when the forces are opposed to each other, under otherwise identical circumstances, in good weather. Casualty rates for small forces in a given set of circumstances are invariably higher than the rates for larger forces under otherwise identical circumstances.

If all of these things are taken into consideration, then it is possible to assess combat intensity fairly consistently. The formula I use is as follows:

CI = CR / (sz’ x rc x hc)

Where:     CI = Combat Intensity Measure

CR = Casualty rate in percent per day

sz’ = Square root of sz, a factor reflecting the effect of size upon casualty rates, derived from historical experience

rc = The effect of terrain on casualty rates, derived from historical experience

hc = The effect of weather on casualty rates, derived from historical experience

I then (somewhat arbitrarily) identify seven levels of intensity:

0.00 to 0.49 Very low intensity (1)

0.50 to 0.99 Low intensity (56)

1.00 to 1.99 Normal intensity (213)

2.00 to 2.99 High intensity (101)

3.00 to 3.99 Very high intensity (30)

4.00 to 5.00 Extremely high intensity (17)

Over 5.00 Catastrophic outcome (20)

The numbers in parentheses show the distribution of intensity on each side in 219 battles in DMSi’s QJM database. The catastrophic battles include: the Russians in the Battles of Tannenberg and Gorlice-Tarnow on the Eastern Front in World War I; the Russians on the first day of the Battle of Kursk in July 1943; a British defeat in Malaya in December 1941; and 16 Japanese defeats on Okinawa. Each of these catastrophic instances, quantitatively identified, is consistent with a qualitative assessment of the outcome.
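The formula and the seven bands above can be sketched in a few lines of Python. This is only an illustration: the size (sz), terrain (rc), and weather (hc) factors must come from Dupuy's published tables, and the function names and the neutral placeholder values in the example are my own, not his.

```python
import math

def combat_intensity(cr, sz, rc, hc):
    """CI = CR / (sqrt(sz) * rc * hc), per Dupuy's formula above."""
    return cr / (math.sqrt(sz) * rc * hc)

def intensity_level(ci):
    """Map a CI value onto Dupuy's seven (admittedly arbitrary) bands."""
    if ci < 0.5:
        return "Very low intensity"
    if ci < 1.0:
        return "Low intensity"
    if ci < 2.0:
        return "Normal intensity"
    if ci < 3.0:
        return "High intensity"
    if ci < 4.0:
        return "Very high intensity"
    if ci <= 5.0:
        return "Extremely high intensity"
    return "Catastrophic outcome"
```

For example, with all three factors set to a neutral 1.0, a casualty rate of 2.5 percent per day yields CI = 2.5, which falls in the high-intensity band.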

[UPDATE]

As Clinton Reilly pointed out in the comments, this works better when the equation variables are provided. These are from Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), pp. 146, 147, 149.