Dupuy’s Verities: The Utility Of Defense

Battle of Franklin, 1864 by Kurz and Allison. Restoration by Adam Cuerden [Wikimedia Commons]

The third of Trevor Dupuy’s Timeless Verities of Combat is:

Defensive posture is necessary when successful offense is impossible.

From Understanding War (1987):

Even though offensive action is essential to ultimate combat success, a combat commander opposed by a more powerful enemy has no choice but to assume a defensive posture. Since defensive posture automatically increases the combat power of his force, the defending commander at least partially redresses the imbalance of forces. At a minimum he is able to slow down the advance of the attacking enemy, and he might even beat him. In this way, through negative combat results, the defender may ultimately hope to wear down the attacker to the extent that his initial relative weakness is transformed into relative superiority, thus offering the possibility of eventually assuming the offensive and achieving positive combat results. The Franklin and Nashville Campaign of our Civil War, and the El Alamein Campaign of World War II are examples.

Sometimes the commander of a numerically superior offensive force may reduce the strength of portions of his force in order to achieve decisive superiority for maximum impact on the enemy at some other critical point on the battlefield, with the result that those reduced-strength components are locally outnumbered. A contingent thus reduced in strength may therefore be required to assume a defensive posture, even though the overall operational posture of the marginally superior force is offensive, and the strengthened contingent of the same force is attacking with the advantage of superior combat power. A classic example was the role of Davout at Auerstadt when Napoléon was crushing the Prussians at Jena. Another is the role played by “Stonewall” Jackson’s corps at the Second Battle of Bull Run. [pp. 2-3]

This verity derives both from Dupuy's belief that the defensive posture is a human reaction to the lethal environment of combat and from his concurrence with Clausewitz's dictum that the defense is the stronger form of combat. Soldiers in combat will sometimes reach a collective conclusion that they can no longer advance in the face of lethal opposition, and will stop and seek cover and concealment to leverage the power of the defense. Exploiting the multiplying effect of the defensive is also a way for a force with weaker combat power to successfully engage a stronger one.

It also relates to the principle of war known as economy of force, as defined in the 1954 edition of the U.S. Army’s Field Manual FM 100-5, Field Service Regulations, Operations:

Minimum essential means must be employed at points other than that of decision. To devote means to unnecessary secondary efforts or to employ excessive means on required secondary efforts is to violate the principle of both mass and the objective. Limited attacks, the defensive, deception, or even retrograde action are used in noncritical areas to achieve mass in the critical area.

These concepts are well ingrained in modern U.S. Army doctrine. FM 3-0 Operations (2017) summarizes the defensive this way:

Defensive tasks are conducted to defeat an enemy attack, gain time, economize forces, and develop conditions favorable for offensive or stability tasks. Normally, the defense alone cannot achieve a decisive victory. However, it can set conditions for a counteroffensive or counterattack that enables Army forces to regain and exploit the initiative. Defensive tasks are a counter to enemy offensive actions. They defeat attacks, destroying as much of an attacking enemy as possible. They also preserve and maintain control over land, resources, and populations. The purpose of defensive tasks is to retain key terrain, guard populations, protect lines of communications, and protect critical capabilities against enemy attacks and counterattacks. Commanders can conduct defensive tasks to gain time and economize forces, so offensive tasks can be executed elsewhere. [Para 1-72]

UPDATE: Just as I posted this, out comes a contrarian view from U.S. Army CAPT Brandon Morgan via the Modern War Institute at West Point blog. He argues that the U.S. Army is not placing enough emphasis on preparing to conduct defensive operations:

In his seminal work On War, Carl von Clausewitz famously declared that, in comparison to the offense, “the defensive form of warfare is intrinsically stronger than the offensive.”

This is largely due to the defender’s ability to occupy key terrain before the attack, and is most true when there is sufficient time to prepare the defense. And yet within the doctrinal hierarchy of the four elements of decisive action (offense, defense, stability, and defense support of civil authorities), the US Army prioritizes offensive operations. Ultimately, this has led to training that focuses almost exclusively on offensive operations at the cost of deliberate planning for the defense. But in the context of a combined arms fight against a near-peer adversary, US Army forces will almost assuredly find themselves initially fighting in a defense. Our current neglect of deliberate planning for the defense puts these soldiers who will fight in that defense at grave risk.

Are There Only Three Ways of Assessing Military Power?

[This article was originally posted on 11 October 2016]

In 2004, military analyst and academic Stephen Biddle published Military Power: Explaining Victory and Defeat in Modern Battle, a book that addressed the fundamental question of what causes victory and defeat in battle. Biddle took to task the study of the conduct of war, which he asserted was based on “a weak foundation” of empirical knowledge. He surveyed the existing literature on the topic and determined that the plethora of theories of military success or failure fell into one of three analytical categories: numerical preponderance, technological superiority, or force employment.

Numerical preponderance theories explain victory or defeat in terms of material advantage, with the winners possessing greater numbers of troops, populations, economic production, or financial expenditures. Many of these involve gross comparisons of numbers, but some of the more sophisticated analyses involve calculations of force density, force-to-space ratios, or measurements of quality-adjusted “combat power.” Notions of threshold “rules of thumb,” such as the 3-1 rule, arise from this. These sorts of measurements form the basis for many theories of power in the study of international relations.

The next most influential means of assessment, according to Biddle, involve views on the primacy of technology. One school, systemic technology theory, looks at how technological advances shift balances within the international system. The best example of this is how the introduction of machine guns in the late 19th century shifted the advantage in combat to the defender, and the development of the tank in the early 20th century shifted it back to the attacker. Such measures are influential in international relations and political science scholarship.

The other school of technological determinacy is dyadic technology theory, which looks at relative advantages between states regardless of posture. This usually involves detailed comparisons of specific weapons systems, tanks, aircraft, infantry weapons, ships, missiles, etc., with the edge going to the more sophisticated and capable technology. The use of Lanchester theory in operations research and combat modeling is rooted in this thinking.

Biddle identified the third category of assessment as subjective assessments of force employment based on non-material factors including tactics, doctrine, skill, experience, morale or leadership. Analyses on these lines are the stock-in-trade of military staff work, military historians, and strategic studies scholars. However, international relations theorists largely ignore force employment and operations research combat modelers tend to treat it as a constant or omit it because they believe its effects cannot be measured.

The common weakness of all of these approaches, Biddle argued, is that “there are differing views, each intuitively plausible but none of which can be considered empirically proven.” For example, no one has yet been able to find empirical support substantiating the validity of the 3-1 rule or Lanchester theory. Biddle notes that the track record for predictions based on force employment analyses has also been “poor.” (To be fair, the problem of testing theory to see if it applies to the real world is not limited to assessments of military power; it afflicts security and strategic studies generally.)

So, is Biddle correct? Are there only three ways to assess military outcomes? Are they valid? Can we do better?

Should The Marines Take Responsibility For Counterinsurgency?

United States Marines in Nicaragua with the captured flag of Augusto César Sandino, 1932. [Wikipedia]

Sydney J. Freedberg, Jr recently reported in Breaking Defense that the Senate Armed Services Committee (SASC), led by chairman Senator John McCain, has asked Defense Secretary James Mattis to report on progress toward preparing the U.S. armed services to carry out the recently published National Defense Strategy oriented toward potential Great Power conflict.

Among a series of questions that challenge existing service roles and missions, Freedberg reported that the SASC wants to know if responsibility for carrying out “low-intensity missions,” such as counterinsurgency, should be the primary responsibility of one service:

Make the Marines a counterinsurgency force? The Senate starts by asking whether the military “would benefit from having one Armed Force dedicated primarily to low-intensity missions, thereby enabling the other Armed Forces to focus more exclusively on advanced peer competitors.” It quickly becomes clear that “one Armed Force” means “the Marines.” The bill questions the Army’s new Security Force Assistance Brigades (SFABs) and suggest shifting that role to the Marines. It also questions the survivability of Navy-Marine flotillas in the face of long-range sensors and precision missiles — so-called Anti-Access/Area Denial (A2/AD) systems — and asked whether the Marines’ core mission, “amphibious forced entry operations,” should even “remain an enduring mission for the joint force” given the difficulties. It suggests replacing large-deck amphibious ships, which carry both Marine aircraft and landing forces, with small aircraft carriers that could carry “larger numbers of more diverse strike aircraft” (but not amphibious vehicles or landing craft). Separate provisions of the bill restrict spending on the current Amphibious Assault Vehicle (Sec. 221) and the future Amphibious Combat Vehicle (Sec. 128) until the Pentagon addresses the viability of amphibious landings.

This proposed change would drastically shift the U.S. Marine Corps’ existing role and missions, something that will inevitably generate political and institutional resistance. Deemphasizing the ability to execute amphibious forced entry operations would be both a difficult strategic choice and an unpalatable political decision to fundamentally alter the Marine Corps’ institutional identity. Amphibious warfare has defined the Marines since the 1920s. It would, however, be a concession to the reality that technological change is driving the evolving character of warfare.

Perhaps This Is Not A Crazy Idea After All

The Marine Corps also has a long history with so-called “small wars”: contingency operations and counterinsurgencies. Tasking the Marines as the proponents for low-intensity conflict would help alleviate one of the basic conundrums facing U.S. land power: the U.S. Army’s inability to optimize its force structure due to the strategic need to be prepared to wage both low-intensity conflict and conventional combined arms warfare against peer or near-peer adversaries. The capabilities needed for waging each type of conflict are diverging, and continuing to field a general purpose force runs an increasing risk of creating an Army dangerously ill-suited for either. Giving the Marine Corps responsibility for low-intensity conflict would permit the Army to optimize most of its force structure for combined arms warfare, which poses the most significant threat to American national security (even if it is less likely than potential future low-intensity conflicts).

Making the Marines the lead for low-intensity conflict would also play to another bulwark of its institutional identity, as the world’s premier light infantry force (“Every Marine is a rifleman”). Even as light infantry becomes increasingly vulnerable on modern battlefields dominated by the lethality of long-range precision firepower, its importance for providing mass in irregular warfare remains undiminished. Technology has yet to solve the need for large numbers of “boots on the ground” in counterinsurgency.

The crucial role of manpower in counterinsurgency makes it somewhat short-sighted to follow through with the SASC’s suggestions to eliminate the Army’s new Security Force Assistance Brigades (SFABs) and to reorient Special Operations Forces (SOF) toward support for high-intensity conflict. As recent, so-called “hybrid warfare” conflicts in Lebanon and the Ukraine have demonstrated, future battlefields will likely involve a mix of combined arms and low-intensity warfare. It would be risky to assume that Marine Corps’ light infantry, as capable as they are, could tackle all of these challenges alone.

Giving the Marines responsibility for low-intensity conflict would not likely require a drastic change in force structure. Marines could continue to emphasize sea mobility and littoral warfare in circumstances other than forced entry. Giving up the existing large-deck amphibious landing ships would be a tough concession, admittedly, one that would likely reduce the Marines’ effectiveness in responding to contingencies.

It is not likely that a change as big as this will be possible without a protracted political and institutional fight. But fresh thinking and drastic changes in the U.S.’s approach to warfare are going to be necessary to effectively address both near and long-term strategic challenges.

Dupuy’s Verities: The Power Of Defense

Leonidas at Thermopylae, by Jacques-Louis David, 1814. [Wikimedia]

The second of Trevor Dupuy’s Timeless Verities of Combat is:

Defensive strength is greater than offensive strength.

From Understanding War (1987):

[Prussian military theorist, Carl von] Clausewitz expressed this: “Defense is the stronger form of combat.” It is possible to demonstrate by the qualitative comparison of many battles that Clausewitz is right and that posture has a multiplicative effect on the combat power of a military force that takes advantage of terrain and fortifications, whether hasty and rudimentary, or intricate and carefully prepared. There are many well-known examples of the need of an attacker for a preponderance of strength in order to carry the day against a well-placed and fortified defender. One has only to recall Thermopylae, the Alamo, Fredericksburg, Petersburg, and El Alamein to realize the advantage enjoyed by a defender with smaller forces, well placed, and well protected. [p. 2]

The advantages of fighting on the defensive and the benefits of cover and concealment in certain types of terrain have long been basic tenets in military thinking. Dupuy, however, considered defensive combat posture and the defensive value of terrain to be not merely additive advantages but combat power multipliers: circumstantial variables of combat that, when skillfully applied and exploited, could increase the overall fighting capability of a military force.

The statement [that the defensive is the stronger form of combat] implies a comparison of relative strength. It is essentially scalar and thus ultimately quantitative. Clausewitz did not attempt to define the scale of his comparison. However, by following his conceptual approach it is possible to establish quantities for this comparison. Depending upon the extent to which the defender has had the time and capability to prepare for defensive combat, and depending also upon such considerations as the nature of the terrain which he is able to utilize for defense, my research tells me that the comparative strength of defense to offense can range from a factor with a minimum value of about 1.3 to maximum value of more than 3.0. [p. 26]

The values Dupuy established for posture and terrain based on historical combat experience were as follows:

For example, Dupuy calculated that mounting even a hasty defense in rolling, gentle terrain with some vegetation could increase a force’s combat power by more than 50%. This is a powerful effect, achievable without the addition of any extra combat capability.
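The multiplicative logic of the example above can be sketched in a few lines of Python. The hasty-defense factor of 1.3 comes from the range Dupuy quotes; the terrain factor of 1.2 for rolling, gentle terrain with some vegetation is an illustrative assumption, since his full table of values is not reproduced here.

```python
# Dupuy treats posture and terrain as multiplicative modifiers of combat power.
# The hasty-defense factor (1.3) is the minimum Dupuy cites; the terrain
# factor (1.2) is an assumed, illustrative value.
POSTURE_HASTY_DEFENSE = 1.3
TERRAIN_ROLLING_GENTLE = 1.2  # assumption for illustration

def combat_power(base_strength, posture=1.0, terrain=1.0):
    """Apply circumstantial multipliers to a force's base combat power."""
    return base_strength * posture * terrain

base = 100.0
defended = combat_power(base, POSTURE_HASTY_DEFENSE, TERRAIN_ROLLING_GENTLE)
print(f"effective multiplier: {defended / base:.2f}")  # 1.3 * 1.2 = 1.56
```

Because the factors multiply rather than add, even two modest advantages compound into the more-than-50% increase described above, with no extra combat capability required.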

It should be noted that these values are both descriptive, in terms of defining Dupuy’s theoretical conception of the circumstantial variables of combat, and factors specifically calculated for use in his combat models. Some of these factors have found their way into models and simulations produced by others and into some U.S. military doctrinal publications, usually without attribution and shorn of explanatory context. (A good exploration of the relationship between the values Dupuy established for the circumstantial variables of combat and his combat models, and the pitfalls of applying them out of context, can be found here.)

While the impact of terrain on combat is certainly an integral part of current U.S. Army thinking at all levels, and is constantly factored into combat planning and assessment, its doctrine does not explicitly acknowledge the classic Clausewitzian notion of a power disparity between the offense and defense. Nor are the effects of posture or terrain thought of as combat multipliers.

However, the Army does implicitly recognize the advantage of the defensive through its stubbornly persistent adherence to the so-called 3-1 rule of combat. Its version of this (which the U.S. Marine Corps also uses) is described in doctrinal publications as “historical minimum planning ratios,” which prescribe a 3-1 advantage in numerical force ratio as necessary for an attacker to defeat a defender in a prepared or fortified position. Overcoming a defender in a hasty defense posture requires a 2.5-1 force ratio advantage. The force ratio advantages the Army considers necessary for decisive operations are even higher. While the 3-1 rule is a deeply problematic construct, the fact that it is the only quantitative planning factor included in current doctrine reveals a healthy respect for the inherent power of the defensive.
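The planning ratios just described reduce to a simple threshold lookup. A minimal sketch, assuming only the two thresholds quoted above (3:1 against a prepared or fortified position, 2.5:1 against a hasty defense); the function and dictionary names are my own, not doctrinal terms.

```python
# Minimum attacker:defender force ratios from the "historical minimum
# planning ratios" discussed above. Names here are illustrative.
MIN_PLANNING_RATIOS = {
    "prepared_or_fortified": 3.0,
    "hasty_defense": 2.5,
}

def meets_planning_ratio(attackers, defenders, posture):
    """Check whether an attacker meets the minimum doctrinal planning ratio."""
    return attackers / defenders >= MIN_PLANNING_RATIOS[posture]

print(meets_planning_ratio(3000, 1000, "prepared_or_fortified"))  # True
print(meets_planning_ratio(2400, 1000, "hasty_defense"))          # False
```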

Dupuy’s Verities: Offensive Action

Sheridan’s final charge at Winchester by Thune de Thulstrup (ca. 1886) [Library of Congress]

The first of Trevor Dupuy’s Timeless Verities of Combat is:

Offensive action is essential to positive combat results.

As he explained in Understanding War (1987):

This is like saying, “A team can’t score in football unless it has the ball.” Although subsequent verities stress the strength, value, and importance of defense, this should not obscure the essentiality of offensive action to ultimate combat success. Even in instances where a defensive strategy might conceivably assure a favorable war outcome—as was the case of the British against Napoleon, and as the Confederacy attempted in the American Civil War—selective employment of offensive tactics and operations is required if the strategic defender is to have any chance of final victory. [pp. 1-2]

The offensive has long been a staple element of the principles of war. From the 1954 edition of the U.S. Army Field Manual FM 100-5, Field Service Regulations, Operations:

71. Offensive

Only offensive action achieves decisive results. Offensive action permits the commander to exploit the initiative and impose his will on the enemy. The defensive may be forced on the commander, but it should be deliberately adopted only as a temporary expedient while awaiting an opportunity for offensive action or for the purpose of economizing forces on a front where a decision is not sought. Even on the defensive the commander seeks every opportunity to seize the initiative and achieve decisive results by offensive action. [Original emphasis]

Interestingly enough, the offensive no longer retains its primary place in current Army doctrinal thought. The Army consigned its list of the principles of war to an appendix in the 2008 edition of FM 3-0 Operations and omitted them entirely from the 2017 revision. As the current edition of FM 3-0 Operations lays it out, the offensive is now placed on a par with the defensive and stability operations:

Unified land operations are simultaneous offensive, defensive, and stability or defense support of civil authorities’ tasks to seize, retain, and exploit the initiative to shape the operational environment, prevent conflict, consolidate gains, and win our Nation’s wars as part of unified action (ADRP 3-0)…

At the heart of the Army’s operational concept is decisive action. Decisive action is the continuous, simultaneous combinations of offensive, defensive, and stability or defense support of civil authorities tasks (ADRP 3-0). During large-scale combat operations, commanders describe the combinations of offensive, defensive, and stability tasks in the concept of operations. As a single, unifying idea, decisive action provides direction for an entire operation. [p. I-16; original emphasis]

It is perhaps too easy to read too much into this change in emphasis. On the very next page, FM 3-0 describes offensive “tasks” thusly:

Offensive tasks are conducted to defeat and destroy enemy forces and seize terrain, resources, and population centers. Offensive tasks impose the commander’s will on the enemy. The offense is the most direct and sure means of seizing and exploiting the initiative to gain physical and cognitive advantages over an enemy. In the offense, the decisive operation is a sudden, shattering action that capitalizes on speed, surprise, and shock effect to achieve the operation’s purpose. If that operation does not destroy or defeat the enemy, operations continue until enemy forces disintegrate or retreat so they no longer pose a threat. Executing offensive tasks compels an enemy to react, creating or revealing additional weaknesses that an attacking force can exploit. [p. I-17]

The change in emphasis likely reflects recent U.S. military experience where decisive action has not yielded much in the way of decisive outcomes, as is mentioned in FM 3-0’s introduction. Joint force offensives in 2001 and 2003 “achieved rapid initial military success but no enduring political outcome, resulting in protracted counterinsurgency campaigns.” The Army now anticipates a future operating environment where joint forces can expect to “work together and with unified action partners to successfully prosecute operations short of conflict, prevail in large-scale combat operations, and consolidate gains to win enduring strategic outcomes” that are not necessarily predicated on offensive action alone. We may have to wait for the next edition of FM 3-0 to see if the Army has drawn valid conclusions from the recent past or not.

The Lanchester Equations and Historical Warfare

Allied force dispositions at the Battle of Anzio, on 1 February 1944. [U.S. Army/Wikipedia]

[The article below is reprinted from History, Numbers And War: A HERO Journal, Vol. 1, No. 1, Spring 1977, pp. 34-52]

The Lanchester Equations and Historical Warfare: An Analysis of Sixty World War II Land Engagements

By Janice B. Fain

Background and Objectives

The method by which combat losses are computed is one of the most critical parts of any combat model. The Lanchester equations, which state that a unit’s combat losses depend on the size of its opponent, are widely used for this purpose.

In addition to their use in complex dynamic simulations of warfare, the Lanchester equations have also served as simple mathematical models. In fact, during the last decade or so there has been an explosion of theoretical developments based on them. By now their variations and modifications are numerous, and “Lanchester theory” has become almost a separate branch of applied mathematics. However, compared with the effort devoted to theoretical developments, there has been relatively little empirical testing of the basic thesis that combat losses are related to force sizes.

One of the first empirical studies of the Lanchester equations was Engel’s classic work on the Iwo Jima campaign in which he found a reasonable fit between computed and actual U.S. casualties (Note 1). Later studies were somewhat less supportive (Notes 2 and 3), but an investigation of Korean war battles showed that, when the simulated combat units were constrained to follow the tactics of their historical counterparts, casualties during combat could be predicted to within 1 to 13 percent (Note 4).

Taken together, these various studies suggest that, while the Lanchester equations may be poor descriptors of large battles extending over periods during which the forces were not constantly in combat, they may be adequate for predicting losses while the forces are actually engaged in fighting. The purpose of the work reported here is to investigate 60 carefully selected World War II engagements. Since the durations of these battles were short (typically two to three days), it was expected that the Lanchester equations would show a closer fit than was found in studies of larger battles. In particular, one of the objectives was to repeat, in part, Willard’s work on battles of the historical past (Note 3).

The Data Base

Probably the most nearly complete and accurate collection of combat data is the data on World War II compiled by the Historical Evaluation and Research Organization (HERO). From their data HERO analysts selected, for quantitative analysis, the following 60 engagements from four major Italian campaigns:

Salerno, 9-18 Sep 1943, 9 engagements

Volturno, 12 Oct-8 Dec 1943, 20 engagements

Anzio, 22 Jan-29 Feb 1944, 11 engagements

Rome, 14 May-4 June 1944, 20 engagements

The complete data base is described in a HERO report (Note 5). The work described here is not the first analysis of these data. Statistical analyses of weapon effectiveness and the testing of a combat model (the Quantified Judgment Method, QJM) have been carried out (Note 6). The work discussed here examines these engagements from the viewpoint of the Lanchester equations to consider the question: “Are casualties during combat related to the numbers of men in the opposing forces?”

The variables chosen for this analysis are shown in Table 1. The “winners” of the engagements were specified by HERO on the basis of casualties suffered, distance advanced, and subjective estimates of the percentage of the commander’s objective achieved. Variable 12, the Combat Power Ratio, is based on the Operational Lethality Indices (OLI) of the units (Note 7).

The general characteristics of the engagements are briefly described. Of the 60, there were 19 attacks by British forces, 28 by U.S. forces, and 13 by German forces. The attacker was successful in 34 cases; the defender, in 23; and the outcomes of 3 were ambiguous. With respect to terrain, 19 engagements occurred in flat terrain; 24 in rolling, or intermediate, terrain; and 17 in rugged, or difficult, terrain. Clear weather prevailed in 40 cases; 13 engagements were fought in light or intermittent rain; and 7 in medium or heavy rain. There were 28 spring and summer engagements and 32 fall and winter engagements.

Comparison of World War II Engagements With Historical Battles

Since one purpose of this work is to repeat, in part, Willard’s analysis, comparison of these World War II engagements with the historical battles (1618-1905) studied by him will be useful. Table 2 shows a comparison of the distribution of battles by type. Willard’s cases were divided into two categories: I. meeting engagements, and II. sieges, attacks on forts, and similar operations. HERO’s World War II engagements were divided into four types based on the posture of the defender: 1. delay, 2. hasty defense, 3. prepared position, and 4. fortified position. If postures 1 and 2 are considered very roughly equivalent to Willard’s category I, then in both data sets the division into the two gross categories is approximately even.

The distribution of engagements across force ratios, given in Table 3, indicated some differences. Willard’s engagements tend to cluster at the lower end of the scale (1-2) and at the higher end (4 and above), while the majority of the World War II engagements were found in mid-range (1.5 – 4) (Note 8). The frequency with which the numerically inferior force achieved victory is shown in Table 4. It is seen that in neither data set are force ratios good predictors of success in battle (Note 9).

Table 3.

Results of the Analysis

Willard’s Correlation Analysis

There are two forms of the Lanchester equations. One represents the case in which firing units on both sides know the locations of their opponents and can shift their fire to a new target when a “kill” is achieved. This leads to the “square” law where the loss rate is proportional to the opponent’s size. The second form represents that situation in which only the general location of the opponent is known. This leads to the “linear” law in which the loss rate is proportional to the product of both force sizes.

As Willard points out, large battles are made up of many smaller fights. Some of these obey one law while others obey the other, so that the overall result should be a combination of the two. Starting with a general formulation of Lanchester’s equations, where g is the exponent of the target unit’s size (that is, g is 0 for the square law and 1 for the linear law), he derives the following linear equation:

log (nc/mc) = log E + g log (mo/no) (1)

where nc and mc are the casualties, E is related to the exchange ratio, and mo and no are the initial force sizes. Linear regression produces a value for g. However, instead of lying between 0 and 1, as expected, the g‘s range from -.27 to -.87, with the majority lying around -.5. (Willard obtains several values for g by dividing his data base in various ways—by force ratio, by casualty ratio, by historical period, and so forth.) A negative g value is unpleasant. As Willard notes:

Military theorists should be disconcerted to find g < 0, for in this range the results seem to imply that if the Lanchester formulation is valid, the casualty-producing power of troops increases as they suffer casualties (Note 3).

From his results, Willard concludes that his analysis does not justify the use of Lanchester equations in large-scale situations (Note 10).
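Since equation (1) is a straight line in log space, g and log E fall out of an ordinary least-squares fit of log (nc/mc) against log (mo/no). A minimal sketch of that procedure, using synthetic engagements generated with an assumed g of -0.5 (echoing the values Willard found); the force sizes are illustrative, not HERO's data.

```python
import math

# Equation (1): log(nc/mc) = log E + g * log(mo/no).
# Generate noiseless synthetic engagements with g = -0.5 and log E = 0.1
# (illustrative values), then recover them by least squares.
engagements = [(mo, no) for mo in (10000, 15000, 20000) for no in (8000, 12000)]
g_true, log_E = -0.5, 0.1
xs = [math.log10(mo / no) for mo, no in engagements]
ys = [log_E + g_true * x for x in xs]

# Ordinary least squares: slope = cov(x, y) / var(x).
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
g_hat = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
logE_hat = mean_y - g_hat * mean_x
print(round(g_hat, 3), round(logE_hat, 3))  # recovers -0.5 and 0.1
```

With real, noisy engagement data the fitted g would of course scatter, which is precisely where Willard's disconcerting negative values came from.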

Analysis of the World War II Engagements

Willard’s computations were repeated for the HERO data set. For these engagements, regression produced a value of -.594 for g (Note 11), in striking agreement with Willard’s results. Following his reasoning would lead to the conclusion that either the Lanchester equations do not represent these engagements, or that the casualty producing power of forces increases as their size decreases.

However, since the Lanchester equations are so convenient analytically and their use is so widespread, it appeared worthwhile to reconsider this conclusion. In deriving equation (1), Willard used binomial expansions in which he retained only the leading terms. It seemed possible that the poor results might be due, in part, to this approximation. If the first two terms of these expansions are retained, the following equation results:

log (nc/mc) = log E + g log [(mo-mc)/(no-nc)] (2)

Repeating this regression on the basis of this equation leads to g = -.413 (Note 12), hardly an improvement over the initial results.

A second attempt was made to salvage this approach. Starting with raw OLI scores (Note 7), HERO analysts have computed “combat potentials” for both sides in these engagements, taking into account the operational factors of posture, vulnerability, and mobility; environmental factors like weather, season, and terrain; and (when the record warrants) psychological factors like troop training, morale, and the quality of leadership. Replacing the factor (mo/no) in Equation (1) by the combat power ratio produces the result g = .466 (Note 13).

While this is an apparent improvement in the value of g, it is achieved at the expense of somewhat distorting the Lanchester concept. It does preserve the functional form of the equations, but it requires a somewhat strange definition of “killing rates.”

Analysis Based on the Differential Lanchester Equations

Analysis of the type carried out by Willard appears to produce very poor results for these World War II engagements. Part of the reason for this is apparent from Figure 1, which shows the scatterplot of the dependent variable, log (nc/mc), against the independent variable, log (mo/no). It is clear that no straight line will fit these data very well, and one with a positive slope would not be much worse than the “best” line found by regression. To expect the exponent to account for the wide variation in these data seems unreasonable.

Here, a simpler approach will be taken. Rather than use the data to attempt to discriminate directly between the square and the linear laws, they will be used to estimate linear coefficients under each assumption in turn, starting with the differential formulation rather than the integrated equations used by Willard.

In their simplest differential form, the Lanchester equations may be written:

Square law: dA/dt = -kdD and dD/dt = -kaA (3)

Linear law: dA/dt = -k’dAD and dD/dt = -k’aAD (4)

where

A (D) is the size of the attacker (defender),

dA/dt (dD/dt) is the attacker’s (defender’s) loss rate,

ka, k’a (kd, k’d) are the attacker’s (defender’s) killing rates.

For this analysis, the day is taken as the basic time unit, and the loss rate per day is approximated by the casualties per day. Results of the linear regressions are given in Table 5. No conclusions should be drawn from the fact that the correlation coefficients are higher in the linear law case since this is expected for purely technical reasons (Note 14). A better picture of the relationships is again provided by the scatterplots in Figure 2. It is clear from these plots that, as in the case of the logarithmic forms, a single straight line will not fit the entire set of 60 engagements for either of the dependent variables.
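The regression setup these equations imply can be sketched as follows, assuming (as stated above) that casualties per day approximate the loss rate: under the square law the attacker’s daily casualties are regressed on the defender’s strength, and under the linear law on the product of both strengths. All engagement figures below are invented.

```python
# Hedged sketch of the killing-rate fits implied by equations (3) and (4):
# square law  -> attacker casualties/day = kd * D   (slope kd)
# linear law  -> attacker casualties/day = k'd * A * D  (slope k'd)
# Strengths and casualties below are fabricated placeholders, generated to
# follow the square law exactly so the recovered kd is known.
attacker = [20000.0, 15000.0, 30000.0, 12000.0, 25000.0]
defender = [10000.0, 12000.0, 15000.0, 8000.0, 20000.0]
att_cas_per_day = [0.004 * d for d in defender]   # exactly square-law data

def slope_through_origin(x, y):
    """Least-squares slope k for y = k*x (no intercept term)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

kd = slope_through_origin(defender, att_cas_per_day)                 # square law
kd_lin = slope_through_origin([a * d for a, d in zip(attacker, defender)],
                              att_cas_per_day)                       # linear law

print(f"square-law kd = {kd:.4f}, linear-law k'd = {kd_lin:.2e}")
```

Note that the two slopes are not directly comparable: kd is casualties per day per opposing soldier, while k’d is casualties per day per product of force sizes, which is the units issue raised in Note 17.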

To investigate ways in which the data set might profitably be subdivided for analysis, T-tests of the means of the dependent variable were made for several partitionings of the data set. The results, shown in Table 6, suggest that dividing the engagements by defense posture might prove worthwhile.

Results of the linear regressions by defense posture are shown in Table 7. For each posture, the equation that seemed to give a better fit to the data is underlined (Note 15). From this table, the following very tentative conclusions might be drawn:

  • In an attack on a fortified position, the attacker suffers casualties by the square law; the defender suffers casualties by the linear law. That is, the defender is aware of the attacker’s position, while the attacker knows only the general location of the defender. (This is similar to Deitchman’s guerrilla model. Note 16).
  • This situation is apparently reversed in the cases of attacks on prepared positions and hasty defenses.
  • Delaying situations seem to be treated better by the square law for both attacker and defender.

Table 8 summarizes the killing rates by defense posture. The defender has a much higher killing rate than the attacker (almost 3 to 1) in a fortified position. In a prepared position and hasty defense, the attacker appears to have the advantage. However, in a delaying action, the defender’s killing rate is again greater than the attacker’s (Note 17).
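The mixed pairing suggested for fortified positions (attacker losing by the square law, defender by the linear law, as in Deitchman’s model) can be integrated numerically. A sketch with invented strengths and killing rates; recall from Note 17 that square-law and linear-law rates are in different units:

```python
# Illustrative Euler integration of the mixed case suggested for fortified
# positions: attacker losses follow the square law, defender losses the
# linear law (as in Deitchman's guerrilla model). Strengths and killing
# rates are invented for illustration; they are not fitted values from the study.
def mixed_law(A, D, kd=0.010, ka=1.5e-6, dt=0.1, days=10.0):
    """Integrate dA/dt = -kd*D (square law), dD/dt = -ka*A*D (linear law)."""
    for _ in range(int(days / dt)):
        if A <= 0 or D <= 0:
            break
        dA = -kd * D          # attacker hit at a rate ~ defender strength
        dD = -ka * A * D      # defender hit at a rate ~ product of strengths
        A, D = A + dA * dt, D + dD * dt
    return A, D

A_end, D_end = mixed_law(20000.0, 10000.0)
print(f"attacker: 20000 -> {A_end:.0f}, defender: 10000 -> {D_end:.0f}")
```

With these particular invented rates the defender’s cumulative losses outrun the attacker’s, illustrating how the choice of law for each side, not just the killing rates, shapes the outcome.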

Figure 3 shows the scatterplots for these cases. Examination of these plots suggests that a tentative answer to the study question posed above might be: “Yes, casualties do appear to be related to the force sizes, but the relationship may not be a simple linear one.”

In several of these plots it appears that two or more functional forms may be involved. Consider, for example, the defender’s casualties as a function of the attacker’s initial strength in the case of a hasty defense. This plot is repeated in Figure 4, where the points appear to fit the curves sketched there. It would appear that there are at least two, possibly three, separate relationships. Also on that plot, the individual engagements have been identified, and it is interesting to note that on the curve marked (1), five of the seven attacks were made by Germans—four of them from the Salerno campaign. It would appear from this that German attacks are associated with higher than average defender casualties for the attacking force size. Since there are so few data points, this cannot be more than a hint or interesting suggestion.

Future Research

This work suggests two conclusions that might have an impact on future lines of research on combat dynamics:

  • Tactics appear to be an important determinant of combat results. This conclusion, in itself, does not appear startling, at least not to the military. However, it does not always seem to have been the case that tactical questions have been considered seriously by analysts in their studies of the effects of varying force levels and force mixes.
  • Historical data of this type offer rich opportunities for studying the effects of tactics. For example, consideration of the narrative accounts of these battles might permit re-coding the engagements into a larger, more sensitive set of engagement categories. (It would, of course, then be highly desirable to add more engagements to the data set.)

While predictions of the future are always dangerous, I would nevertheless like to suggest what appears to be a possible trend. While military analysis of the past two decades has focused almost exclusively on the hardware of weapons systems, at least part of our future analysis will be devoted to the more behavioral aspects of combat.

Janice Bloom Fain, a Senior Associate of CACI, Inc., is a physicist whose special interests are in the applications of computer simulation techniques to industrial and military operations; she is the author of numerous reports and articles in this field. This paper was presented by Dr. Fain at the Military Operations Research Symposium at Fort Eustis, Virginia.

NOTES

[1.] J. H. Engel, “A Verification of Lanchester’s Law,” Operations Research 2, 163-171 (1954).

[2.] For example, see R. L. Helmbold, “Some Observations on the Use of Lanchester’s Theory for Prediction,” Operations Research 12, 778-781 (1964); H. K. Weiss, “Lanchester-Type Models of Warfare,” Proceedings of the First International Conference on Operational Research, 82-98, ORSA (1957); H. K. Weiss, “Combat Models and Historical Data; The U.S. Civil War,” Operations Research 14, 750-790 (1966).

[3.] D. Willard, “Lanchester as a Force in History: An Analysis of Land Battles of the Years 1618-1905,” RAC-TD-74, Research Analysis Corporation (1962).

[4.] The method of computing the killing rates forced a fit at the beginning and end of the battles. See W. Fain, J. B. Fain, L. Feldman, and S. Simon, “Validation of Combat Models Against Historical Data,” Professional Paper No. 27, Center for Naval Analyses, Arlington, Virginia (1970).

[5.] HERO, “A Study of the Relationship of Tactical Air Support Operations to Land Combat, Appendix B, Historical Data Base.” Historical Evaluation and Research Organization, report prepared for the Defense Operational Analysis Establishment, U.K.T.S.D., Contract D-4052 (1971).

[6.] T. N. Dupuy, The Quantified Judgment Method of Analysis of Historical Combat Data, HERO Monograph, (January 1973); HERO, “Statistical Inference in Analysis in Combat,” Annex F, Historical Data Research on Tactical Air Operations, prepared for Headquarters USAF, Assistant Chief of Staff for Studies and Analysis, Contract No. F-44620-70-C-0058 (1972).

[7.] The Operational Lethality Index (OLI) is a measure of weapon effectiveness developed by HERO.

[8.] Since Willard’s data did not indicate which side was the attacker, his force ratio is defined to be (larger force/smaller force). The HERO force ratio is (attacker/defender).

[9.] Since the criteria for success may have been rather different for the two sets of battles, this comparison may not be very meaningful.

[10.] This work includes more complex analysis in which the possibility that the two forces may be engaging in different types of combat is considered, leading to the use of two exponents rather than the single one. Stochastic combat processes are also treated.

[11.] Correlation coefficient = -.262;

Intercept = .00115; slope = -.594.

[12.] Correlation coefficient = -.184;

Intercept = .0539; slope = -.413.

[13.] Correlation coefficient = .303;

Intercept = -.638; slope = .466.

[14.] Correlation coefficients for the linear law are inflated with respect to the square law since the independent variable is a product of force sizes and, thus, has a higher variance than the single force size unit in the square law case.

[15.] This is a subjective judgment based on the following considerations: Since the correlation coefficient is inflated for the linear law, when it is lower the square law case is chosen. When the linear law correlation coefficient is higher, the case with the intercept closer to 0 is chosen.

[16.] S. J. Deitchman, “A Lanchester Model of Guerrilla Warfare,” Operations Research 10, 818-827 (1962).

[17.] As pointed out by Mr. Alan Washburn, who prepared a critique on this paper, when comparing numerical values of the square law and linear law killing rates, the differences in units must be considered. (See footnotes to Table 7).

What Is A Breakpoint?

French retreat from Russia in 1812 by Illarion Mikhailovich Pryanishnikov (1812) [Wikipedia]

After discussing with Chris the series of recent posts on the subject of breakpoints, it seemed appropriate to provide a better definition of exactly what a breakpoint is.

Dorothy Kneeland Clark was the first to define the notion of a breakpoint in her study, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, The Johns Hopkins University: Baltimore, 1954). She found it was not quite as clear-cut as it seemed and the working definition she arrived at was based on discussions and the specific combat outcomes she found in her data set [pp 9-12].

DETERMINATION OF BREAKPOINT

The following definitions were developed out of many discussions. A unit is considered to have lost its combat effectiveness when it is unable to carry out its mission. The onset of this inability constitutes a breakpoint. A unit’s mission is the objective assigned in the current operations order or any other instructional directive, written or verbal. The objective may be, for example, to attack in order to take certain positions, or to defend certain positions. 

How does one determine when a unit is unable to carry out its mission? The obvious indication is a change in operational directive: the unit is ordered to stop short of its original goal, to hold instead of attack, to withdraw instead of hold. But one or more extraneous elements may cause the issue of such orders: 

(1) Some other unit taking part in the operation may have lost its combat effectiveness, and its predicament may force changes in the tactical plan. For example the inability of one infantry battalion to take a hill may require that the two adjoining battalions be stopped to prevent exposing their flanks by advancing beyond it. 

(2) A unit may have been assigned an objective on the basis of a G-2 estimate of enemy weakness which, as the action proceeds, proves to have been over-optimistic. The operations plan may, therefore, be revised before the unit has carried out its orders to the point of losing combat effectiveness. 

(3) The commanding officer, for reasons quite apart from the tactical attrition, may change his operations plan. For instance, General Ridgway in May 1951 was obliged to cancel his plans for a major offensive north of the 38th parallel in Korea in obedience to top level orders dictated by political considerations. 

(4) Even if the supposed combat effectiveness of the unit is the determining factor in the issuance of a revised operations order, a serious difficulty in evaluating the situation remains. The commanding officer’s decision is necessarily made on the basis of information available to him plus his estimate of his unit’s capacities. Either or both of these bases may be faulty. The order may belatedly recognize a collapse which has in fact occurred hours earlier, or a commanding officer may withdraw a unit which could hold for a much longer time. 

It was usually not hard to discover when changes in orders resulted from conditions such as the first three listed above, but it proved extremely difficult to distinguish between revised orders based on a correct appraisal of the unit’s combat effectiveness and those issued in error. It was concluded that the formal order for a change in mission cannot be taken as a definitive indication of the breakpoint of a unit. It seemed necessary to go one step farther and search the records to learn what a given battalion did regardless of provisions in formal orders… 

CATEGORIES OF BREAKPOINTS SELECTED 

In the engagements studied the following categories of breakpoint were finally selected: 

Category of Breakpoint | No. Analyzed
I. Attack → rapid reorganization → attack | 9
II. Attack → defense (no longer able to attack without a few days of recuperation and reinforcement) | 21
III. Defense → withdrawal by order to a secondary line | 13
IV. Defense → collapse | 5

Disorganization and panic were taken as unquestionable evidence of loss of combat effectiveness. It appeared, however, that there were distinct degrees of magnitude in these experiences. In addition to the expected breakpoints at attack → defense and defense → collapse, a further category, I, seemed to be indicated to include situations in which an attacking battalion was “pinned down” or forced to withdraw in partial disorder but was able to reorganize in 4 to 24 hours and continue attacking successfully. 

Category II includes (a) situations in which an attacking battalion was ordered into the defensive after severe fighting or temporary panic; (b) situations in which a battalion, after attacking successfully, failed to gain ground although still attempting to advance and was finally ordered into defense, the breakpoint being taken as occurring at the end of successful advance. In other words, the evident inability of the unit to fulfill its mission was used as the criterion for the breakpoint whether orders did or did not recognize its inability. Battalions after experiencing such a breakpoint might be able to recuperate in a few days to the point of renewing successful attack or might be able to continue for some time in defense. 

The sample of breakpoints coming under category IV, defense → collapse, proved to be very small (5) and unduly weighted in that four of the examples came from the same engagement. It was, therefore, discarded as probably not representative of the universe of category IV breakpoints,* and another category (III) was added: situations in which battalions on the defense were ordered withdrawn to a quieter sector. Because only those instances were included in which the withdrawal orders appeared to have been dictated by the condition of the unit itself, it is believed that casualty levels for this category can be regarded as but slightly lower than those associated with defense → collapse. 

In both categories II and III, “defense” represents an active situation in which the enemy is attacking aggressively. 

* It had been expected that breakpoints in this category would be associated with very high losses. Such did not prove to be the case. In whatever way the data were approached, most of the casualty averages were only slightly higher than those associated with category II (attack → defense), although the spread in data was wider. It is believed that factors other than casualties, such as bad weather, difficult terrain, and heavy enemy artillery fire undoubtedly played major roles in bringing about the collapse in the four units taking part in the same engagement. Furthermore, the casualty figures for the four units themselves are in question because, as the situation deteriorated, many of the men developed severe cases of trench foot and combat exhaustion, but were not evacuated, as they would have been in a less desperate situation, and did not appear in the casualty records until they had made their way to the rear after their units had collapsed.

In 1987-1988, Trevor Dupuy and colleagues at Data Memory Systems, Inc. (DMSi), Janice Fain, Rich Anderson, Gay Hammerman, and Chuck Hawkins sought to create a broader, more generally applicable definition for breakpoints for the study, Forced Changes of Combat Posture (DMSi, Fairfax, VA, 1988) [pp. I-2-3].

The combat posture of a military force is the immediate intention of its commander and troops toward the opposing enemy force, together with the preparations and deployment to carry out that intention. The chief combat postures are attack, defend, delay, and withdraw.

A change in combat posture (or posture change) is a shift from one posture to another, as, for example, from defend to attack or defend to withdraw. A posture change can be either voluntary or forced. 

A forced posture change (FPC) is a change in combat posture by a military unit that is brought about, directly or indirectly, by enemy action. Forced posture changes are characteristically and almost always changes to a less aggressive posture. The most usual FPCs are from attack to defend and from defend to withdraw (or retrograde movement). A change from withdraw to combat ineffectiveness is also possible. 

Breakpoint is a term sometimes used as synonymous with forced posture change, and sometimes used to mean the collapse of a unit into ineffectiveness or rout. The latter meaning is probably more common in general usage, while forced posture change is the more precise term for the subject of this study. However, for brevity and convenience, and because this study has been known informally since its inception as the “Breakpoints” study, the term breakpoint is sometimes used in this report. When it is used, it is synonymous with forced posture change.

Hopefully this will help clarify the previous discussions of breakpoints on the blog.

U.S. Army Invests In Revitalizing Long Range Precision Fires Capabilities

U.S. Marines from the The 11th MEU fire their M777 Lightweight 155mm Howitzer during Exercise Alligator Dagger, Dec. 18, 2016. (U.S. Marine Corps/Lance Cpl. Zachery C. Laning/Military.com)

In 2016, Michael Jacobson and Robert H. Scales amplified a warning that after years of neglect during the counterinsurgency war in Iraq and Afghanistan, the U.S. was falling behind potential adversaries in artillery and long range precision fires capabilities. The U.S. Army had already taken note of the performance of Russian artillery in Ukraine, particularly the strike at Zelenopillya in 2014.

Since then, the U.S. Army and Marine Corps have started working on a new Multi-Domain Battle concept aimed at countering the anti-access/area denial (A2/AD) capabilities of potential foes. In 2017, U.S. Army Chief of Staff General Mark Milley made rapid improvement in long range precision fires capabilities the top priority for the service’s modernization effort. It currently aims to field new field artillery, rocket, and missile weapons capable of striking at distances from 70 to 500 kilometers – double the existing ranges – within five years.

The value of ground-based long-range precision fires has been demonstrated recently by the effectiveness of U.S. artillery support, particularly U.S. Army and Marine Corps 155mm howitzers, for Iraqi security forces in retaking Mosul, Syrian Democratic Forces assaulting Raqqa, and in protection of Syrian Kurds being attacked by Russian mercenaries and Syrian regime forces.

According to Army historian Luke O’Brian, the Fiscal Year 2019 Defense budget includes funds to buy 28,737 XM1156 Precision Guided Kit (PGK) 155mm howitzer munitions, which includes replacements for the 6,269 rounds expended during Operation INHERENT RESOLVE. O’Brian also notes that the Army will buy 2,162 M982 Excalibur 155mm rounds in 2019 and several hundred each in following years.

In addition, in an effort to reduce the dependence on potentially vulnerable Global Positioning System (GPS) satellite networks for precision fires capabilities, the Army has awarded a contract to BAE Systems to develop Precision Guided Kit-Modernization (PGK-M) rounds with internal navigational capacity.

While the numbers appear large at first glance, data on U.S. artillery expenditures in Operation DESERT STORM and IRAQI FREEDOM (also via Luke O’Brian) shows just how much the volume of long-range fires has changed just since 1991. For the U.S. at least, precision fires have indeed replaced mass fires on the battlefield.

Breakpoints in U.S. Army Doctrine

U.S. Army prisoners of war captured by German forces during the Battle of the Bulge in 1944. [Wikipedia]

One of the least studied aspects of combat is battle termination. Why do units in combat stop attacking or defending? Shifts in combat posture (attack, defend, delay, withdrawal) are usually voluntary, directed by a commander, but they can also be involuntary, as a result of direct or indirect enemy action. Why do involuntary changes in combat posture, known as breakpoints, occur?

As Chris pointed out in a previous post, the topic of breakpoints has only been addressed by two known studies since 1954. Most existing military combat models and wargames address breakpoints in at least a cursory way, usually through some calculation based on personnel casualties. Both of the breakpoints studies suggest that involuntary changes in posture are seldom related to casualties alone, however.

Current U.S. Army doctrine addresses changes in combat posture through discussions of culmination points in the attack, and transitions from attack to defense, defense to counterattack, and defense to retrograde. But these all pertain to voluntary changes, not breakpoints.

Army doctrinal literature has little to say about breakpoints, either in the context of friendly forces or potential enemy combatants. The little it does say relates to the effects of fire on enemy forces and is based on personnel and material attrition.

According to ADRP 1-02 Terms and Military Symbols, an enemy combat unit is considered suppressed after suffering 3% personnel casualties or material losses, neutralized by 10% losses, and destroyed upon sustaining 30% losses. The sources and methodology for deriving these figures are unknown, although these specific terms and numbers have been a part of Army doctrine for decades.
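As a minimal illustration, the quoted thresholds amount to a simple classification by fractional losses (the function name and the “effective” label for sub-threshold losses are my own additions, not doctrine):

```python
# Minimal sketch of the attrition thresholds quoted from ADRP 1-02:
# 3% -> suppressed, 10% -> neutralized, 30% -> destroyed.
# The function name and the "effective" label are my own additions.
def unit_status(initial_strength, losses):
    """Classify a unit by fractional personnel/materiel losses."""
    frac = losses / initial_strength
    if frac >= 0.30:
        return "destroyed"
    if frac >= 0.10:
        return "neutralized"
    if frac >= 0.03:
        return "suppressed"
    return "effective"

print(unit_status(1000, 120))  # 12% losses -> "neutralized"
```

The breakpoints studies discussed above suggest that any such fixed, casualty-only rule is at best a crude proxy for actual forced posture changes.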

The joint U.S. Army and U.S. Marine Corps vision of future land combat foresees battlefields that are highly lethal and demanding on human endurance. How will such a future operational environment affect combat performance? Past experience undoubtedly offers useful insights but there seems to be little interest in seeking out such knowledge.

Trevor Dupuy criticized the U.S. military in the 1980s for its lack of understanding of the phenomenon of suppression and other effects of fire on the battlefield, and its seeming disinterest in studying it. Not much appears to have changed since then.

‘Love’s Tables’: U.S. War Department Casualty Estimation in World War II

The same friend of TDI who asked about “Evett’s Rates,” the British casualty estimation methodology during World War II, also mentioned that the work of Albert G. Love III was now available on-line. Rick Atkinson also referenced “Love’s Tables” in The Guns At Last Light.

In 1931, Lieutenant Colonel (later Brigadier General) Love, then a Medical Corps physician in the U.S. Army Medical Field Services School, published a study of American casualty data in the recent Great War, titled “War Casualties.”[1] This study was likely the source for tables used for casualty estimation by the U.S. Army through 1944.[2]

Love, who had no advanced math or statistical training, undertook his study with the support of the Army Surgeon General, Merritte W. Ireland, and initial assistance from Dr. Lowell J. Reed, a professor of biostatistics at Johns Hopkins University. Love’s posting in the Surgeon General’s Office afforded him access to an array of casualty data collected from the records of the American Expeditionary Forces in France, as well as data from annual Surgeon General reports dating back to 1819, the official medical history of the U.S. Civil War, and U.S. general population statistics.

Love’s research was likely the basis for rate tables for calculating casualties that first appeared in the 1932 edition of the War Department’s Staff Officer’s Field Manual.[3]

Battle Casualties, including Killed, in Percent of Unit Strength, Staff Officer’s Field Manual (1932).

The 1932 Staff Officer’s Field Manual estimation methodology reflected Love’s sophisticated understanding of the factors influencing combat casualty rates. It showed that both the resistance and combat strength (and all of the factors that comprised it) of the enemy, as well as the equipment and state of training and discipline of the friendly troops had to be taken into consideration. The text accompanying the tables pointed out that loss rates in small units could be quite high and variable over time, and that larger formations took fewer casualties as a fraction of overall strength, and that their rates tended to become more constant over time. Casualties were not distributed evenly, but concentrated most heavily among the combat arms, and in the front-line infantry in particular. Attackers usually suffered higher loss rates than defenders. Other factors to be accounted for included the character of the terrain, the relative amount of artillery on each side, and the employment of gas.

The 1941 iteration of the Staff Officer’s Field Manual, now designated Field Manual (FM) 101-10[4], provided two methods for estimating battle casualties. It included the original 1932 Battle Casualties table, but the associated text no longer included the section outlining factors to be considered in calculating loss rates. This passage was moved to a note appended to a new table showing the distribution of casualties among the combat arms.

Rather confusingly, FM 101-10 (1941) presented a second table, Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength. It included rates for front line regiments and divisions, corps and army units, reserves, and attached cavalry. The rates were broken down by posture and tactical mission.

Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength, FM 101-10 (1941)

Neither the source for this table nor the method by which it was derived is known. No explanatory text accompanied it, but a footnote stated that “this table is intended primarily for use in school work and in field exercises.” The rates in it were weighted toward the upper range of the figures provided in the 1932 Battle Casualties table.

The October 1943 edition of FM 101-10 contained no significant changes from the 1941 version, except for the caveat that the 1932 Battle Casualties table “may or may not prove correct when applied to the present conflict.”

The October 1944 version of FM 101-10 incorporated data obtained from World War II experience.[5] While it also noted that the 1932 Battle Casualties table might not be applicable, the experiences of the U.S. II Corps in North Africa and one division in Italy were found to be in agreement with the table’s division and corps loss rates.

FM 101-10 (1944) included another new table, Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), meaning that it now provided three distinct methods for estimating battle casualties.

Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), FM 101-10 (1944)

Like the 1941 Estimated Daily Losses in Campaign table, the sources for this new table were not provided, and the text contained no guidance as to how or when it should be used. The rates it contained fell roughly within the range of daily rates for severe (6-8%) to maximum (12%) combat listed in the 1932 Battle Casualty table, but if applied consistently they would produce overall rates vastly higher than the 1932 table’s 1% daily average.
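The gap between a 1% average daily rate and the 6-12% range is easy to underestimate because daily rates compound. A back-of-the-envelope sketch (the constant-rate compounding model is my simplification, not a method from the manual):

```python
# Back-of-the-envelope check of the point above: a constant daily loss rate,
# applied each day to the surviving strength, compounds quickly. The rates are
# those quoted from the 1932 table; the compounding model is my simplification.
def cumulative_loss_fraction(daily_rate, days):
    """Fraction of the original force lost after `days` at a constant daily rate."""
    return 1.0 - (1.0 - daily_rate) ** days

for rate in (0.01, 0.06, 0.12):  # "average", "severe", "maximum" daily rates
    lost = cumulative_loss_fraction(rate, 30)
    print(f"{rate:.0%}/day over 30 days -> {lost:.1%} of the force lost")
```

Over a 30-day campaign, 1% per day loses about a quarter of the force, while 6% per day loses more than four fifths of it, which is why applying the higher table rates consistently is untenable.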

FM 101-10 (1944) included a table showing the distribution of losses by branch for the theater based on experience to that date, except for combat in the Philippine Islands. The new chart was used in conjunction with the 1944 Estimate of Battle Losses for a Front-Line Division table to determine daily casualty distribution.

Distribution of Battle Losses–Theater of Operations, FM 101-10 (1944)

The final World War II version of FM 101-10 issued in August 1945[6] contained no new casualty rate tables, nor any revisions to the existing figures. It did finally effectively invalidate the 1932 Battle Casualties table by noting that “the following table has been developed from American experience in active operations and, of course, may not be applicable to a particular situation.” (original emphasis)

NOTES

[1] Albert G. Love, War Casualties, The Army Medical Bulletin, No. 24, (Carlisle Barracks, PA: 1931)

[2] This post is adapted from TDI, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 314-317).

[3] U.S. War Department, Staff Officer’s Field Manual, Part Two: Technical and Logistical Data (Government Printing Office, Washington, D.C., 1932)

[4] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., June 15, 1941)

[5] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., October 12, 1944)

[6] U.S. War Department, FM 101-10 Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., August 1, 1945)