[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]
[11] Five of the 13 counted as unknown were penetrated by both armor piercing shot and by infantry hollow charge weapons. There was no evidence to indicate which was the original cause of the loss.
[12] From ORS Report No. 17
[13] From ORS Report No. 15. The “Pocket” was the area west of the line Falaise-Argentan and east of the line Vassy-Gets-Domfront in Normandy that was the site in August 1944 of the beginning of the German retreat from France. The German forces were being enveloped from the north and south by Allied ground forces and were under constant, heavy air attack.
German Army 150mm heavy field howitzer 18 L/29.5 battery. [Panzer DB/Pinterest]
Curiously, at Kursk, in the case where the highest percent loss was recorded, the German forces opposing the Soviet 1st Tank Army—mainly the XLVIII Panzer Corps of the Fourth Panzer Army—were supported by proportionately fewer artillery pieces (approximately 56 guns and rocket launchers per division) than the US 1st Infantry Division at Dom Bütgenbach (the equivalent of approximately 106 guns per division).[4] Nor does it appear that the German rate of fire at Kursk was significantly higher than that of the American artillery at Dom Bütgenbach. On 20 July at Kursk, the 150mm howitzers of the 11th Panzer Division achieved a peak rate of fire of 87.21 rounds per gun. On 21 December at Dom Bütgenbach, the 155mm howitzers of the 955th Field Artillery Battalion achieved a peak rate of fire of 171.17 rounds per gun.[5]
NOTES
[4] The US artillery at Dom Bütgenbach peaked on 21 December 1944 when a total of 210 divisional and corps pieces fired over 10,000 rounds in support of the 1st Division’s 26th Infantry.
[5] Data collected on German rates of fire are fragmentary, but appear to be similar to those of the American Army in World War II. An article on artillery rates of fire that explores the data in more detail will be forthcoming in a future issue of this newsletter. [NOTE: This article was not completed or published.]
Notes to Table I.
[8] The data were found in reports of the 1st Tank Army (Fond 299, Opis‘ 3070, Delo 226). Obvious math errors in the original document have been corrected (the total lost column did not always agree with the totals by cause). The total participated column evidently reflected the starting strength of the unit, plus replacement vehicles. “Burned” in Soviet wartime documents usually indicated a total loss; however, it appears that in this case “burned” denoted vehicles totally lost due to direct fire antitank weapons. “Breakdown” apparently included both mechanical breakdown and repairable combat damage.
[9] Note that the brigade report (Fond 3304, Opis‘ 1, Delo 24) contradicts the army report. The brigade reported that a total of 28 T-34s were lost (9 to aircraft and 19 to “artillery”) and one T-60 was destroyed by a mine. However, this report was made on 11 July, during the battle, and may not have been as precise as the later report recorded by 1st Tank Army. Furthermore, it is not as clear in the brigade report that “artillery” referred only to indirect fire HE and not simply to both direct and indirect fire guns.
The effectiveness of artillery against exposed personnel and other “soft” targets has long been accepted. Fragments and blast are deadly to those unfortunate enough to not be under cover. What has also long been accepted is the relative—if not total—immunity of armored vehicles when exposed to shell fire. In a recent memorandum, the United States Army Armor School disputed the results of tests of artillery versus tanks by stating, “…the Armor School nonconcurred with the Artillery School regarding the suppressive effects of artillery…the M-1 main battle tank cannot be destroyed by artillery…”
This statement may in fact be true,[1] if the advancement of armored vehicle design has greatly exceeded the advancement of artillery weapon design in the last fifty years. [Original emphasis] However, if the statement is not true, then recent research by TDI[2] into the effectiveness of artillery shell fire versus tanks in World War II may be illuminating.
The TDI search found that an average of 12.8 percent of tank and other armored vehicle losses[3] were due to artillery fire in seven cases in World War II where the cause of loss could be reliably identified. The highest percent loss due to artillery was found to be 14.8 percent in the case of the Soviet 1st Tank Army at Kursk (Table II). The lowest percent loss due to artillery was found to be 5.9 percent in the case of Dom Bütgenbach (Table VIII).
The seven cases are split almost evenly between those that show armor losses to a defender and those that show losses to an attacker. The first four cases (Kursk, Normandy I, Normandy II, and the “Pocket”) are engagements in which the side for which armor losses were recorded was on the defensive. The last three cases (Ardennes, Krinkelt, and Dom Bütgenbach) are engagements in which the side for which armor losses were recorded was on the offensive.
Four of the seven cases (Normandy I, Normandy II, the “Pocket,” and Ardennes) represent data collected by operations research personnel utilizing rigid criteria for the identification of the cause of loss. Specific causes of loss were only given when the primary destructive agent could be clearly identified. The other three cases (Kursk, Krinkelt, and Dom Bütgenbach) are based upon combat reports that—of necessity—represent less precise data collection efforts.
However, the similarity in results remains striking. The largest identifiable cause of tank loss found in the data was, predictably, high-velocity armor piercing (AP) antitank rounds. AP rounds were found to be the cause of 68.7 percent of all losses. Artillery was second, responsible for 12.8 percent of all losses. Air attack as a cause was third, accounting for 7.4 percent of the total lost. Unknown causes, which included losses due to hits from multiple weapon types as well as unidentified weapons, inflicted 6.3 percent of the losses and ranked fourth. Other causes, which included infantry antitank weapons and mines, were responsible for 4.8 percent of the losses and ranked fifth.
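The five cause-of-loss shares reported above are exhaustive for the seven-case sample, so they should account for all losses. A minimal sketch that tabulates them and confirms the arithmetic (the category names and figures are taken from the text; the check itself is purely illustrative):

```python
# Cause-of-loss shares for the seven WWII cases, as percentages of all
# armored vehicle losses where a cause could be identified (figures from
# the text above).
loss_shares = {
    "AP antitank rounds": 68.7,
    "Artillery": 12.8,
    "Air attack": 7.4,
    "Unknown/multiple weapons": 6.3,
    "Other (infantry AT, mines)": 4.8,
}

# The categories are exhaustive, so the shares should sum to 100 percent.
total = sum(loss_shares.values())
assert abs(total - 100.0) < 0.1, total

# Rank the causes from largest to smallest share, as in the text.
ranked = sorted(loss_shares, key=loss_shares.get, reverse=True)
```

As expected, AP antitank rounds head the ranking and artillery comes second, matching the ordering given in the prose.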
NOTES
[1] The statement may be true, although it has an “unsinkable Titanic” ring to it. It is much more likely that this statement is a hypothesis, rather than a truism.
[2] As part of this article a survey of the Research Analysis Corporation’s publications list was made in an attempt to locate data from previous operations research on the subject. A single reference to the study of tank losses was found: Alvin D. Coox and L. Van Loan Naisawald, Survey of Allied Tank Casualties in World War II, CONFIDENTIAL ORO Report T-117, 1 March 1951.
[3] The percentage loss by cause excludes vehicles lost due to mechanical breakdown or abandonment. If these were included, they would account for 29.2 percent of the total lost. However, 271 of the 404 (67.1%) abandoned were lost in just two of the cases. These two cases (Normandy II and the Falaise Pocket) cover the period in the Normandy Campaign when the Allies broke through the German defenses and began the pursuit across France.
I have taken a look in previous posts at how the historical relationship identified by Trevor Dupuy between weapon lethality, battlefield dispersion, and casualty rates argues against this assumption with regard to personnel attrition and tank loss rates. What about artillery loss rates? Will long-range precision fires make ground-based long-range precision fire platforms themselves more vulnerable? Historical research suggests that trend was already underway before the advent of the new technology.
In 1976, Trevor Dupuy and the Historical Evaluation and Research Organization (HERO; one of TDI’s corporate ancestors) conducted a study sponsored by Sandia National Laboratory titled “Artillery Survivability in Modern War.” (PDF) The study focused on looking at historical artillery loss rates and the causes of those losses. It drew upon quantitative data from the 1973 Arab-Israeli War, the Korean War, and the Eastern Front during World War II.
Conclusions
1. In the early wars of the 20th Century, towed artillery pieces were relatively invulnerable, and they were rarely severely damaged or destroyed except by very infrequent direct hits.
2. This relative invulnerability of towed artillery resulted in general lack of attention to the problems of artillery survivability through World War II.
3. The lack of effective hostile counter-artillery resources in the Korean and Vietnam wars contributed to continued lack of attention to the problem of artillery survivability, although increasingly armies (particularly the US Army) were relying on self-propelled artillery pieces.
4. Estimated Israeli loss statistics of the October 1973 War suggest that because of size and characteristics, self-propelled artillery is more vulnerable to modern counter-artillery means than was towed artillery in that and previous wars; this greater historical physical vulnerability of self-propelled weapons is consistent with recent empirical testing by the US Army.
5. The increasing physical vulnerability of modern self-propelled artillery weapons is compounded by other modern combat developments, including:
a. Improved artillery counter-battery techniques and resources;
b. Improved accuracy of air-delivered munitions;
c. Increased lethality of modern artillery ammunition; and
d. Increased range of artillery and surface-to-surface missiles suitable for use against artillery.
6. Despite this greater vulnerability of self-propelled weapons, Israeli experience in the October war demonstrated that self-propelled artillery not only provides significant protection to cannoneers but also that its inherent mobility permits continued effective operation under circumstances in which towed artillery crews would be forced to seek cover, and thus be unable to fire their weapons.
7. Paucity of available processed, compiled data on artillery survivability and vulnerability limits analysis and the formulation of reliable artillery loss experience tables or formulae.
8. Tentative analysis of the limited data available for this study indicates the following:
a. In “normal” deployment, percent weapon losses by standard weight classification are in the following proportions:
b. Towed artillery losses to hostile artillery (counterbattery) appear in general to vary directly with battle intensity (as measured by percent personnel casualties per day), at a rate somewhat less than half of the percent personnel losses for units of army strength or greater; this is a straight-line relationship, or close to it; the stronger or more effective the hostile artillery is, the steeper the slope of the curve;
c. Towed artillery losses to all hostile anti-artillery means appear in general to vary directly with battle intensity at a rate about two-thirds of the percent personnel losses for units of army strength or greater; the curve rises slightly more rapidly in high intensity combat than in normal or low-intensity combat; the stronger or more effective the hostile anti-artillery means (primarily air and counter-battery), the steeper the slope of the curve;
d. Self-propelled artillery losses appear to be generally consistent with towed losses, but at rates at least twice as great in comparison to battle intensity.
9. There are available in existing records of US and German forces in World War II, and US forces in the Korean and Vietnam Wars, unit records and reports that will permit the formulation of reliable artillery loss experience tables and formulae for those conflicts; these, together with currently available (and probably improved) data from the Arab-Israeli wars, will permit the formulation of reliable artillery loss experience tables and formulae for simulations of modern combat under current and foreseeable future conditions.
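Findings 8.b through 8.d describe roughly linear relationships between battle intensity (percent personnel casualties per day) and percent artillery losses. The sketch below illustrates that structure only; the slope values 0.45 and 0.67 and the 2.0 multiplier are my own placeholders standing in for the study's qualitative "somewhat less than half," "about two-thirds," and "at least twice":

```python
def towed_losses_counterbattery(pct_personnel_casualties_per_day: float,
                                slope: float = 0.45) -> float:
    """Finding 8.b: percent towed artillery losses per day to hostile
    counterbattery fire vary directly with battle intensity, at a rate
    'somewhat less than half' of percent personnel losses. The default
    slope of 0.45 is an illustrative placeholder, not a study result."""
    return slope * pct_personnel_casualties_per_day


def towed_losses_all_means(pct_personnel_casualties_per_day: float,
                           slope: float = 0.67) -> float:
    """Finding 8.c: towed losses to all hostile anti-artillery means,
    at 'about two-thirds' of percent personnel losses (0.67 is an
    illustrative placeholder)."""
    return slope * pct_personnel_casualties_per_day


def sp_losses_all_means(pct_personnel_casualties_per_day: float,
                        multiplier: float = 2.0) -> float:
    """Finding 8.d: self-propelled losses follow the same pattern as
    towed losses, but at rates 'at least twice' as great relative to
    battle intensity."""
    return multiplier * towed_losses_all_means(pct_personnel_casualties_per_day)
```

A stronger or more effective hostile anti-artillery capability would, per the findings, be modeled by steepening the slope parameters rather than changing the linear form.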
The study caveated these conclusions with the following observations:
Most of the artillery weapons in World War II were towed weapons. By the time the United States had committed small but significant numbers of self-propelled artillery pieces in Europe, German air and artillery counter-battery retaliatory capabilities had been significantly reduced. In the Korean and Vietnam wars, although most American artillery was self-propelled, the enemy had little counter-artillery capability either in the air or in artillery weapons and counter-battery techniques.
It is evident from vulnerability testing of current Army self-propelled weapons, that these weapons, while offering much more protection to cannoneers and providing tremendous advantages in mobility, are much more vulnerable to hostile action than are towed weapons, and that they are much more subject to mechanical breakdowns involving either the weapons mountings or the propulsion elements. Thus there cannot be a direct relationship between aggregated World War II data, or even aggregated Korean War or October War data, and current or future artillery configurations. On the other hand, the body of data from the October war where artillery was self-propelled is too small and too specialized by environmental and operational circumstances to serve alone as a paradigm of artillery vulnerability.
Despite the intriguing implications of this research, HERO’s proposal for follow-on work was not funded. HERO only used easily accessible primary and secondary source data for the study. It noted that much more primary source data was likely available, but that it would require a significant research effort to compile it. (Research is always the expensive tent-pole in quantitative historical analysis. This seems to be why so little of it ever gets funded.) At the time of the study in 1976, no U.S. Army organization could identify any existing quantitative historical data or analysis on artillery losses, classified or otherwise. A cursory search on the Internet reveals no other such research either. Like personnel attrition and tank loss rates, it would seem that artillery loss rates would be another worthwhile subject for quantitative analysis as part of the ongoing effort to develop the MDB concept.
There is probably no obscurity of combat requiring clarification and understanding more urgently than that of suppression… Suppression usually is defined as the effect of fire (primarily artillery fire) upon the behavior of hostile personnel, reducing, limiting, or inhibiting their performance of combat duties. Suppression lasts as long as the fires continue and for some brief, indeterminate period thereafter. Suppression is the most important effect of artillery fire, contributing directly to the ability of the supported maneuver units to accomplish their missions while preventing the enemy units from accomplishing theirs. (p. 251)
Official US Army field artillery doctrine makes a distinction between “suppression” and “neutralization.” Suppression is defined to be instantaneous and fleeting; neutralization, while also temporary, is relatively longer-lasting. Neutralization, the doctrine says, results when suppressive effects are so severe and long-lasting that a target is put out of action for a period of time after the suppressive fire is halted. Neutralization combines the psychological effects of suppressive gunfire with a certain amount of damage. The general concept of neutralization, as distinct from the more fleeting suppression, is a reasonable one. (p. 252)
Despite widespread acknowledgement of the existence of suppression and neutralization, the lack of interest in analyzing its effects was a source of professional frustration for Dupuy. As he commented in 1989,
The British did some interesting but inconclusive work on suppression in their battlefield operations research in World War II. In the United States I am aware of considerable talk about suppression, but very little accomplishment, over the past 20 years. In the light of the significance of suppression, our failure to come to grips with the issue is really quite disgraceful.
This lack of interest is curious, given that suppression and neutralization remain embedded in U.S. Army combat doctrine to this day. The current Army definitions are:
Suppression – In the context of the computed effects of field artillery fires, renders a target ineffective for a short period of time producing at least 3-percent casualties or materiel damage. [Army Doctrine Reference Publication (ADRP) 1-02, Terms and Military Symbols, December 2015, p. 1-87]
Neutralization – In the context of the computed effects of field artillery fires renders a target ineffective for a short period of time, producing 10-percent casualties or materiel damage. [ADRP 1-02, p. 1-65]
A particular source for Dupuy’s irritation was the fact that these definitions were likely empirically wrong. As he argued in Understanding War,
This is almost certainly the wrong way to approach quantification of neutralization. Not only is there no historical evidence that 10% casualties are enough to achieve this effect, there is no evidence that any level of losses is required to achieve the psycho-physiological effects of suppression or neutralization. Furthermore, the time period in which casualties are incurred is probably more important than any arbitrary percentage of loss, and the replacement of casualties and repair of damage are probably irrelevant. (p. 252)
Thirty years after Dupuy pointed this problem out, the construct remains enshrined in U.S. doctrine, unquestioned and unsubstantiated. Dupuy himself was convinced that suppression probably had little, if anything, to do with personnel loss rates.
I believe now that suppression is related to and probably a component of disruption caused by combat processes other than surprise, such as a communications failure. Further research may reveal, however, that suppression is a very distinct form of disruption that can be measured or estimated quite independently of disruption caused by any other phenomenon. (Understanding War, p. 251)
He had developed a hypothesis for measuring the effects of suppression, but was unable to interest anyone in the U.S. government or military in sponsoring a study on it. Suppression as a combat phenomenon remains only vaguely understood.
Numbers matter in war and warfare. Armies cannot function effectively without reliable counts of manpower, weapons, supplies, and losses. Wars, campaigns, and battles are waged or avoided based on assessments of relative numerical strength. Possessing superior numbers, either overall or at the decisive point, is a commonly held axiom (if not a guarantor) for success in warfare.
These numbers of war likewise inform the judgements of historians. They play a large role in shaping historical understanding of who won or lost, and why. Armies and leaders possessing a numerical advantage are expected to succeed, and thus come under exacting scrutiny when they do not. Commanders and combatants who win in spite of inferiorities in numbers are lauded as geniuses or elite fighters.
Given the importance of numbers in war and history, however, it is surprising to see how often historians treat quantitative data carelessly. All too often, for example, historical estimates of troop strength are presented uncritically and often rounded off, apparently for simplicity’s sake. Otherwise careful scholars are not immune from the casual or sloppy use of numbers.
However, just as careless treatment of qualitative historical evidence results in bad history, the same goes for mishandling quantitative data. To be sure, like any historical evidence, quantitative data can be imprecise or simply inaccurate. Thus, as with any historical evidence, it is incumbent upon historians to analyze the numbers they use with methodological rigor.
OK, with that bit of throat-clearing out of the way, let me now proceed to jump into one of the greatest quantitative morasses in military historiography: strengths and losses in the American Civil War. Participants, pundits, and scholars have been arguing endlessly over numbers since before the war ended. And since nothing seems to get folks more riled up when debating Civil War numbers than arguing about the merits (or lack thereof) of Union General George B. McClellan, I am eventually going to add him to the mix as well.
The reason I am grabbing these dual lightning rods is to illustrate the challenges of quantitative data and historical analysis by looking at one of Trevor Dupuy’s favorite historical case studies, the Battle of Antietam (or Sharpsburg, for the unreconstructed rebels lurking out there). Dupuy cited his analysis of the battle in several of his books, mainly as a way of walking readers through his Quantified Judgement Method of Analysis (QJMA), and to demonstrate his concept of combat multipliers.
I have questions about his Antietam analysis that I will address later. To begin, however, I want to look at the force strength numbers he used. On p. 156 of Numbers, Predictions and War, he provided the following figures for the opposing armies at Antietam. The sources he cited for these figures were R. Ernest Dupuy and Trevor N. Dupuy, The Compact History of the Civil War (New York: Hawthorn, 1960) and Thomas L. Livermore, Numbers and Losses of the Civil War (reprint, Bloomington: University of Indiana, 1957).
It is with Livermore that I will begin tracing the historical and historiographical mystery of how many Confederates fought at the Battle of Antietam.
[This post was originally published on 1 December 2017.]
How many troops are needed to successfully attack or defend on the battlefield? There is a long-standing rule of thumb that holds that an attacker requires a 3-1 preponderance over a defender in combat in order to win. The aphorism is so widely accepted that few have questioned whether it is actually true or not.
Trevor Dupuy challenged the validity of the 3-1 rule on empirical grounds. He could find no historical substantiation to support it. In fact, his research on the question of force ratios suggested that there was a limit to the value of numerical preponderance on the battlefield.
TDI President Chris Lawrence has also challenged the 3-1 rule in his own work on the subject.
The validity of the 3-1 rule is no mere academic question. It underpins a great deal of U.S. military policy and warfighting doctrine. Yet, the only time the matter was seriously debated was in the 1980s with reference to the problem of defending Western Europe against the threat of Soviet military invasion.
It is probably long past due to seriously challenge the validity and usefulness of the 3-1 rule again.
This series of posts was based on the article “Iranian Casualties in the Iran-Iraq War: A Reappraisal,” by H. W. Beuttel, originally published in the December 1997 edition of the International TNDM Newsletter. Mr. Beuttel, a former U.S. Army intelligence officer, was employed as a military analyst by Boeing Research & Development at the time of original publication. He also authored several updates to this original article, to be posted at a later date, which refined and updated his analysis.
Flank or rear attack is more likely to succeed than frontal attack. Among the many reasons for this are the following: there is greater opportunity for surprise by the attacker; the defender cannot be strong everywhere at once, and the front is the easiest focus for defensive effort; and the morale of the defender tends to be shaken when the danger of encirclement is evident. Again, historical examples are numerous, beginning with Hannibal’s tactical plans and brilliant executions of the Battles of Lake Trasimene and Cannae. Any impression that the concept of envelopment or of a “strategy of indirect approach” has arisen either from the introduction of modern weapons of war, or from the ruminations of recent writers on military affairs, is a grave misperception of history and underestimates earlier military thinkers.
“Seek the flanks” has been a military adage since antiquity, but its significance was enhanced tremendously when the conoidal bullet of the breech-loading, rifled musket revolutionized warfare in the mid-nineteenth century. This led Moltke to his 1867 observation that the increased deadliness of firepower demanded that the strategic offensive be coupled with tactical defensive, an idea that depended upon strategic envelopment for its accomplishment. This was a basic element of Moltke’s strategy in the 1870 campaign in France. Its tactical manifestations took place at Metz and Sedan; both instances in which the Germans took up defensive positions across the French line of communications to Paris, and the French commanders, forced to attack, were defeated.
The essential emphasis of modern tactics and operational art remains enabling flank or rear attacks on enemy forces in order to obtain decisive results in combat. Will this remain true in the future? The ongoing historical pattern of ground forces dispersing on the battlefield in response to the increasing lethality of weapons seems likely to enhance the steadily increasing non-linear and non-contiguous character of modern battles in both conventional and irregular warfare.
The architects of the U.S. multi-domain battle and operations doctrine seem to anticipate this. Highly dispersed and distributed future battlefields are likely to offer constant, multiple opportunities (and risks) for flank and rear attacks. They are also likely to scramble current efforts to shape and map future battlefield geometry and architecture.
[This post is based on “Iranian Casualties in the Iran-Iraq War: A Reappraisal,” by H. W. Beuttel, originally published in the December 1997 edition of the International TNDM Newsletter.]
If we estimate that at least 5,000,000 troops (about 12% of Iran’s then population) served in the war zone, then the military casualty distribution is not less than the following (Bold indicates the author’s choice from ranges):
Killed in Action/Died of Wounds: 188,000 (156,000-196,000) (17%)
Wounded in Action: 945,000 (754,000-1,110,000) (83%)
Severely Wounded/Disabled: 200,000 (18%) (Note: carve out of total wounded)
Missing in Action: 73,000 (6%) (Note: carve out of total KIA plus several thousand possible defectors/collaborators)
PoW: 39,000-44,000
Total Military Battle Casualties (KIA + WIA): 1,133,000-1,302,000 (28% theater rate)
Possible Non-Battle Military Deaths: 74,000
Non-Battle Military Injuries: No idea.
With Civilian KIA (11,000) and WIA (34,000) and “chemical” (45,000) Total Hostile Action Casualties: 1,223,000
Adding Possible Military Non-Battle Deaths (74,000): 1,297,000
Total Deaths Due to the Imposed War: 273,000 (104% of Pentagon estimate of 262,000)
Of 5,000,000 estimated Iranian combatants (1 million regular army, 2 million Pasdaran, 2 million Baseej)
~ 4% were Killed in Action/Missing in Action
~ 4% were Disabled
~ 13% were Wounded
~ 1% were Non-Battle Deaths
~ 1% were PoWs
Total military losses all known causes ~ 27%
The military battle casualty total percentile (27%) is intermediate between that of World War I (50% ~ British Army) and World War II (13% ~ U.S. Army/U.S. Marine Corps, 22% British Army).[118]
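The running totals above can be cross-checked with a few lines of arithmetic. A minimal sketch using the author's chosen point estimates (all figures come from the list above; the combinations mirror the totals the author reports):

```python
# Author's chosen point estimates from the casualty distribution above.
kia = 188_000              # Killed in Action / Died of Wounds
wia = 945_000              # Wounded in Action
civ_kia, civ_wia = 11_000, 34_000   # civilian killed and wounded
chemical = 45_000          # civilian "chemical" casualties
nonbattle_deaths = 74_000  # possible non-battle military deaths

# Total military battle casualties (KIA + WIA), low end of the quoted range.
battle_casualties = kia + wia
assert battle_casualties == 1_133_000

# Adding civilian KIA, WIA, and chemical casualties gives the
# "Total Hostile Action Casualties" figure.
hostile_action = battle_casualties + civ_kia + civ_wia + chemical
assert hostile_action == 1_223_000

# Adding possible military non-battle deaths gives the final total.
assert hostile_action + nonbattle_deaths == 1_297_000

# "Total Deaths Due to the Imposed War": military KIA + non-battle
# military deaths + civilian KIA, reported as 104% of the Pentagon
# estimate of 262,000.
total_deaths = kia + nonbattle_deaths + civ_kia
assert total_deaths == 273_000
assert round(100 * total_deaths / 262_000) == 104
```

Each intermediate total matches the corresponding figure in the list, which at least confirms the internal consistency of the author's chosen values.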
The author acknowledges the highly speculative nature of much of the data and argument presented above. It is offered as a preliminary starting point to further study. As such, the author would appreciate hearing from anyone with additional data on this subject. In particular he would invite the Government of the Islamic Republic of Iran to provide any information that would corroborate, correct or expand on the information presented in this article.