
‘Evett’s Rates’: British War Office Wastage Tables

Stretcher bearers of the East Surrey Regiment, with a Churchill tank of the North Irish Horse in the background, during the attack on Longstop Hill, Tunisia, 23 April 1943. [Imperial War Museum/Wikimedia]

A friend of TDI queried us recently about a reference in Rick Atkinson’s The Guns at Last Light: The War in Western Europe, 1944-1945 to a British casualty estimation methodology known as “Evett’s Rates.” There are few references to Evett’s Rates online, but as it happens, TDI did find out some details about them for a study on casualty estimation. [1]

British Army staff officers during World War II and the 1950s used a set of look-up tables which listed expected monthly losses in percentage of strength for various arms under various combat conditions. The origin of the tables is not known, but they were officially updated twice, in 1942 by a committee chaired by Major General Evett, and in 1951-1955 by the Army Operations Research Group (AORG).[2]

The methodology was based on staff predictions of one of three levels of operational activity, “Intense,” “Normal,” and “Quiet.” These could be applied to an entire theater, or to individual divisions. The three levels were defined the same way for both the Evett Committee and the AORG rates.

The rates were broken down by arm and rank, and included battle and nonbattle casualties.

Rates of Personnel Wastage Including Both Battle and Non-battle Casualties According to the Evett Committee of 1942. (Percent per 30 days).

The Evett Committee rates were criticized during and after the war. After British forces suffered twice the anticipated casualties at Anzio, the British 21st Army Group applied a “double intense rate” which was twice the Evett Committee figure and intended to apply to assaults. When this led to overestimates of casualties in Normandy, the double intense rate was discarded.

From 1951 to 1955, AORG undertook a study of casualty rates in World War II. Its analysis was based on casualty data from the following campaigns:

  • Northwest Europe, 1944
    • 6-30 June – Beachhead offensive
    • 1 July-1 September – Containment and breakout
    • 1 October-30 December – Semi-static phase
  • 9 February-6 May – Rhine crossing and final phase
  • Italy, 1944
    • January to December – Fighting a relatively equal enemy in difficult country. Warfare often static.
    • January to February (Anzio) – Beachhead held against severe and well-conducted enemy counter-attacks.
  • North Africa, 1943
    • 14 March-13 May – final assault
  • Northwest Europe, 1940
    • 10 May-2 June – Withdrawal of BEF
  • Burma, 1944-45

From the first four cases, the AORG study calculated two sets of battle casualty rates as percentages of strength per 30 days. “Overall” rates included KIA, WIA, and C/MIA. “Apparent” rates included these categories but subtracted troops returning to duty. AORG recommended that “overall” rates be used for the first three months of a campaign.

The Burma campaign data was evaluated differently. The analysts defined a “force wastage” category which included KIA, C/MIA, evacuees from outside the force operating area and base hospitals, and DNBI deaths. “Dead wastage” included KIA, C/MIA, DNBI dead, and those discharged from the Army as a result of injuries.

The AORG study concluded that the Evett Committee underestimated intense loss rates for infantry and armor during periods of very hard fighting and overestimated casualty rates for other arms. It recommended that if only one brigade in a division was engaged, two-thirds of the intense rate should be applied; if two brigades were engaged, the full intense rate should be applied; and if all brigades were engaged, the intense rate should be doubled. It also recommended that 2% extra casualties per month be added to the rates for all activities should the forces encounter heavy enemy air activity.[1]
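These adjustments are simple enough to express directly. Below is a minimal sketch in Python; the base intense rates are hypothetical placeholders, since the actual figures were tabulated by arm and rank (see the table below).

```python
# A minimal sketch of the AORG adjustment rules described above. The base
# intense rates are hypothetical placeholders (percent of strength per 30
# days); the real values were tabulated by arm and rank.

INTENSE_RATE = {"infantry": 25.0, "armour": 15.0, "artillery": 5.0}  # assumed

def adjusted_rate(arm: str, brigades_engaged: int, heavy_air: bool = False) -> float:
    """Scale the intense rate by brigades engaged and add the air surcharge."""
    scale = {1: 2 / 3, 2: 1.0, 3: 2.0}[brigades_engaged]  # AORG recommendation
    rate = INTENSE_RATE[arm] * scale
    if heavy_air:
        rate += 2.0  # +2% per month under heavy enemy air activity
    return rate

# A division with all three brigades engaged under heavy enemy air activity:
print(adjusted_rate("infantry", 3, heavy_air=True))  # 52.0
```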

The AORG study rates were as follows:

Recommended AORG Rates of Personnel Wastage. (Percent per 30 days).

If anyone has further details on the origins and activities of the Evett Committee and AORG, we would be very interested in finding out more on this subject.

NOTES

[1] This post is adapted from The Dupuy Institute, Casualty Estimation Methodologies Study, Interim Report (Altarum, May 2005), pp. 51-53.

[2] Rowland Goodman and Hugh Richardson, “Casualty Estimation in Open and Guerrilla Warfare” (London: Directorate of Science (Land), U.K. Ministry of Defence, June 1995), Appendix A.

TDI Friday Read: Links You May Have Missed, 23 March 2018

To follow on Chris’s recent post about U.S. Army modernization:

On the subject of future combat:

  • The U.S. National Academies of Sciences, Engineering, and Medicine has issued a new report emphasizing the need for developing countermeasures against multiple small unmanned aircraft systems (sUASs) — organized in coordinated groups, swarms, and collaborative groups — which could be used much sooner than the U.S. Army anticipates.  [There is a summary here.]
  • National Defense University’s Frank Hoffman has a very good piece in the current edition of Parameters, “Will War’s Nature Change in the Seventh Military Revolution?,” that explores the potential implications of the combinations of robotics, artificial intelligence, and deep learning systems on the character and nature of war.
  • Major Hassan Kamara has an article in the current edition of Military Review contemplating changes in light infantry, “Rethinking the U.S. Army Infantry Rifle Squad.”

On the topic of how the Army is addressing its current and future challenges with irregular warfare and wide area security:

Perla On Dupuy

Dr. Peter Perla, noted defense researcher, wargame designer and expert, and author of the seminal The Art of Wargaming: A Guide for Professionals and Hobbyists, gave the keynote address at the 2017 Connections Wargaming Conference last August. His speech, which served as his valedictory address on the occasion of his retirement from government service, addressed the predictive power of wargaming. In it, Perla recalled a conversation he once had with Trevor Dupuy in the early 1990s:

Like most good stories, this one has a beginning, a middle, and an end. I have sort of jumped in at the middle. So let’s go back to the beginning.

As it happens, that beginning came during one of the very first Connections. It may even have been the first one. This thread is one of those vivid memories we all have of certain events in life. In my case, it is a short conversation I had with Trevor Dupuy.

I remember the setting well. We were in front of the entrance to the O Club at Maxwell. It was kind of dark, but I can’t recall if it was in the morning before the club opened for our next session, or the evening, before a dinner. Trevor and I were chatting and he said something about wargaming being predictive. I still recall what I said.

“Good grief, Trevor, we can’t even predict the outcome of a Super Bowl game much less that of a battle!” He seemed taken by surprise that I felt that way, and he replied, “Well, if that is true, what are we doing? What’s the point?”

I had my usual stock answers. We wargame to develop insights, to identify issues, and to raise questions. We certainly don’t wargame to predict what will happen in a battle or a war. I was pretty dogmatic in those days. Thank goodness I’m not that way any more!

The question of prediction did not go away, however.

For the rest of Perla’s speech, see here. For a wonderful summary of the entire 2017 Connections Wargaming conference, see here.

 

Artificial Intelligence (AI) And Warfare

Arnold Schwarzenegger and friend. [Image Credit Jordan Strauss/Invision/AP/File]

Humans are a competitive lot. With machines making such rapid progress (see Moore’s Law), the singularity approaches—see the discussion between Michio Kaku and Ray Kurzweil, two prominent futurologists. This is the “hypothesis that the invention of artificial super intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.” (Wikipedia). This was also referred to as general artificial intelligence (GAI) by The Economist, and has previously been discussed in this blog.

We humans also exhibit a tendency to anthropomorphize, to endow observed objects with human qualities. The image above shows Arnold Schwarzenegger sizing up his robotic doppelgänger. This tendency is further evidenced by statements made about the ability of military networks to spontaneously become self-aware:

The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn’t too far-fetched, one of the nation’s top military officers said this week. Nor is that kind of autonomy the stuff of the distant future. ‘We’re a decade or so away from that capability,’ said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.

This exhibits a fundamental fear, and I believe a misconception, about the capabilities of these technologies. This is exemplified by Jay Tuck’s TED talk, “Artificial Intelligence: it will kill us.” His examples of AI in use today include airline and hotel revenue management, aircraft autopilots, and medical imaging. He also holds up the MQ-9 Reaper’s Argus (aka Gorgon Stare) imaging system, as well as the X-47B Pegasus, previously discussed, as examples of modern AI and the pinnacle of its capability. Among several claims, he states that the X-47B has an optical stealth capability, which is inaccurate:

[X-47B], a descendant of an earlier killer drone with its roots in the late 1990s, is possibly the least stealthy of the competitors, owing to Northrop’s decision to build the drone big, thick and tough. Those qualities help it survive forceful carrier landings, but also make it a big target for enemy radars. Navy Capt. Jamie Engdahl, manager of the drone test program, described it as ‘low-observable relevant,’ a careful choice of words copping to the X-47B’s relative lack of stealth. (Emphasis added).

Such discrepancies limit the veracity of these claims. I believe that this is little more than modern fear mongering, playing on ignorance. But Mr. Tuck is not alone. At the forefront of technology, Elon Musk is often held up as an example of commercial success in the field of AI, and he recently addressed the National Governors Association meeting on this topic, specifically on the need for regulation in the commercial sphere.

On the artificial intelligence [AI] front, I have exposure to the most cutting edge AI, and I think people should be really concerned about it. … AI is a rare case, I think we should be proactive in terms of regulation, rather than reactive about it. Because by the time we are reactive about it, it’s too late. … AI is a fundamental risk to human civilization, in a way that car crashes, airplane crashes, faulty drugs or bad food were not. … In space, we get regulated by the FAA. But you know, if you ask the average person, ‘Do you want to get rid of the FAA? Do you want to take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter? Hell no, that sounds terrible.’ Because robots will be able to do everything better than us, and I mean all of us. … We have companies that are racing to build AI, they have to race otherwise they are going to be made uncompetitive. … When the regulators are convinced it is safe then we can go, but otherwise, slow down.  [Emphasis added]

Mr. Musk also hinted at American exceptionalism: “America is the distillation of the human spirit of exploration.” Indeed, the link between military technology and commercial applications is an ongoing virtuous cycle. But the kind of regulation that exists in the commercial sphere, within the national, subnational, and local governments of humankind, does not apply so easily in the field of warfare, where no single authority exists. Any agreement to limit technology is consensus-based, such as a treaty.

The husky was mistakenly classified as a wolf because the classifier had learned to use snow as a feature. [Machine Master blog]

In a recent TEDx talk, Peter Haas describes his work in AI and some of the challenges that exist within the state of the art of this technology. As illustrated above, when asked to distinguish between a wolf and a dog, the machine classified the husky in the photo as a wolf. The humans developing the AI system did not know why this happened, so they asked the system to show the regions of the image that were used to make the decision; the result is depicted on the right side of the image. The fact that this dog was photographed with snow in the background is a form of bias – the fact that snow exists in a photo does not yield any conclusive proof that a particular animal is a dog or a wolf.
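For readers curious how such an explanation is produced, below is a minimal sketch using the LIME library, which popularized the wolf/husky example. The classifier function and input image are hypothetical stand-ins, not the actual model from the study.

```python
# A minimal sketch of superpixel-based explanation with LIME. The classifier
# and image below are hypothetical stand-ins for the wolf/husky model.
import numpy as np
from lime import lime_image

def predict_fn(images):
    # Stand-in classifier: returns [P(dog), P(wolf)] for each image.
    # A real use would call a trained model here.
    return np.tile([0.3, 0.7], (len(images), 1))

image = np.random.rand(224, 224, 3)  # stand-in for the husky photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=100)

# Recover the superpixels that drove the top prediction. In the wolf/husky
# case, these turned out to be the snowy background.
label = explanation.top_labels[0]
img_masked, mask = explanation.get_image_and_mask(label, positive_only=True)
```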

Right now there are people – doctors, judges, accountants – who are getting information from an AI system and treating it like it was information from a trusted colleague. It is this trust that bothers me. Not because of how often AI gets it wrong; AI researchers pride themselves on the accuracy of results. It is how badly it gets it wrong when it makes a mistake that has me worried. These systems do not fail gracefully.

AI systems clearly have drawbacks, but they also have significant advantages, such as in the curation of a shared model of the battlefield.

In a paper for the Royal Institute of International Affairs in London, Mary Cummings of Duke University says that an autonomous system perceives the world through its sensors and reconstructs it to give its computer ‘brain’ a model of the world which it can use to make decisions. The key to effective autonomous systems is ‘the fidelity of the world model and the timeliness of its updates.‘ [Emphasis added]

Perhaps AI systems might best be employed in the cyber domain, where their advantages are naturally “at home”? Mr. Haas noted that machines at the current time have a tough time doing simple physical tasks, like opening a door. As was covered in this blog, former Deputy Defense Secretary Robert Work noted this same problem, and thus called for man-machine teaming as one of the key areas of pursuit within the Third Offset Strategy.

Just as the previous blog post illustrates, “the quality of military men is what wins wars and preserves nations.” Let’s remember Paul Van Riper’s performance in Millennium Challenge 2002:

Red, commanded by retired Marine Corps Lieutenant General Paul K. Van Riper, adopted an asymmetric strategy, in particular, using old methods to evade Blue’s sophisticated electronic surveillance network. Van Riper used motorcycle messengers to transmit orders to front-line troops and World-War-II-style light signals to launch airplanes without radio communications. Red received an ultimatum from Blue, essentially a surrender document, demanding a response within 24 hours. Thus warned of Blue’s approach, Red used a fleet of small boats to determine the position of Blue’s fleet by the second day of the exercise. In a preemptive strike, Red launched a massive salvo of cruise missiles that overwhelmed the Blue forces’ electronic sensors and destroyed sixteen warships.

We should learn lessons about overreliance on technology. AI systems are incredibly fickle, but they offer incredible capabilities. We should question and inspect the results of such systems. They do not exhibit emotions, they are not self-aware, and they do not spontaneously ask questions unless specifically programmed to do so. We should recognize their significant limitations and use them in conjunction with humans, who will retain command decisions for the foreseeable future.

Technology And The Human Factor In War

A soldier waves an Israeli flag on the Golan front during the 1973 Yom Kippur War. (IDF Spokesperson’s unit, Jerusalem Report Archives)

[The article below is reprinted from the August 1997 edition of The International TNDM Newsletter.]

Technology and the Human Factor in War
by Trevor N. Dupuy

The Debate

It has become evident to many military theorists that technology has become increasingly important in war. In fact (even though many soldiers would not like to admit it) most such theorists believe that technology has actually reduced the significance of the human factor in war. In other words, the more advanced our military technology, these “technocrats” believe, the less we need to worry about the professional capability and competence of generals, admirals, soldiers, sailors, and airmen.

The technocrats believe that the results of the Kuwait, or Gulf, War of 1991 have confirmed their conviction. They cite the contribution to those results of the U.N. (mainly U.S.) command of the air, stealth aircraft, sophisticated guided missiles, and general electronic superiority. They believe that it was technology which simply made irrelevant the recent combat experience of the Iraqis in their long war with Iran.

Yet there are a few humanist military theorists who believe that the technocrats have totally misread the lessons of this century’s wars! They agree that technology was important in the overwhelming U.N. victory, but hold that the principal reason for the tremendous margin of U.N. superiority was the better training, skill, and dedication of U.N. forces (again, mainly U.S.).

And so the debate rests. Both sides believe that the result of the Kuwait War favors their point of view. Nevertheless, an objective assessment of the literature in professional military journals, of doctrinal trends in the U.S. services, and (above all) of trends in the U.S. defense budget suggests that the technocrats have stronger arguments than the humanists—or at least have been more convincing in presenting their arguments.

I suggest, however, that a completely impartial comparison of the Kuwait War results with those of other recent wars, and with some of the phenomena of World War II, shows that the humanists should not yet concede the debate.

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations.

To elevate the debate beyond generalities, and to demonstrate convincingly that the human factor is at least as important as technology in war, I shall review eight instances in this past century when a military force has been successful because of the quality of its people, even though the other side was at least equal or superior in the technological sophistication of its weapons. The examples I shall use are:

  • Germany vs. the USSR in World War II
  • Germany vs. the West in World War II
  • Israel vs. Arabs in 1948, 1956, 1967, 1973 and 1982
  • The Vietnam War, 1965-1973
  • Britain vs. Argentina in the Falklands 1982
  • South Africans vs. Angolans and Cubans, 1987-88
  • The U.S. vs. Iraq, 1991

The demonstration will be based upon a marshaling of historical facts, followed by analysis of those facts by means of a little simple arithmetic.

Relative Combat Effectiveness Value (CEV)

The purpose of the arithmetic is to calculate relative combat effectiveness values (CEVs) of two opposing military forces. Let me digress to set up the arithmetic. Although some people who hail from south of the Mason-Dixon Line may be reluctant to accept the fact, statistics prove that the fighting quality of Northern soldiers and Southern soldiers was virtually equal in the American Civil War. (I invite those who might disagree to look at Livermore’s Numbers and Losses in the Civil War). That assumption of equality of the opposing troop quality in the Civil War enables me to assert that the successful side in every important battle in the Civil War was successful either because of numerical superiority or superior generalship. Three of Lee’s battles make the point:

  • Despite being outnumbered, Lee won at Antietam. (Though Antietam is sometimes claimed as a Union victory, Lee, the defender, held the battlefield; McClellan, the attacker, was repulsed.) The main reason for Lee’s success was that on a scale of leadership his generalship was worth 10, while McClellan was barely a 6.
  • Despite being outnumbered, Lee won at Chancellorsville because he was a 10 to Hooker’s 5.
  • Lee lost at Gettysburg mainly because he was outnumbered. Also relevant: Meade did not lose his nerve (as McClellan and Hooker had), his generalship worth an 8 to match Lee’s 8.

Let me use Antietam to show the arithmetic involved in those simple analyses of a rather complex subject:

The numerical strength of McClellan’s army was 89,000; Lee’s army was only 39,000 strong, but had the multiplier benefit of defensive posture. This enables us to calculate the theoretical combat power ratio of the Union Army to the Confederate Army as 1.4:1.0. In other words, with a substantial preponderance of force, the Union Army should have been successful. (The combat power ratio of Confederates to Northerners, of course, was the reciprocal, or 0.71:1.0.)

However, Lee held the battlefield, and a calculation of the actual combat power ratio of the two sides (based on accomplishment of mission, gaining or holding ground, and casualties) yields a scant but clear-cut 1.16:1.0 in favor of the Confederates. The ratio of the actual combat power ratio of the Confederate and Union armies (1.16) to their theoretical combat power ratio (0.71) gives us a value of 1.63. This is the relative combat effectiveness of Lee’s army to McClellan’s army on that bloody day. But, if we agree that the quality of the troops was the same, then the differential must essentially be in the quality of the opposing generals. Thus, Lee was a 10 to McClellan’s 6.

The simple arithmetic equation[1] on which the above analysis was based is as follows:

CEV = (R/R′) / (P/P′)

where:
CEV is the relative Combat Effectiveness Value,
R/R′ is the actual combat power ratio, and
P/P′ is the theoretical combat power ratio.

At Antietam the equation was: CEV = 1.16/0.71 = 1.63.

We’ll be revisiting that equation in connection with each of our examples of the relative importance of technology and human factors.
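The Antietam arithmetic can be reproduced in a few lines. The sketch below uses the article’s rounded figures; the defensive-posture multiplier behind the 1.4:1.0 theoretical ratio is internal to Dupuy’s QJM/TNDM and is not reproduced here.

```python
# A worked version of the Antietam arithmetic, using the article's figures.
# The 1.4:1.0 theoretical ratio reflects 89,000 Union troops against 39,000
# Confederates, with the defender's posture multiplier applied by the model.

p_union = 1.4                    # theoretical combat power ratio, Union:Confederate
p_conf = round(1 / p_union, 2)   # 0.71, the reciprocal
r_conf = 1.16                    # actual combat power ratio from battle results

cev = r_conf / p_conf
print(round(cev, 2))             # 1.63: CEV of Lee's army relative to McClellan's
```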

Air Power and Technology

However, one more digression is required before we look at the examples. Air power was important in all eight of the 20th Century examples listed above. Offhand it would seem that the exercise of air superiority by one side or the other is a manifestation of technological superiority. Nevertheless, there are a few examples of an air force gaining air superiority with equivalent, or even inferior aircraft (in quality or numbers) because of the skill of the pilots.

However, the instances of such a phenomenon are rare. It can be safely asserted that, in the examples used in the following comparisons, the ability to exercise air superiority was essentially a technological superiority (even though in some instances it was magnified by human quality superiority). The one possible exception might be the Eastern Front in World War II, where a slight German technological superiority in the air was offset by larger numbers of Soviet aircraft, thanks in large part to Lend-Lease assistance from the United States and Great Britain.

The Battle of Kursk, 5-18 July, 1943

Following the surrender of the German Sixth Army at Stalingrad, on 2 February, 1943, the Soviets mounted a major winter offensive in south-central Russia and Ukraine which reconquered large areas which the Germans had overrun in 1941 and 1942. A brilliant counteroffensive by German Marshal Erich von Manstein‘s Army Group South halted the Soviet advance, and recaptured the city of Kharkov in mid-March. The end of these operations left the Soviets holding a huge bulge, or salient, jutting westward around the Russian city of Kursk, northwest of Kharkov.

The Germans promptly prepared a new offensive to cut off the Kursk salient. The Soviets energetically built field fortifications to defend the salient against expected German attacks. The German plan was for simultaneous offensives against the northern and southern shoulders of the base of the Kursk salient: Field Marshal Gunther von Kluge’s Army Group Center would drive south from the vicinity of Orel, while Manstein’s Army Group South pushed north from the Kharkov area. The offensive was originally scheduled for early May, but postponements by Hitler, to equip his forces with new tanks, delayed the operation for two months. The Soviets took advantage of the delays to further improve their already formidable defenses.

The German attacks finally began on 5 July. In the north, General Walter Model’s German Ninth Army was soon halted by Marshal Konstantin Rokossovski’s Army Group Center. In the south, however, German General Hermann Hoth’s Fourth Panzer Army and a provisional army commanded by General Werner Kempf were more successful against the Voronezh Army Group of General Nikolai Vatutin. For more than a week the XLVIII Panzer Corps advanced steadily toward Oboyan and Kursk through the most heavily fortified region since the Western Front of 1918. While the Germans suffered severe casualties, they inflicted horrible losses on the defending Soviets. Advancing similarly further east, the II SS Panzer Corps, in the largest tank battle in history, repulsed a vigorous Soviet armored counterattack at Prokhorovka on 12-13 July, but was unable to continue to advance.

The principal reason for the German halt was the fact that the Soviets had thrown into the battle General Ivan Konev’s Steppe Army Group, which had been in reserve. The exhausted, heavily outnumbered Germans had no comparable reserves to commit to reinvigorate their offensive.

A comparison of forces and losses of the Soviet Voronezh Army Group and German Army Group South on the south face of the Kursk Salient is shown below. The strengths are averages over the 12 days of the battle, taking into consideration initial strengths, losses, and reinforcements.

A comparison of the casualty tradeoff can be found by dividing Soviet casualties by German strength, and German losses by Soviet strength. On that basis, 100 Germans inflicted 5.8 casualties per day on the Soviets, while 100 Soviets inflicted 1.2 casualties per day on the Germans, a tradeoff of 4.9 to 1.0.

The statistics for the 8-day offensive of the German XLVIII Panzer Corps toward Oboyan are shown below. Also shown is the relative combat effectiveness value (CEV) of Germans and Soviets, as calculated by the TNDM. As was the case for the Battle of Antietam, this is derived from a mathematical comparison of the theoretical combat power ratio of the two forces (simply considering numbers and weapons characteristics), and the actual combat power ratios reflected by the battle results:

The calculated CEVs suggest that 100 German troops were the combat equivalent of 240 Soviet troops, comparably equipped. The casualty tradeoff in this battle shows that 100 Germans inflicted 5.15 casualties per day on the Soviets, while 100 Soviets inflicted 1.11 casualties per day on the Germans, a tradeoff of 4.64. It is a rule of thumb that the casualty tradeoff is usually about the square of the CEV.
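As a quick check of that rule of thumb against the Oboyan figures (a minimal sketch using the daily rates quoted in the text):

```python
# Casualty tradeoff: enemy casualties inflicted per friendly casualty,
# from the per-100-troops-per-day rates quoted above.
def tradeoff(inflicted_per_100: float, suffered_per_100: float) -> float:
    return inflicted_per_100 / suffered_per_100

oboyan = tradeoff(5.15, 1.11)   # 4.64, as in the text
cev = 2.40                      # German CEV from the TNDM calculation

# 4.64 vs 5.76: the tradeoff is roughly the square of the CEV,
# consistent with the rule of thumb.
print(round(oboyan, 2), round(cev ** 2, 2))
```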

A similar comparison can be made of the two-day battle of Prokhorovka. Soviet accounts of that battle have claimed this as a great victory by the Soviet Fifth Guards Tank Army over the German II SS Panzer Corps. In fact, since the German advance was halted, the outcome was close to a draw, but with the advantage clearly in favor of the Germans.

The casualty tradeoff shows that 100 Germans inflicted 7.7 casualties per day on the Soviets, while 100 Soviets inflicted 1.0 casualties per day on the Germans, for a tradeoff value of 7.7.

When the German offensive began, the Germans had a slight degree of local air superiority. This was soon reversed by German and Soviet shifts of air elements, and during most of the offensive the Soviets had a slender margin of air superiority. In terms of technology, the Germans probably had a slight overall advantage. However, the Soviets had more tanks and, furthermore, their T-34 was superior to any tank the Germans had available at the time. The CEV calculations demonstrate that the Germans had a great qualitative superiority over the Russians, despite near-equality in technology, and despite Soviet air superiority. The Germans lost the battle, but only because they were overwhelmed by Soviet numbers.

German Performance, Western Europe, 1943-1945

Beginning with operations between Salerno and Naples in September, 1943, through engagements in the closing days of the Battle of the Bulge in January, 1945, the pattern of German performance against the Western Allies was consistent. Some German units were better than others, and a few Allied units were as good as the best of the Germans. But on the average, German performance, as measured by CEV and casualty tradeoff, was better than that of the Western Allies by a CEV factor averaging about 1.2, and a casualty tradeoff factor averaging about 1.5. Listed below are ten engagements from Italy and Northwest Europe during 1944.

Technologically, German forces and those of the Western Allies were comparable. The Germans had a higher proportion of armored combat vehicles, and their best tanks were considerably better than the best American and British tanks, but the advantages were at least offset by the greater quantity of Allied armor, and greater sophistication of much of the Allied equipment. The Allies were increasingly able to achieve and maintain air superiority during this period of slightly less than two years.

The combination of vast superiority in numbers of troops and equipment, and increasing Allied air superiority, enabled the Allies to fight their way slowly up the Italian boot, and, between June and December, 1944, to drive from the Normandy beaches to the frontier of Germany. Yet the presence or absence of Allied air support made little difference in terms of either CEVs or casualty tradeoff values. Despite the defeats inflicted on them by the numerically superior Allies during the latter part of 1944, in December the Germans were able to mount a major offensive that nearly destroyed an American army corps, and threatened to drive at least a portion of the Allied armies into the sea.

Clearly, in their battles against the Soviets and the Western Allies, the Germans demonstrated that quality of combat troops was able consistently to overcome Allied technological and air superiority. It was Allied numbers, not technology, that defeated the qualitatively superior Germans.

The Six-Day War, 1967

The remarkable Israeli victories over far more numerous Arab opponents—Egyptian, Jordanian, and Syrian—in June, 1967 revealed an Israeli combat superiority that had not been suspected in the United States, the Soviet Union or Western Europe. This superiority was as awesome on the ground as in the air. (By beginning the war with a surprise attack which almost wiped out the Egyptian Air Force, the Israelis avoided a serious contest with the one Arab air force large enough, and possibly effective enough, to challenge them.) The results of the three brief campaigns are summarized in the table below:

It should be noted that some Israelis who fought against the Egyptians and Jordanians also fought against the Syrians. Thus, the overall Arab numerical superiority was greater than would be suggested by adding the above strength figures, and was approximately 328,000 to 200,000.

It should also be noted that the technological sophistication of the Israeli and Arab ground forces was comparable. The only significant technological advantage of the Israelis was their unchallenged command of the air. (In terms of battle outcomes, it was irrelevant how they had achieved air superiority.) In fact this was a very significant advantage, the full import of which would not be realized until the next Arab-Israeli war.

The results of the Six Day War do not provide an unequivocal basis for determining the relative importance of human factors and technological superiority (as evidenced in the air). Clearly a major factor in the Israeli victories was the superior performance of their ground forces due mainly to human factors. At least as important in those victories was Israeli command of the air, in which both technology and human factors played a part.

The October War, 1973

A better basis for comparing the relative importance of human factors and technology is provided by the results of the October War of 1973 (known to Arabs as the War of Ramadan, and to Israelis as the Yom Kippur War). In this war the Israelis’ unquestioned superiority in the air was largely offset by the Arabs’ possession of highly sophisticated Soviet air defense weapons.

One important lesson of this war was a reassessment of Israeli contempt for the fighting quality of Arab ground forces (which had stemmed from the ease with which they had won their ground victories in 1967). When Arab ground troops were protected from Israeli air superiority by their air defense weapons, they fought well and bravely, demonstrating that Israeli control of the air had been even more significant in 1967 than anyone had then recognized.

It should be noted that the total Arab (and Israeli) forces are those shown in the first two comparisons, above. A Jordanian brigade and two Iraqi divisions formed relatively minor elements of the forces under Syrian command (although their presence on the ground was significant in enabling the Syrians to maintain a defensive line when the Israelis threatened a breakthrough around 20 October). For the comparison of Jordanians and Iraqis the total strength is the total of the forces in the battles (two each) on which these comparisons are based.

One other thing to note is how the Israelis, possibly unconsciously, confirmed the validity of their CEVs with respect to the Egyptians and Syrians by the numerical strengths of their deployments to the two fronts. Since the war ended in a virtual stalemate on both fronts, the overall strength figures suggest rough equivalence of combat capability.

The CEV values shown in the above table are very significant in relation to the debate about human factors and technology. There was little if anything to choose between the technological sophistication of the two sides. The Arabs had more tanks than the Israelis, but (as Israeli General Avraham Adan once told the author) there was little difference in the quality of the tanks. The Israelis again had command of the air, but this was neutralized immediately over the battlefields by the Soviet air defense equipment effectively manned by the Arabs. Thus, while technology was of the utmost importance to both sides, enabling each side to prevent the enemy from gaining a significant advantage, the true determinant of battlefield outcomes was the fighting quality of the troops. And, while the Arabs fought bravely, the Israelis fought much more effectively. Human factors made the difference.

Israeli Invasion of Lebanon, 1982

In terms of the debate about the relative importance of human factors and technology, there are two significant aspects to this small war, in which Syrian forces and PLO guerrillas were the Arab participants. In the first place, the Israelis showed that their air technology was superior to the Syrian air defense technology; as a result, they regained complete control of the skies over the battlefields. Secondly, it provides an opportunity to include a highly relevant quotation.

The statistical comparison shows the results of the two major battles fought between Syrians and Israelis:

In assessing the above statistics, a quotation from the Israeli Chief of Staff, General Rafael Eytan, is relevant.

In late 1982 a group of retired American generals visited Israel and the battlefields in Lebanon. Just before they left for home, they had a meeting with General Eytan. One of the American generals asked Eytan the following question: “Since the Syrians were equipped with Soviet weapons, and your troops were equipped with American (or American-type) weapons, isn’t the overwhelming Israeli victory an indication of the superiority of American weapons technology over Soviet weapons technology?”

Eytan’s reply was classic: “If we had had their weapons, and they had had ours, the result would have been absolutely the same.”

One need not question how the Israeli Chief of Staff assessed the relative importance of the technology and human factors.

Falkland Islands War, 1982

It is difficult to get reliable data on the Falkland Islands War of 1982. Furthermore, the author of this article has not undertaken the kind of detailed analysis of such data as is available. However, it is evident from the information that is available about that war that its results were consistent with those of the other examples examined in this article.

The total strength of Argentine forces in the Falklands at the time of the British counter-invasion was slightly more than 13,000. The British appear to have landed close to 6,400 troops, although it may have been fewer. In any event, it is evident that not more than 50% of the total forces available to both sides were actually committed to battle. The Argentine surrender came 27 days after the British landings, but there were probably no more than six days of actual combat. During these battles the British performed admirably, the Argentinians performed miserably. (Save for their Air Force, which seems to have fought with considerable gallantry and effectiveness, at the extreme limit of its range.) The British CEV in ground combat was probably between 2.5 and 4.0. The statistics were at least close to those presented below:

It is evident from published sources that the British had no technological advantage over the Argentinians; thus the one-sided results of the ground battles were due entirely to British skill (derived from training and doctrine) and determination.

South African Operations in Angola, 1987-1988

Neither the political reasons for, nor political results of, the South African military interventions in Angola in the 1970s, and again in the late 1980s, need concern us in our consideration of the relative significance of technology and of human factors. The combat results of those interventions, particularly in 1987-1988 are, however, very relevant.

The operations between elements of the South African Defense Force (SADF) and the armed forces of the Popular Movement for the Liberation of Angola (FAPLA) took place in southeast Angola, generally in the region east of the city of Cuito-Cuanavale. Operating with the SADF units were a few small units of Jonas Savimbi’s National Union for the Total Independence of Angola (UNITA). To provide air support to the SADF and UNITA ground forces, it would have been necessary for the South Africans to establish air bases either in Botswana, Southwest Africa (Namibia), or in Angola itself. For reasons that were largely political, they decided not to do that, and thus operated under conditions of FAPLA air supremacy. This led them, despite terrain generally unsuited for armored warfare, to use a high proportion of armored vehicles (mostly light armored cars) to provide their ground troops with some protection from air attack.

Summarized below are the results of three battles east of Cuito-Cuanavale in late 1987 and early 1988. Included with FAPLA forces are a few Cubans (mostly in armored units); included with the SADF forces are a few UNITA units (all infantry).

FAPLA had complete command of the air, and substantial numbers of MiG-21 and MiG-23 sorties were flown against the South Africans in all of these battles. This technological superiority was probably partly offset by greater South African EW (electronic warfare) capability. The ability of the South Africans to operate effectively despite hostile air superiority was reminiscent of that of the Germans in World War II. It was a further demonstration that, no matter how important technology may be, the fighting quality of the troops is even more important.

The tank figures include armored cars. In the first of the three battles considered, FAPLA had by far the more powerful and more numerous medium tanks (20 to 0). In the other two, SADF had a slight or significant advantage in medium tank numbers and quality. But it didn’t seem to make much difference in the outcomes.

Kuwait War, 1991

The previous seven examples permit us to examine the results of the Kuwait (or Second Gulf) War with more objectivity than might otherwise have been possible. First, let’s look at the statistics. Note that the comparison shown below is for four days of ground combat, February 24-28, and shows only operations of U.S. forces against the Iraqis.

There can be no question that the single most important contribution to the overwhelming victory of U.S. and other U.N. forces was the air war that preceded, and accompanied, the ground operations. But two comments are in order. The air war alone could not have forced the Iraqis to surrender. On the other hand, it is evident that, even without the air war, U.S. forces would have readily overwhelmed the Iraqis, though probably in more than four days, and with more than 285 casualties. But the outcome would have been hardly less one-sided.

The Vietnam War, 1965-1973

It is impossible to make the kind of mathematical analysis for the Vietnam War as has been done in the examples considered above. The reason is that we don’t have any good data on the Vietcong and North Vietnamese forces.

However, such quantitative analysis really isn’t necessary. There can be no doubt that one of the opponents was a superpower, the most technologically advanced nation on earth, while the other side was what Lyndon Johnson called a “raggedy-ass little nation,” a typical representative of “the third world.“

Furthermore, even if we were able to make the analyses, they would very possibly be misinterpreted. It can be argued (possibly with some exaggeration) that the Americans won all of the battles. The detailed engagement analyses could only confirm this fact. Yet it is unquestionable that the United States, despite airpower and all other manifestations of technological superiority, lost the war. The human factor—as represented by the quality of American political (and to a lesser extent military) leadership on the one side, and the determination of the North Vietnamese on the other side—was responsible for this defeat.

Conclusion

In a recent article in the Armed Forces Journal International Col. Philip S. Neilinger, USAF, wrote: “Military operations are extremely difficult, if not impossible, for the side that doesn’t control the sky.” From what we have seen, this is only partly true. And while there can be no question that operations will always be difficult to some extent for the side that doesn’t control the sky, the degree of difficulty depends to a great degree upon the training and determination of the troops.

What we have seen above also enables us to view with a better perspective Colonel Neilinger’s subsequent quote from British Field Marshal Montgomery: “If we lose the war in the air, we lose the war and we lose it quickly.” That statement was true for Montgomery, and for the Allied troops in World War II. But it was emphatically not true for the Germans.

The examples we have seen from relatively recent wars, therefore, enable us to establish priorities on assuring readiness for war. It is without question important for us to equip our troops with weapons and other materiel which can match, or come close to matching, the technological quality of the opposition’s materiel. We must realize that we cannot—as some people seem to think—buy good forces by technology alone. Even more important is to assure the fighting quality of the troops. That must be, by far, our first priority in peacetime budgets and in peacetime military activities of all sorts.

NOTES

[1] This calculation is automatic in analyses of historical battles by the Tactical Numerical Deterministic Model (TNDM).

[2] The initial tank strength of the Voronezh Army Group was about 1,100 tanks. About 3,000 additional Soviet tanks joined the battle between 6 and 12 July. At the end of the battle there were about 1,800 Soviet tanks operational in the battle area; at the same time there were about 1,000 German tanks still operational.

[3] The relative combat effectiveness value of each force is calculated in comparison to 1.0. Thus the CEV of the Germans is 2.40:1.0, while that of the Soviets is 0.42:1.0. The opposing CEVs are always the reciprocals of each other.

Drones And The U.S. Navy

An X-47 Unmanned Combat Air System (UCAS) drone lands on the USS Theodore Roosevelt during a test in 2014. [Breaking Defense]

Preamble & Warning (P&W): Please forgive me, this is an acronym heavy post.

In May 2013, the U.S. Navy (USN) reached a milestone when a “drone,” or unmanned aerial vehicle (UAV), landed on and took off from an aircraft carrier. This was a significant achievement in aviation, and heralded an era of combat UAVs (UCAVs) being integrated into carrier air wings (CVW). This vehicle, the X-47B, was built by Northrop Grumman under the concept of a carrier-based stealthy strike vehicle.

Ultimately, after almost three years, the Navy’s decision was announced:

On 1 February 2016, after many delays over whether the [Unmanned Carrier-Launched Airborne Surveillance and Strike] UCLASS would specialize in strike or intelligence, surveillance and reconnaissance (ISR) roles, it was reported that a significant portion of the UCLASS effort would be directed to produce a Super Hornet-sized carrier-based aerial refueling tanker as the Carrier-Based Aerial-Refueling System (CBARS), with ‘a little ISR’ and some capabilities for communications relay, and strike capabilities put off to a future version of the aircraft. In July 2016, it was officially named ‘MQ-25A Stingray’.

The USN, which had just proven that it could add a stealthy UCAV to carrier flight deck operations, decided to put this new capability on the shelf and instead refocus the efforts of the aerospace defense industry on a brand new requirement, namely …

For mission tanking, the threshold requirement is offloading 14,000 lb. of fuel to aviation assets at 500 nm from the ship, thereby greatly extending the range of the carrier air wing, including the Lockheed Martin F-35C and Boeing F/A-18 Super Hornet. The UAV must also be able to integrate with the Nimitz-class carriers, being able to safely launch and recover and not take up more space than is allocated for storage, maintenance and repairs.

Boeing has fashioned part of St. Louis Lambert International Airport into an aircraft carrier deck, complete with a mock catapult system. [Boeing]

Why did they do this?

The Pentagon apparently made this program change in order to address the Navy’s expected fighter shortfall by directing funds to buy additional F/A-18E/F Super Hornets and accelerate purchases and development of the F-35C. Having the CBARS as the first carrier-based UAV provides a less complex bridge to the future F/A-XX, should it be an autonomous strike platform. It also addresses the carriers’ need for an organic refueling aircraft, proposed as a mission for the UCLASS since 2014, freeing up the 20–30 percent of Super Hornets performing the mission in a more capable and cost effective manner than modifying the F-35, V-22 Osprey, and E-2D Hawkeye, or bringing the retired S-3 Viking back into service.

Notice within this quote the supposition that the F/A-XX would be an autonomous strike platform. This program was originally a USN-specific program to build a next-generation platform to perform both strike and air superiority missions, much like the F/A-18 aircraft are “swing role.” The U.S. Air Force (USAF) had a separate program for a next-generation air superiority aircraft called the F-X. These programs were combined by the Department of Defense (DoD) into the Next Generation Air Dominance (NGAD) program. We can tell from the name of this program that it is clearly focused on the air superiority mission, as compared to the balance of strike and air superiority implicit in the USN program.

Senator John McCain, chairman of the Senate Armed Services Committee (SASC), wrote a letter to then Secretary of Defense Ash Carter on 24 March 2015, stating, “I strongly believe that the Navy’s first operational unmanned combat aircraft must be capable of performing a broad range of missions in contested environments as part of the carrier air wing, including precision strike as well as [ISR].” This is effectively an endorsement of the X-47B, and quite unlike the MQ-25.

I’m in agreement with Senator McCain on this. I think that a great deal of experience could have been gained by continuing the development and test of the X-47B, and possibly deploying the vehicle to the fleet.

The Navy hinted at the possibility of using the UCLASS in air-to-air engagements as a ‘flying missile magazine’ to supplement the F/A-18 Super Hornet and F-35C Lightning II as a type of ‘robotic wingman.’ Its weapons bay could be filled with AIM-120 AMRAAMs and be remotely operated by an E-2D Hawkeye or F-35C flight leader, using their own sensors and human judgment to detect, track, and direct the UAV to engage an enemy aircraft. The Navy’s Naval Integrated Fire Control-Counter Air (NIFC-CA) concept gives a common picture of the battle space to multiple air platforms through data-links, where any aircraft could fire on a target in their range that is being tracked by any sensor, so the forward deployed UCLASS would have its missiles targeted by another controller. With manned-unmanned teaming for air combat, a dedicated unmanned supersonic fighter may not be developed, as the greater cost of high-thrust propulsion and an airframe of similar size to a manned fighter would deliver a platform with comparable operating costs and still without an ability to engage on its own.

Indeed, the German Luftwaffe has completed an air combat concept study, stating that the fighter of the 2040s will be a “stealthy drone herder”:

Interestingly the twin-engine, twin-tail stealth design would be a twin-seat design, according to Alberto Gutierrez, Head of Eurofighter Programme, Airbus DS. The second crewmember may be especially important for the FCAS concept of operations, which would see it operate in a wider battle network, potentially as a command and control asset or UCAV/UAV mission commander.

Instead, the USN has decided to banish the drones to the tanker and light ISR roles, to focus on having more Super Hornets available, and to move towards integrating the F-35C into the CVW. I believe that this is a missed opportunity to gain direct front-line experience in operating UCAVs as part of combat carrier operations.

Comparing the RAND Version of the 3:1 Rule to Real-World Data

Chuliengcheng. In a glorious death eternal life. (Battle of Yalu River, 1904) [Wikimedia Commons]

[The article below is reprinted from the Winter 2010 edition of The International TNDM Newsletter.]

Comparing the RAND Version of the 3:1 Rule to Real-World Data
Christopher A. Lawrence

For this test, The Dupuy Institute took advantage of two of its existing databases from the DuWar suite of databases. The first is the Battles Database (BaDB), which covers 243 battles from 1600 to 1900. The second is the Division-level Engagement Database (DLEDB), which covers 675 division-level engagements from 1904 to 1991.

The first was chosen to provide a historical context for the 3:1 rule of thumb. The second was chosen so as to examine how this rule applies to modern combat data.

We decided that this should be tested against the RAND version of the 3:1 rule as documented by RAND in 1992 and used in JICM [Joint Integrated Contingency Model] (with SFS [Situational Force Scoring]) and other models. This rule, as presented by RAND, states: “[T]he famous ‘3:1 rule,’ according to which the attacker and defender suffer equal fractional loss rates at a 3:1 force ratio if the battle is in mixed terrain and the defender enjoys ‘prepared’ defenses…”

Therefore, we selected out all those engagements from these two databases with force ratios from 2.5-to-1 to 3.5-to-1 (inclusive). It was then a simple matter to map those to a chart that compared attacker losses to defender losses. In the case of the pre-1904 cases, even with a large database (243 cases), there were only 12 cases of combat in that range, hardly statistically significant. That was because most of the combat was at odds ratios in the range of 0.50-to-1 to 2.00-to-1.

The count of number of engagements by odds in the pre-1904 cases:

As the database is one of battles, these were usually joined only at reasonably favorable odds, as shown by the fact that 88 percent of the battles occur between 0.40-to-1 and 2.50-to-1 odds. The twelve pre-1904 cases in the range of 2.50 to 3.50 are shown in Table 1.

If the RAND version of the 3:1 rule were valid, one would expect that the “Percent per Day Loss Ratio” (the last column) would hover around 1.00, as this is the ratio of the attacker percent loss rate to the defender percent loss rate. As it is, 9 of the 12 data points are noticeably below 1 (below 0.40, or a 1-to-2.50 exchange rate). This leaves only three cases (25%) with an exchange rate that would support such a “rule.”
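The screening described above is straightforward to express in code. This is a minimal sketch under assumed field names; the DuWar databases are not public, so the sample engagement is hypothetical.

```python
# Percent-per-day loss ratio screening, as described in the text. Field
# names and the sample engagement are hypothetical stand-ins for DuWar rows.

def percent_per_day_loss_ratio(e: dict) -> float:
    """Attacker percent-per-day loss rate over the defender's."""
    atk = e["atk_losses"] / e["atk_strength"] / e["days"]
    dfd = e["def_losses"] / e["def_strength"] / e["days"]
    return atk / dfd

engagements = [
    {"atk_strength": 30_000, "def_strength": 10_000,
     "atk_losses": 900, "def_losses": 1_500, "days": 3},  # hypothetical row
]

# Select the roughly-3:1 attacks (force ratios 2.5-to-1 to 3.5-to-1, inclusive).
three_to_one = [e for e in engagements
                if 2.5 <= e["atk_strength"] / e["def_strength"] <= 3.5]

# RAND's version of the rule predicts ratios hovering near 1.0 here; the
# historical cases mostly fall well below that band.
in_band = [e for e in three_to_one
           if 0.5 <= percent_per_day_loss_ratio(e) <= 2.0]
print(len(in_band), "of", len(three_to_one), "support the rule")
```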

If we look at the simple ratio of actual losses (vice percent losses), then the numbers come much closer to parity, but this is not the RAND interpretation of the 3:1 rule. Six of the twelve numbers “hover” around an even exchange ratio, with the six other sets of data being widely off that central point. “Hover” for the rest of this discussion means that the exchange ratio ranges from 0.50-to-1 to 2.00-to-1.

Still, this is early modern linear combat, and is not always representative of modern war. Instead, we will examine the 634 cases in the Division-level Engagement Database (which consists of 675 cases) where we have worked out the force ratios. While this database covers from 1904 to 1991, most of the cases are from WWII (1939-1945). Just to compare:

As such, 87% of the cases are from WWII data and 10% of the cases are from post-WWII data. The engagements without force ratios are those that we are still working on, as The Dupuy Institute is always expanding the DLEDB as a matter of routine. The specific cases where the force ratios are between 2.50-to-1 and 3.50-to-1 (inclusive) are shown in Table 2:

This is a total of 98 engagements at force ratios of 2.50 to 3.50 to 1. It is 15 percent of the 634 engagements for which we had force ratios. With this fairly significant representation of the overall population, we are still getting no indication that the 3:1 rule, as RAND postulates it applies to casualties, fits the data at all. Of the 98 engagements, only 19 demonstrate a percent per day loss ratio (casualty exchange ratio) between 0.50-to-1 and 2-to-1. This is only 19 percent of the engagements at roughly 3:1 force ratio. Some 72 percent (71 cases) of those engagements were at lower figures (below 0.50-to-1) and only 8 percent (8 cases) were at a higher exchange ratio. The data clearly was not clustered around the 0.50-to-1 to 2-to-1 range, but was well to the left (lower) of it.

Looking just at straight exchange ratios, we do get a better fit, with 31 percent (30 cases) of the figures ranging between 0.50-to-1 and 2-to-1. Still, this even exchange might not be the norm, with 45 percent (44 cases) lower and 24 percent (24 cases) higher. By definition, such a fit implies one-third the fractional losses for the attacker postulated in the RAND version of the 3:1 rule. This is effectively an order of magnitude difference, and it clearly does not represent the norm or the center case.

The percent per day loss exchange ratio ranges from 0.00 to 5.71. The data tends to be clustered at the lower values, so the high values are very much outliers. The highest percent exchange ratio is 5.71, the second highest is 4.41, and the third highest is 2.92. At the other end of the spectrum, there are four cases where no losses were suffered by one side and seven where the exchange ratio was 0.01 or less. Ignoring the “N/A” cases (no losses suffered by one side) and the two high outliers (5.71 and 4.41) leaves a range of values from 0.00 to 2.92 across 92 cases. With an even distribution across that range, one would expect that 51 percent of them would be in the range of 0.50-to-1 to 2.00-to-1. With only 19 percent of the cases being in that range, one is left to conclude that there is no clear correlation here. In fact, the opposite is true: there is a negative relationship. Not only is the RAND construct unsupported, it is clearly and soundly contradicted by this data. Furthermore, the RAND construct is theoretically a worse predictor of casualty rates than randomly selecting a value for the percentile exchange rate between 0 and 2.92. We do believe this data is appropriate and accurate for such a test.
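For reference, the 51 percent expectation cited above is simply the share of the 0.50-to-2.00 band within the 0.00-to-2.92 range:

```python
# Share of the 0.50-2.00 band under an even spread over 0.00-2.92.
print(round((2.00 - 0.50) / 2.92, 2))  # 0.51, the 51 percent quoted above
```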

As there are only 19 cases of 3:1 attacks falling in the even percentile exchange rate range, we should look at these cases for a moment:

One will note that in these 19 cases, the average attacker casualties are way out of line with the average for the entire data set (3.20 versus 1.39, or 3.20 versus 0.63 with pre-1943 and Soviet-doctrine attackers removed). The reverse is the case for the defenders (3.12 versus 6.08, or 3.12 versus 5.83 with pre-1943 and Soviet-doctrine attackers removed). Of course, of the 19 cases, 2 are pre-1943 cases and 7 are cases of Soviet-doctrine attackers (in fact, 8 of the 14 cases of Soviet-doctrine attackers are in this selection of 19 cases). This leaves 10 other cases from the Mediterranean and ETO (Northwest Europe 1944). These are clearly the unusual cases, the outliers. While the RAND 3:1 rule may be applicable to Soviet-doctrine offensives (as it applies to 8 of the 14 such cases we have), it does not appear to be applicable to anything else. By the same token, it also does not appear to apply to virtually any cases of post-WWII combat. This all strongly argues that the RAND construct is not only unproven, but clearly incorrect.

The fact that this construct also appears in Soviet literature, but nowhere else in US literature, indicates that this is indeed where the rule was drawn from. One must consider that the original scenarios run for the RSAC [RAND Strategy Assessment Center] wargame were “Fulda Gap” and Korean War scenarios. As such, they regularly pitted Soviet attackers against Allied defenders. It would appear that the 3:1 rule they used more closely reflected the experience of Soviet attackers in WWII than anything else. Therefore, it may have been a fine representation for those scenarios, as long as there were no US counterattacks or US offensives (and assuming that the Soviet Army of the 1980s performed at the same level as it did in the 1940s).

There was a clear relative performance difference between the Soviet Army and the German Army in World War II (see our Capture Rate Study Phase I & II and Measuring Human Factors in Combat for a detailed analysis of this).[1] It was roughly on the order of a 3-to-1 casualty exchange ratio. Therefore, it is not surprising that Soviet writers would create analytical tables based upon an equal percentage exchange of losses when attacking at 3:1. What is surprising is that such a table would be used in the US to represent US forces now. This is clearly not a correct application.

Therefore, RAND’s SFS, as currently constructed, is calibrated to, and should only be used to represent, a Soviet-doctrine attack on first-world forces, where the Soviet-style attacker is clearly not properly trained and where the degree of performance difference is similar to that between the Germans and Soviets in 1942-44. It should not be used for US counterattacks, US attacks, or for any forces of roughly comparable ability (whether they use Soviet-style doctrine or not). Furthermore, it should not be used for US attacks against forces of inferior training, motivation, and cohesiveness. If it is, then any such tables should be expected to produce incorrect results, with attacker losses being far too high relative to the defender. In effect, the tables unrealistically penalize the attacker.

As JICM with SFS is now being used for a wide variety of scenarios, it should not be used at all until this fundamental error is corrected, even if that use is only for training. With combat tables keyed to a result that is clearly off by an order of magnitude, the danger of negative training is high.

NOTES

[1] Capture Rate Study Phases I and II Final Report (The Dupuy Institute, March 6, 2000) (2 vols.) and Measuring Human Factors in Combat—Part of the Enemy Prisoner of War Capture Rate Study (The Dupuy Institute, August 31, 2000). Both of these reports are available through our web site.

Aerial Combined Arms

In a previous post, I quoted Jules Hurst’s comparison between the medieval knights of old and modern-day fighter pilots. His point was that the future of aerial combat will feature more combined arms. This I agree with; the degree of specialization will increase, although our ability to predict what form it will take is uncertain. Hurst’s second point, that today’s aerial combat is akin to jousting, with knights looking to independently take down foes, I do not agree with at all.

Last night, I watched the History Channel documentary “Dogfights of Desert Storm,” a wonderful summary of several selected dogfights from the first Gulf War (1991, US and coalition vs Iraq), which included:

1. A furball between an unarmed EF-111 and a Mirage F1. Eventually, an F-15C came to the rescue, but the EF-111 crew was apparently awarded the Distinguished Flying Cross for its actions that day. Ultimately, the F1 hit the ground, and the F-15C got the credit.

2. A complex dogfight pitting a flight of two F-15Cs against two MiG-25s and two MiG-29s. This was a hairy affair, with lots of maneuver. The MiG-25s were able to decoy many of the heat-seeking AIM-9s, so the radar-guided AIM-7 missiles had to be used to shoot them down.

[As previously reported, an F/A-18F had problems trying to down a Syrian Su-22 Fitter with an AIM-9 missile due to the effectiveness of Russian-made flares, and had to resort to an AIM-120 radar-guided missile. The Soviet-era preference for carrying missiles with more than one type of seeker thus seems to be quite good advice. The U.S. Air Force (USAF) has traditionally adhered to the concept of a beyond-visual-range (BVR), medium-range, radar-guided missile (the AIM-7 and its AIM-120 successor), coupled with the short-range AIM-9 infrared missile. The gap this leaves is the long-range, infrared-guided missile.]

3. A well-run dogfight pitting a flight of four F-15Cs against a flight of four F-1s. One of the F-1s turned back to base; whether from fear, prudence, or mechanical difficulty, it is difficult to say. The three other F-1s were all downed by AIM-7 missiles fired from beyond visual range. What was notable about this engagement was the patience of the USAF flight leader, who did not immediately lock on to the F-1s, in order to avoid triggering their radar warning receivers (RWR) and giving up the element of surprise by alerting them to the impending attack.

The statistic given was that 60% of the aerial victories in the entire conflict were from BVR.

The coalition’s triumph was an emphatic boost for current air war strategy: multiple aircraft with specific roles working in concert to achieve victory. “Air war in 1990, as it is today, is a team sport. Multiple weapons disrupted the Iraqi capability to deal with it. It was information overload. They could not deal with the multiple successive strikes, and the fact that their radars went offline, and their command and control was shut down … jamming … deception – it was like having essentially a ‘war nervous breakdown.’” (emphasis added)

Larry Pitts, a USAF F-15C Eagle pilot (retired), said:

aerial victory against an enemy airplane was a career highlight for me. It’s something that I’ll never be able to beat, but you know in my mind, I did what any fighter pilot would have done if any enemy fighter had been put in front of him. I relied on my training, I engaged the airplane, protected my wingman as he protected me, and came out of it alive.

One key element in all of the combat recounted by the USAF pilots was the presence of airborne early warning aircraft, at the time the E-3C Sentry. Indeed, this form of combined arms—which is effectively an augmentation of a fighter pilot’s sensors—has been around for a surprisingly long time.

  • In February 1944, the United States Navy (USN), under Project Cadillac, equipped a TBM Avenger torpedo bomber with an airborne radar; the resulting TBM-3W entered service in the airborne early warning (AEW) role.
  • In June 1949, a joint USN-USAF program resulted in the EC-121 Warning Star, a conversion of the Lockheed L-1049 Super Constellation airliner. It entered service to reinforce the Distant Early Warning (DEW) Line across the Arctic in Canada and Alaska, built to detect and defend against Soviet Air Force bombers flying over the pole. This was also the plane that played the “AWACS” role in Vietnam.
  • In January 1964, the E-2 Hawkeye entered service with the USN, which required a carrier-based AEW platform.
  • In March 1977, the first E-3 Sentry was delivered to the USAF by Boeing.

Indeed, the chart below illustrates the wide variety of roles and platforms flown by the USAF in its combined arms operations.

[Source: Command: Modern Air & Naval Operations]

In addition, the USAF just released its FY2019 budget, fresh from budget action in Congress. This had a few surprises, including the planned retirement of both the B-1B and the B-2A in favor of the upcoming B-21 Raider, while continuing to enhance and improve the B-52, a very old platform introduced in 1955. This matches a shift in USAF thinking, from a position that all fourth-generation (non-stealthy) aircraft are entirely obsolete to one in which they continue to play a role as a follow-up force, perhaps in the role of a “distant archer” with stand-off weapons. I previously discussed the Talon Hate pod enabling network communications between the F-22 and F-15C systems.

More on this to come!

Russian Army Experiments With Using Tanks For Indirect Fire

Russian Army T-90S main battle tanks. [Ministry of Defense of the Russian Federation]

Finnish freelance writer and military blogger Petri Mäkelä spotted an interesting announcement from the Ministry of Defense of the Russian Federation: the Combined-Arms Army of the Western Military District is currently testing the use of main battle tanks for indirect fire at the Pogonovo test range in the Voronezh region.

According to Major General Timur Trubiyenko, First Deputy Commander of the Western Military District Combined-Arms Army, in the course of company exercises, 200 tankers will test a combination of platoon direct and indirect fire tactics against simulated armored, lightly armored, and concealed targets up to 12 kilometers away.

Per Mäkelä, the exercise will involve T-90S main battle tanks using their 2A46 125 mm/L48 smoothbore cannons. According to the Ministry of Defense, more than 1,000 Russian Army soldiers, employing over 100 weapons systems and special equipment items, will participate in the exercises between 19 and 22 February 2018.

Tanks were used on occasion to deliver indirect fire in World War II and Korea, but it is not a commonly used modern tactic. Modern fire control systems, guided rounds, and drone spotters, however, might offer the means to make it more useful.

Aerial Drone Tactics, 2025-2050

[Image: War On The Rocks.]

My previous post outlined the potential advantages and limitations of current and future drone technology. The real utility of drones in future warfare may lie in a tactic that is at once quite old and quite new: swarming. “‘This [drone swarm concept] goes all the way back to the tactics of Attila the Hun,’ says Randall Steeb, senior engineer at the Rand Corporation in the US. ‘A light attack force that can defeat more powerful and sophisticated opponents. They come out of nowhere, attack from all sides and then disappear, over and over.’”

To be effective, Mr. Steeb’s concept would require drones to be able to speed away from their adversary, or to hide. The Huns are described “as preferring to defeat their enemies by deceit, surprise attacks, and cutting off supplies. The Huns brought large numbers of horses to use as replacements and to give the impression of a larger army on campaign.” Also, prior to the problems the Huns under Attila caused the Roman Empire (~400 CE), another group of people, the Scythians, used similar tactics much earlier, as described by Herodotus in the fifth century BCE: “With great mobility, the Scythians could absorb the attacks of more cumbersome foot soldiers and cavalry, just retreating into the steppes. Such tactics wore down their enemies, making them easier to defeat.” These tactics were also used by the Parthians, resulting in the defeat of the Romans under Crassus at the Battle of Carrhae, 53 BCE. Clearly, maneuver is as old as warfare itself.

Indeed, others have their own ancient analogies.

Today, fighter pilots approach warfare like a questing medieval knight. They search for opponents with similar capabilities and defeat them by using technologically superior equipment or better application of individual tactics and techniques. For decades, leading air forces nurtured this dynamic by developing expensive, manned air superiority fighters. This will all soon change. Advances in unmanned combat aerial vehicles (UCAVs) will turn fighter pilots from noble combatants to small-unit leaders and drive the development of new aerial combined arms tactics.

Drone Swarms: A Game Changer?

We can see that as new technologies come along, they enable a new look at warfare, and often a new implementation of ancient tactics. Some claim that this changes the game, and indeed may change the fundamental nature of war.

Peter Singer, an expert on future warfare at the New America think-tank, is in no doubt. ‘What we have is a series of technologies that change the game. They’re not science fiction. They raise new questions. What’s possible? What’s proper?’ Mr. Singer is talking about artificial intelligence, machine learning, robotics and big-data analytics. Together they will produce systems and weapons with varying degrees of autonomy, from being able to work under human supervision to ‘thinking’ for themselves. The most decisive factor on the battlefield of the future may be the quality of each side’s algorithms. Combat may speed up so much that humans can no longer keep up. Frank Hoffman, a fellow of the National Defense University who coined the term ‘hybrid warfare’, believes that these new technologies have the potential not just to change the character of war but even possibly its supposedly immutable nature as a contest of wills. For the first time, the human factors that have defined success in war, ‘will, fear, decision-making and even the human spark of genius, may be less evident,’ he says. (emphasis added)

Drones are highly capable, and with increasing autonomy, they themselves may be immune to fear. Technology has progressed step by step to alter the character of war. Think of the Roman soldier and his personal experience of up-close warfare versus that of the modern sniper. Each has a different experience of warfare, and fear manifests itself in different ways. Unless we create and deploy fully autonomous systems, with no human in or on the loop, there will be opportunities for fear and confusion in the human mind to creep into martial matters. And indeed, with so much new technology, friction of some sort is almost assured.

I’m not alone in this assessment. Secretary of Defense James Mattis has said “You go all the way back to Thucydides who wrote the first history and it was of a war and he said it’s fear and honor and interest and those continue to this day. The fundamental nature of war is unchanging. War is a human social phenomenon.”

Swarming and Information Dominance

Indeed, the notion of the importance of information dominance plays upon one of the most fundamental aspects of warfare: surprise. There are many synonyms for surprise; one of the most popular these days is situational awareness (SA). In a recent assessment of trends in air-to-air combat for the Center for Strategic and Budgetary Assessments (CSBA), Dr. John Stillion described the impact of SA.

Aerial combat over the past two decades, though relatively rare, continues to demonstrate the importance of superior SA. The building blocks, however, of superior SA, information acquisition and information denial, seem to be increasingly associated with sensors, signature reduction, and networks. Looking forward, these changes have greatly increased the proportion of BVR [Beyond Visual Range] engagements and likely reduced the utility of traditional fighter aircraft attributes, such as speed and maneuverability, in aerial combat. At the same time, they seem to have increased the importance of other attributes.

Stillion, famous for his RAND briefing on the F-35, proposes an interesting concept of operations for air-to-air combat, centered on larger aircraft with bigger sensor apertures and subsonic UCAS fighters in the “front line.” He has a good video illustrating how this concept would work against an adversary.

[I]t is important to acknowledge that all of the foregoing discussion is based on certain assumptions plus analysis of past trends, and the future of aerial combat might continue to belong to fast, agile aircraft. The alternative vision of future aerial combat presented in Chapter 5 relies heavily on robust LoS [Line of Sight] data links to enable widely distributed aircraft to efficiently share information and act in concert to achieve superior SA and combat effectiveness. Should the links be degraded or denied, the concept put forward here would be difficult or impossible to implement.

Therefore, in the near term, one of the most important capabilities to enable is a secure battle network. This will be required for remotely piloted and autonomous systems alike, and it will be the foundation of information dominance: the acquisition of information for use by friendly forces, and the denial of information to an adversary.