
U.S. Army Solicits Proposals For Mobile Protected Firepower (MPF) Light Tank

The U.S. Army’s late and apparently lamented M551 Sheridan light tank. [U.S. Department of the Army/Wikipedia]

The U.S. Army recently announced that it will issue a Request for Proposals (RFP) in November to produce a new lightweight armored vehicle for its Mobile Protected Firepower (MPF) program. MPF is intended to field a company of vehicles for each Army Infantry Brigade Combat Team to provide them with “a long-range direct-fire capability for forcible entry and breaching operations.”

The Army also plans to field the new vehicle quickly. It is dispensing with the usual two-to-three-year technology development phase and will ask for delivery of the first sample vehicles by April 2018, one month after the RFP phase is scheduled to end. This will inevitably favor proposals based on existing off-the-shelf vehicle designs and “mature technology.”

The Army apparently will also accept proposals featuring turret-mounted 105mm main guns, at least initially. According to previous MPF parameters, acceptable designs will eventually need to be able to accommodate 120mm guns.

I have observed in the past that the MPF is the result of the Army’s concerns that its light infantry may be deprived of direct fire support on anti-access/area denial (A2/AD) battlefields. Track-mounted, large caliber direct fire guns dedicated to infantry support are something of a doctrinal throwback to the assault guns of World War II, however.

There was a noted tendency during World War II to use anything on the battlefield that resembled a tank as a main battle tank, with unhappy results for the not-main battle tanks. As a consequence, assault guns, tank destroyers, and light tanks became evolutionary dead-ends in the development of post-World War II armored doctrine (the late M551 Sheridan, retired without replacement in 1996, notwithstanding). [For more on the historical background, see The Dupuy Institute, “The Historical Effectiveness of Lighter-Weight Armored Forces,” August 2001.]

The Army has been reluctant to refer to MPF as a light tank, but as David Dopp, the MPF Program Manager admitted, “I don’t want to say it’s a light tank, but it’s kind of like a light tank.” He went on to say that “It’s not going toe to toe with a tank…It’s for the infantry. It goes where the infantry goes — it breaks through bunkers, it works through targets that the infantry can’t get through.”

Major General David Bassett, program executive officer for the Army’s Ground Combat Systems, concurred. It will be a tracked vehicle with substantial armor protection, Bassett said, “but certainly not what you’d see on a main battle tank.”

It will be interesting to see what the RFPs have to offer.

Previous TDI commentaries on the MPF Program:

https://dupuyinstitute.dreamhosters.com/2016/10/19/back-to-the-future-the-mobile-protected-firepower-mpf-program/

https://dupuyinstitute.dreamhosters.com/2017/03/21/u-s-army-moving-forward-with-mobile-protected-firepower-mpf-program/

TDI Friday Read: U.S. Airpower

[Image by Geopol Intelligence]

This weekend’s edition of TDI’s Friday Read is a collection of posts on the current state of U.S. airpower by guest contributor Geoffery Clark. The same factors changing the character of land warfare are changing the way conflict will be waged in the air. Clark’s posts highlight some of the ways these changes are influencing current and future U.S. airpower plans and concepts.

F-22 vs. F-35: Thoughts On Fifth Generation Fighters

The F-35 Is Not A Fighter

U.S. Armed Forces Vision For Future Air Warfare

The U.S. Navy and U.S. Air Force Debate Future Air Superiority

U.S. Marine Corps Concepts of Operation with the F-35B

The State of U.S. Air Force Air Power

Fifth Generation Deterrence


The Effects Of Dispersion On Combat

[The article below is reprinted from the December 1996 edition of The International TNDM Newsletter. A revised version appears in Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Potomac Books, 2017), Chapter 13.]

The Effects of Dispersion on Combat
by Christopher A. Lawrence

The TNDM[1] does not play dispersion. But it is clear that dispersion has continued to increase over time, and this must have some effect on combat. This effect was identified by Trevor N. Dupuy in his various writings, starting with The Evolution of Weapons and Warfare. His graph from Understanding War of battle casualty trends over time is presented here as Figure 1. As dispersion has changed dramatically over time, one would expect casualties to change over time as well. I therefore went back to the Land Warfare Database (the 605-engagement version[2]) and proceeded to look at casualties over time and dispersion from every angle that I could.

I eventually realized that I was going to need some better definition of the time periods I was measuring, as measuring by year scattered the data, measuring by century assembled the data in too gross a manner, and measuring by war left a confusing picture due to the number of small wars with only two or three battles in them in the Land Warfare Database. I eventually grouped the wars into 14 categories so I could fit them onto one readable graph:

To give some idea of how representative the battles listed in the LWDB were for covering the period, I have included a count of the number of battles listed in Michael Clodfelter’s two-volume book Warfare and Armed Conflict, 1618-1991. In the case of WWI, WWII, and later, battles tend to be defined as divisional-level engagements, and there were literally tens of thousands of those.

I then tested my data again, looking at the 14 wars that I defined:

  • Average Strength by War (Figure 2)
  • Average Losses by War (Figure 3)
  • Percent Losses Per Day By War (Figure 4)
  • Average People Per Kilometer By War (Figure 5)
  • Losses per Kilometer of Front by War (Figure 6)
  • Strength and Losses Per Kilometer of Front By War (Figure 7)
  • Ratio of Strength and Losses per Kilometer of Front by War (Figure 8)
  • Ratio of Strength and Losses per Kilometer of Front by Century (Figure 9)

A review of average strengths over time by century and by war showed no surprises (see Figure 2). Up through around 1900, battles were easy to define: they were one- to three-day affairs between clearly defined forces at a locale. The forces had a clear left flank and right flank that were not bounded by other friendly forces. After 1900 (and in a few cases before), warfare was fought on continuous fronts, with a ‘battle’ often being a large multi-corps operation. It is no longer clearly understood what is meant by a battle, as the forces, area covered, and duration can vary widely. For the LWDB, each battle was defined as the analyst wished. In the case of WWI, there are a lot of very large battles which drive the average battle size up. In the case of WWII, there are a lot of division-level battles, which bring the average down. In the case of the Arab-Israeli Wars, there are nothing but division and brigade-level battles, which bring the average down.

The interesting point to notice is that the average attacker strength in the 16th and 17th centuries is lower than the average defender strength. Later it is higher. This may be due to anomalies in our data selection.

Average losses by war (see Figure 3) suffer from the same battle definition problem.

Percent losses per day (see Figure 4) is a useful comparison through the end of the 19th century. After that, the battles get longer and the definition of the duration of a battle is up to the analyst. Note the very clear and definite downward pattern of percent losses per day from the Napoleonic Wars through the Arab-Israeli Wars. Here is a very clear indication of the effects of dispersion. It would appear that from the 1600s to the 1800s the pattern was effectively constant and level, then declined in a very systematic pattern. This partially contradicts Trevor Dupuy’s writing and graphs (see Figure 1). It does appear that after this period of decline the percent losses per day have settled at a new, much lower plateau. Percent losses per day by war is attached.

Looking at the actual subject of this article, the dispersion of people (measured in people per kilometer of front) remained relatively constant from 1600 through the American Civil War (see Figure 5). Trevor Dupuy defined dispersion as the number of people in a box-like area. Unfortunately, I do not know how to measure that. I can clearly identify the left and right of a unit, but it is more difficult to tell how deep it is. Furthermore, density of occupation of this box is far from uniform, with a very forward bias. By the same token, fire delivered into this box is also not uniform, with a very forward bias. Therefore, I am quite comfortable measuring dispersion based upon unit frontage, more so than front multiplied by depth.

Note that when comparing the Napoleonic Wars to the American Civil War, the dispersion remains about the same. Yet, if you look at the average casualties (Figure 3) and the average percent casualties per day (Figure 4), it is clear that the rate of casualty accumulation is lower in the American Civil War (this again partially contradicts Dupuy’s writings). There is no question that with the advent of the Minié ball, allowing for rapid-fire rifled muskets, the ability to deliver accurate firepower increased.

As you will also note, the average people per linear kilometer between WWI and WWII differs by a factor of a little over 1.5 to 1. Yet the actual difference in casualties (see Figure 4) is much greater. One can simply postulate that the difference is the change in dispersion squared (basically Dupuy’s approach), but a 1.5-fold change in dispersion would predict only a 2.25-fold (1.5²) change in casualties; this does not seem to explain the complete difference, especially the difference between the Napoleonic Wars and the Civil War.

Instead of discussing dispersion, we should be discussing “casualty reduction efforts.” This basically consists of three elements:

  • Dispersion (D)
  • Increased engagement ranges (R)
  • More individual use of cover and concealment (C&C).

These three factors together result in a reduced chance to hit. They are also partially interrelated: one cannot make more individual use of cover and concealment unless one is allowed to disperse. Therefore, the need for cover and concealment increases the desire to disperse, and the process of dispersing allows one to use more cover and concealment.

Command and control is integrated into this construct as something that allows dispersion, while dispersion in turn creates the need for better command and control. Therefore, improved command and control in this construct does not operate as a force modifier; rather, it enables a force to disperse.

Intelligence becomes more necessary as the opposing forces use cover and concealment and the ranges of engagement increase. By the same token, improved intelligence allows you to increase the range of engagement and forces the enemy to use better concealment.

This whole construct could be represented by the diagram at the top of the next page.

Now, I may have said the obvious here, but this construct is probably provable in each individual element, and the overall outcome is measurable. Each individual connection between these boxes may also be measurable.

Therefore, to measure the effects of reduced chance to hit, one would need to measure the following formula (assuming these formulae are close to being correct):

(K * ΔD) + (K * ΔC&C) + (K * ΔR) = H

(K * ΔC2) = ΔD

(K * ΔD) = ΔC&C

(K * ΔW) + (K * ΔI) = ΔR

K = a constant
Δ = the change in… (“delta”)
D = Dispersion
C&C = Cover & Concealment
R = Engagement Range
W = Weapon’s Characteristics
H = the chance to hit
C2 = Command and control
I = Intelligence or ability to observe
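To make the construct concrete, here is a minimal sketch in Python of how these formulae chain together. All the K constants, their signs, and the example inputs are placeholder assumptions (the article explicitly leaves the constants to be determined); the point is only to show how a change in C2, weapons, or intelligence propagates through to the chance to hit.

```python
# A minimal sketch of the "casualty reduction" construct above.
# All K constants are hypothetical placeholders. K_D, K_C, and K_R are
# taken as negative here, since dispersion, cover, and range all REDUCE
# the chance to hit.

def delta_range(d_weapons, d_intel, k_w=1.0, k_i=1.0):
    """(K * ΔW) + (K * ΔI) = ΔR"""
    return k_w * d_weapons + k_i * d_intel

def delta_dispersion(d_c2, k=1.0):
    """(K * ΔC2) = ΔD -- better command and control enables dispersion."""
    return k * d_c2

def delta_cover(d_dispersion, k=1.0):
    """(K * ΔD) = ΔC&C -- dispersing permits more use of cover."""
    return k * d_dispersion

def delta_hit_chance(d_d, d_cc, d_r, k_d=-1.0, k_c=-1.0, k_r=-1.0):
    """(K * ΔD) + (K * ΔC&C) + (K * ΔR) = H"""
    return k_d * d_d + k_c * d_cc + k_r * d_r

# Example: an improvement in command and control cascades through the chain.
d_d = delta_dispersion(d_c2=1.0)
d_cc = delta_cover(d_d)
d_r = delta_range(d_weapons=0.5, d_intel=0.5)
print(delta_hit_chance(d_d, d_cc, d_r))  # negative: the chance to hit falls
```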

Also, certain actions lead to a desire for certain technological and system improvements. This includes the effect of increased dispersion leading to a need for better C2 and increased range leading to a need for better intelligence. I am not sure these are measurable.

I have also shown in the diagram how the enemy impacts upon this. There is also an interrelated mirror image of this construct for the other side.

I am focusing on this because l really want to come up with some means of measuring the effects of a “revolution in warfare.” The last 400 years of human history have given us more revolutionary inventions impacting war than we can reasonably expect to see in the next 100 years. In particular, I would like to measure the impact of increased weapon accuracy, improved intelligence, and improved C2 on combat.

For the purposes of the TNDM, I would very specifically like to work out an attrition multiplier for battles before WWII (and theoretically after WWII) based upon reduced chance to be hit (“dispersion”). For example, Dave Bongard is currently using an attrition multiplier of 4 for the WWI engagements that he is running for the battalion-level validation database.[3] No one can point to a piece of paper saying this is the value that should be used. Dave picked this value based upon experience and familiarity with the period.

I have also attached Average Losses per Kilometer of Front by War (see Figure 6 above), and a summary chart showing the two on the same chart (see Figure 7 above).

The values from these charts are:

The TNDM sets the WWII dispersion factor at 3,000 (which I gather translates into 30,000 men per square kilometer). The above data show a linear dispersion per kilometer of 2,992 men, so this number parallels Dupuy’s figures.

The final chart I have included is the Ratio of Strength and Losses per Kilometer of Front by War (Figure 8). Each line on the bar graph measures the average ratio of strength over casualties for either the attacker or defender. Being a ratio, unusual outcomes produced some extremely high values. I took the liberty of taking out six data points because they appeared unusually lop-sided. Three of these points are from the English Civil War and were way out of line with everything else. These were the three Scottish battles where a small group of mostly sword-armed troops defeated a “modern” army. Also, Walcourt (1689), Front Royal (1862), and Calbritto (1943) were removed. I have also included the same chart, except by century (Figure 9).
Again, one sees a consistency in results over 300+ years of war, in this case going all the way through WWI, and then an entirely different pattern with WWII and the Arab-Israeli Wars.

A very tentative set of conclusions from all this is:

  1. Dispersion has been relatively constant and driven by factors other than firepower from 1600-1815.
  2. Since the Napoleonic Wars, units have increasingly dispersed (found ways to reduce their chance to be hit) in response to increased lethality of weapons.
  3. As a result of this increased dispersion, casualties in a given space have declined.
  4. This decline in casualties per area has been roughly proportional to strength per area from 1600 through WWI. Starting with WWII, it appears that people have dispersed faster than weapons lethality, and this trend has continued.
  5. In effect, people dispersed in direct relation to increased firepower from 1815 through 1920, and then after that time dispersed faster than the increase in lethality.
  6. It appears that since WWII, people have gone back to dispersing (reducing their chance to be hit) at the same rate that firepower is increasing.
  7. Effectively, there are four patterns of casualties in modern war:

Period 1 (1600 – 1815): Period of Stability

  • Short battles
  • Short frontages
  • High attrition per day
  • Constant dispersion
  • Dispersion decreasing slightly after late 1700s
  • Attrition decreasing slightly after mid-1700s.

Period 2 (1816 – 1905): Period of Adjustment

  • Longer battles
  • Longer frontages
  • Lower attrition per day
  • Increasing dispersion
  • Dispersion increasing slightly faster than lethality

Period 3 (1912 – 1920): Period of Transition

  • Long Battles
  • Continuous Frontages
  • Lower attrition per day
  • Increasing dispersion
  • Relative lethality per kilometer similar to past, but lower
  • Dispersion increasing slightly faster than lethality

Period 4 (1937 – present): Modern Warfare

  • Long Battles
  • Continuous Frontages
  • Low Attrition per day
  • High dispersion (perhaps constant?)
  • Relative lethality per kilometer much lower than in the past
  • Dispersion increased much faster than lethality going into the period.
  • Dispersion increased at the same rate as lethality within the period.

So the question is whether warfare of the next 50 years will see a new “period of adjustment,” where the rate of dispersion (and other factors) adjusts in direct proportion to increased lethality, or will there be a significant change in the nature of war?

Note that when I use the word “dispersion” above, I often mean “reduced chance to be hit,” which consists of dispersion, increased engagement ranges, and use of cover & concealment.

One of the reasons I wandered into this subject was to see if the TNDM can be used for predicting combat before WWII. I then spent the next few days attempting to find some correlation between dispersion and casualties. Using the data on historical dispersion provided above, I created a mathematical formulation and tested it against the actual historical data points, and could not get any type of fit.

I then looked at the length of battles over time, at one-day battles, and attempted to find a pattern. I could find none. I also looked at other permutations, but did not keep a record of my attempts. I then looked through the work done by Dean Hartley (Oakridge) with the LWDB and called Paul Davis (RAND) to see if anyone had found a correlation between dispersion and casualties, and they had not noted any.

It became clear to me that if there is any such correlation, it is buried so deep in the data that it cannot be found by any casual search. I suspect that I can find a mathematical correlation between weapon lethality, reduced chance to hit (including dispersion), and casualties. This would require some improvement to the data, some systematic measure of weapons lethality, and some serious regression analysis. I unfortunately cannot pursue this at this time.
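For illustration, the kind of analysis described above might look something like the following sketch, assuming a hypothetical per-battle dataset with a weapons lethality index, linear dispersion (men per kilometer of front), and percent casualties per day. The column definitions, the log-linear model form, and the synthetic data are all assumptions for demonstration, not the author’s method.

```python
# A sketch of the regression the author describes: fitting casualty rates
# against weapons lethality and dispersion. The data here are synthetic
# placeholders standing in for a cleaned-up LWDB-style dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 605  # size of the LWDB version cited in the article

lethality = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # placeholder index
dispersion = rng.lognormal(mean=8.0, sigma=1.0, size=n)   # ~3,000 men per km
casualty_rate = lethality**0.8 / dispersion**1.1 * rng.lognormal(0.0, 0.5, n)

# Fit log(casualty rate) = b0 + b1*log(lethality) + b2*log(dispersion)
X = np.column_stack([np.ones(n), np.log(lethality), np.log(dispersion)])
y = np.log(casualty_rate)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept={coef[0]:.2f}, lethality exponent={coef[1]:.2f}, "
      f"dispersion exponent={coef[2]:.2f}")
```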

Finally, for reference, I have attached two charts showing the duration of the battles in the LWDB in days (Figure 10, Duration of Battles Over Time, and Figure 11, A Count of the Duration of Battles by War).

NOTES

[1] The Tactical Numerical Deterministic Model, a combat model developed by Trevor Dupuy in 1990-1991 as the follow-up to his Quantified Judgment Model. Dr. James G. Taylor and Jose Perez also contributed to the TNDM’s development.

[2] TDI’s Land Warfare Database (LWDB) was a revised version of a database created by the Historical Evaluation Research Organization (HERO) for the then-U.S. Army Concepts and Analysis Agency (now known as the U.S. Army Center for Army Analysis (CAA)) in 1984. Since the original publication of this article, TDI expanded and revised the data into a suite of databases.

[3] This matter is discussed in Christopher A. Lawrence, “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, April 1997, pp. 40-50.

U.S. Army Updates Draft Multi-Domain Battle Operating Concept

The U.S. Army Training and Doctrine Command has released a revised draft version of its Multi-Domain Battle operating concept, titled “Multi-Domain Battle: Evolution of Combined Arms for the 21st Century, 2025-2040.” Clearly a work in progress, the document is listed as version 1.0, dated October 2017, and as a draft not for implementation. Sydney J. Freedberg, Jr. has an excellent run-down on the revision at Breaking Defense.

The update is the result of the initial round of work between the U.S. Army and U.S. Air Force to redefine the scope of the multi-domain battlespace for the Joint Force. More work will be needed to refine the concept, but it shows remarkable cooperation in forging a common warfighting perspective between services long-noted for their independent thinking.

On a related note, Albert Palazzo, an Australian defense thinker and one of the early contributors to the Multi-Domain Battle concept, has published the first in a series of articles at The Strategy Bridge offering constructive criticism of the U.S. military’s approach to defining the concept. Palazzo warns that the U.S. may be placing too much emphasis on countering potential Russian and Chinese capabilities and not enough on the broader implications of long-range fires with global reach.

What difference can it make if those designing Multi-Domain Battle are acting on possibly the wrong threat diagnosis? Designing a solution for a misdiagnosed problem can result in the inculcation of a way of war unsuited for the wars of the future. One is reminded of the French Army during the interwar period. No one can accuse the French of not thinking seriously about war during these years, but, in the doctrine of the methodical battle, they got it wrong and misread the opportunities presented by mechanisation. There were many factors contributing to France’s defeat, but at their core was a misinterpretation of the art of the possible and a singular focus on a particular way of war. Shaping Multi-Domain Battle for the wrong problem may see the United States similarly sow the seeds for a military disaster that is avoidable.

He suggests that it would be wise for U.S. doctrine writers to take a more considered look at potential implications before venturing too far ahead with specific solutions.

A Return To Big Guns In Future Naval Warfare?

The first shot of the U.S. Navy Office of Naval Research’s (ONR) electromagnetic railgun, conducted at Naval Surface Warfare Center, Dahlgren Division in Virginia on 17 November 2016. [ONR’s Official YouTube Page]

Defense One’s Patrick Tucker reported last month that the U.S. Navy Office of Naval Research (ONR) had achieved a breakthrough in capacitor design, an important step toward fielding electromagnetic railguns on future warships. The new capacitors are compact yet capable of delivering 20-megajoule bursts of electricity. ONR plans to increase this to 32 megajoules by next year.

Railguns use such bursts of energy to drive powerful electromagnets capable of accelerating projectiles to hypersonic speeds. ONR’s goal is to produce railguns capable of firing 10 rounds per minute to a range of 100 miles.
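As a rough sanity check on “hypersonic,” the kinetic energy formula gives a sense of the muzzle velocities involved. The projectile mass below is a notional assumption (railgun rounds in this class are often cited at roughly 10 kg), and losses between the capacitor bank and the projectile are ignored:

```python
# Back-of-the-envelope muzzle velocity from launch energy, assuming a
# notional 10 kg projectile and ignoring conversion losses.
import math

energy_j = 32e6   # the 32-megajoule burst cited above
mass_kg = 10.0    # assumed projectile mass (not from the article)
mach_1 = 343.0    # speed of sound at sea level, m/s

velocity = math.sqrt(2 * energy_j / mass_kg)  # from E = (1/2) * m * v^2
print(f"~{velocity:.0f} m/s, about Mach {velocity / mach_1:.1f}")
# ~2,530 m/s, about Mach 7.4 -- comfortably hypersonic (Mach 5+).
```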

The Navy initiated railgun development in 2005, intending to mount them on the new Zumwalt class destroyers. Since then, the production run of Zumwalts was cut from 32 to three. With the railguns still under development, the Navy has mounted 155mm cannons on them in the meantime.

Development of the railgun and a suitable naval powerplant continues. While the Zumwalts can generate 78 megawatts of power and the Navy’s current railgun design only needs 25 to fire, the Navy still wants advanced capacitors capable of powering 150-kilowatt lasers for drone defense, as well as new generations of radars and electronic warfare systems.

While railguns are a huge improvement over chemically-propelled naval guns, there are still doubts about their effectiveness in combat compared to guided anti-ship missiles. Railgun projectiles are currently unguided, and the Navy’s existing design delivers less destructive power than the 1,000-pound warhead on the new Long Range Anti-Ship Missile (LRASM).

The U.S. Navy remains committed to railgun development nevertheless. For one idea of the role railguns and the U.S.S. Zumwalt might play in a future war, take a look at P. W. Singer and August Cole’s Ghost Fleet: A Novel of the Next World War, which came out in 2015.

Combat Readiness And The U.S. Army’s “Identity Crisis”

Servicemen of the U.S. Army’s 173rd Airborne Brigade Combat Team (standing) train Ukrainian National Guard members during a joint military exercise called “Fearless Guardian 2015,” at the International Peacekeeping and Security Center near the western village of Starychy, Ukraine, on May 7, 2015. [Newsweek]

Last week, Wesley Morgan reported in POLITICO about an internal readiness study recently conducted by the U.S. Army 173rd Airborne Infantry Brigade Combat Team. As U.S. European Command’s only airborne unit, the 173rd Airborne Brigade has been participating in exercises in the Baltic States and the Ukraine since 2014 to demonstrate the North Atlantic Treaty Organization’s (NATO) resolve to counter potential Russian aggression in Eastern Europe.

The experience the brigade gained working with Baltic and particularly Ukrainian military units that had engaged with Russian and Russian-backed Ukrainian Separatist forces has been sobering. Colonel Gregory Anderson, the 173rd Airborne Brigade commander, commissioned the study as a result. “The lessons we learned from our Ukrainian partners were substantial. It was a real eye-opener on the absolute need to look at ourselves critically,” he told POLITICO.

The study candidly assessed that the 173rd Airborne Brigade currently lacked “essential capabilities needed to accomplish its mission effectively and with decisive speed” against near-peer adversaries or sophisticated non-state actors. Among the capability gaps the study cited were

  • the lack of air defense and electronic warfare units, and over-reliance on satellite communications and Global Positioning System (GPS) navigation;
  • the scarcity of simple countermeasures, such as camouflage nets to hide vehicles from enemy helicopters or drones, which are “hard-to-find luxuries for tactical units”;
  • the urgent need to replace up-armored Humvees with the forthcoming Ground Mobility Vehicle, a much lighter-weight, more mobile truck; and
  • the likewise urgent need to field the projected Mobile Protected Firepower armored vehicle companies the U.S. Army is planning to add to each infantry brigade combat team.

The report also stressed the vulnerability of the brigade to demonstrated Russian electronic warfare capabilities, which would likely deprive it of GPS navigation and targeting and satellite communications in combat. While the brigade has been purchasing electronic warfare gear of its own from over-the-counter suppliers, it would need additional specialized personnel to use the equipment.

As analyst Adrian Bonenberger commented, “The report is framed as being about the 173rd, but it’s really about more than the 173rd. It’s about what the Army needs to do… If Russia uses electronic warfare to jam the brigade’s artillery, and its anti-tank weapons can’t penetrate any of the Russian armor, and they’re able to confuse and disrupt and quickly overwhelm those paratroopers, we could be in for a long war.”

While the report is a wake-up call with regard to the combat readiness in the short-term, it also pointedly demonstrates the complexity of the strategic “identity crisis” that faces the U.S. Army in general. Many of the 173rd Airborne Brigade’s current challenges can be traced directly to the previous decade and a half of deployments conducting wide area security missions during counterinsurgency operations in Iraq and Afghanistan. The brigade’s perceived shortcomings for combined arms maneuver missions are either logical adaptations to the demands of counterinsurgency warfare or capabilities that atrophied through disuse.

The Army’s specific lack of readiness to wage combined arms maneuver warfare against potential peer or near-peer opponents in Europe can be remedied given time and resourcing in the short-term. This will not solve the long-term strategic conundrum the Army faces in needing to be prepared to fight conventional and irregular conflicts at the same time, however. Unless the U.S. is willing to 1) increase defense spending to balance force structure to the demands of foreign and military policy objectives, or 2) realign foreign and military policy goals with the available force structure, it will have to resort to patching up short-term readiness issues as best as possible and continue to muddle through. Given the current state of U.S. domestic politics, muddling through will likely be the default option unless or until the consequences of doing so force a change.

Fifth Generation Deterrence

“Deterrence is the art of producing in the mind of the enemy… the FEAR to attack. And so, … the Doomsday machine is terrifying and simple to understand… and completely credible and convincing.” – Dr. Strangelove.

In a previous post, we looked at some aspects of the nuclear balance of power. In this post, we will consider some aspects of conventional deterrence. Ironically, Chris Lawrence was cleaning out a box in his office (as posted on this blog) which contained an important article for this debate: “The Case for More Effective, Less Expensive Weapons Systems: What ‘Quality Versus Quantity’ Issue?” by none other than Pierre M. Sprey, published in 1982 and available here.

In comparing the F-15 and F-16, Sprey identifies four principal effectiveness characteristics that contribute to victory in air-to-air combat:

  1. Achieving surprise bounces and avoiding being surprised;
  2. Out-numbering the enemy in the air;
  3. Out-maneuvering the enemy to reach firing position (when surprise fails);
  4. Achieving reliable kills within the brief firing opportunities presented by combat.

“Surprise is the first because, in every air war since WWI, somewhere between 65% and 85% of all fighters shot down were unaware of their attacker.” Sprey argues that the F-16 is superior to the F-15 due to its smaller size and the fact that it smokes much less, both aspects that are clearly Within-Visual-Range (WVR) combat considerations. Further, his discussion of Beyond-Visual-Range (BVR) combat is dismissive:

The F-15 has an apparent advantage inasmuch as it carries the Sparrow radar missile. On closer examination, this proves to be little or no advantage: in Vietnam, the Sparrow had a kill rate of .08 to .10, less than one third that of the AIM-9D/G — and the new models of the Sparrow do not appear to have corrected the major reasons for this disappointing performance; even worse, locking-on with the Sparrow destroys surprise because of the distinctive and powerful radar signature involved.

Sprey was right to criticize the performance of the early radar-guided missiles. From “Trends in Air-to-Air Combat: Implications for Future Air Superiority,” page 10:

From 1965 through 1968, during Operation Rolling Thunder, AIM-7 Sparrow missiles succeeded in downing their targets only 8 percent of the time and AIM-9 Sidewinders only 15 percent of the time. Pre-conflict testing indicated expected success rates of 71 and 65 percent respectively. Despite these problems, AAMs offered advantages over guns and accounted for the vast majority of U.S. air-to-air victories throughout the war.
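A quick calculation shows how punishing those observed kill rates were, even with multiple shots per engagement. This is a simplified illustration assuming independent shots (real engagement outcomes are correlated), not data from the report:

```python
# Probability that at least one missile in a salvo kills, given a
# per-shot kill probability pk and assuming independent shots.
def p_kill_salvo(pk: float, shots: int) -> float:
    return 1 - (1 - pk) ** shots

for name, pk in [("AIM-7 (observed, 8%)", 0.08),
                 ("AIM-9 (observed, 15%)", 0.15),
                 ("AIM-7 (pre-war estimate, 71%)", 0.71)]:
    print(f"{name}: 2-shot salvo -> {p_kill_salvo(pk, 2):.0%}")
# Observed rates mean even two-missile salvos failed most of the time,
# which is why Sprey discounted BVR shots in 1982.
```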

Sprey seemed to miss the fact that the radar-guided missile that supported BVR air combat was not something in the far distant future, but an evolution of radar and missile technology. Even in the 1980s, the share of air-to-air combat victories by BVR missiles was on the rise, and since the 1990s, it has become the most common way to shoot down an enemy aircraft.

In an Aviation Week podcast in July of this year, retired Marine Lt. Col. David Berke (also previously quoted on this blog) and Pierre Sprey debated the F-35. Therein, Sprey offers a formulaic definition of air power, as created by force and effectiveness, with force being a function of cost, reliability, and how often the aircraft can fly per day (sortie generation rate?). “To create air power, you have to put a bunch of airplanes in the sky over the enemy. You can’t do it with a tiny handful, even if they are like unbelievably good. If you send six aircraft to China, they could care less what they are … F-22 deployments are now six aircraft.”
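Sprey’s calculus can be made concrete with a little arithmetic. The functional form and every number below are illustrative assumptions (he gives no explicit formula in the podcast); the sketch only shows why unit cost dominates his notion of force:

```python
# A hedged sketch of Sprey's cost-driven force argument. The formula and
# all inputs are assumptions for illustration, not figures from the podcast.
def sorties_over_enemy(budget, unit_cost, availability, sorties_per_day):
    """Daily sorties a fixed procurement budget can put over the enemy."""
    fleet_size = budget / unit_cost
    return fleet_size * availability * sorties_per_day

budget = 10e9  # hypothetical $10B buy
cheap = sorties_over_enemy(budget, unit_cost=30e6,
                           availability=0.8, sorties_per_day=2.0)
exquisite = sorties_over_enemy(budget, unit_cost=150e6,
                               availability=0.5, sorties_per_day=0.8)
print(f"cheap fighter: {cheap:.0f} sorties/day; exquisite: {exquisite:.0f}")
# ~533 vs. ~27 sorties/day: by this arithmetic the cheap force flies
# roughly 20x as often, which is the heart of Sprey's argument.
```

Note what the formula leaves out: any term for per-sortie effectiveness, which is precisely where the counter-argument lives.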

Berke counters with the idea he expressed in his initial conversation with Aviation Week (as analyzed on this blog): that information and situational awareness are by far the most important factors in aerial warfare. This stems from the advantage of surprise, which was Sprey’s first criterion in 1982 and remains a critical factor in warfare to this day. This reminds me a bit of Disraeli’s truism of “lies, damn lies and statistics”: pick the metrics that tell your story, rather than objectively looking at the data.

Critics beyond Mr. Sprey have said that high technology weapons like the F-22 and the F-35 are irrelevant for America’s wars; “the [F-22] was not relevant to the military’s operations in places like Iraq, Afghanistan and Libya — at least according to then-secretary of defense Robert Gates.” Indeed, according to the Washington Post, “Gates called the $65 billion fleet a ‘niche silver-bullet solution’ to a major aerial war threat that remains distant. … and has promised to urge President Obama to veto the military spending bill if the full Senate retains F-22 funding.”

The current conflict in Syria against ISIS has, since the Russian deployment, featured crowded and contested airspace, as evidenced by the shoot-down of a Russian Air Force Su-24 by a NATO (Turkish) F-16 (Wikipedia), as reported on this blog. Indeed, ironically for Mr. Sprey’s analysis of the relative values of the AIM-9 vs. the AIM-7 missiles, as again reported by this blog:

[T]he U.S. Navy F/A-18E Super Hornet locked onto a Su-22 Fitter at a range of 1.5 miles. It fired an AIM-9X heat-seeking Sidewinder missile at it. The Syrian pilot was able to send off flares to draw the missile away from the Su-22. The AIM-9X is not supposed to be so easily distracted. They had to shoot down the Su-22 with a radar-guided AMRAAM missile.

For the record, the AIM-7 was a direct technical predecessor of the AIM-120 AMRAAM. We can perhaps conclude that having more than one type of weapon is useful, especially as other air power nations are always trying to improve their countermeasures, and this incident shows that they can do so effectively. Of course, more observations are necessary for statistical proof, but since air combat has become so rare since the end of the Cold War, the opportunity to learn the lesson and improve the AIM-9X should not be squandered.

USAF Air Combat Dominance as Deterrent

Hence to fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting. – Sun Tzu

The admonition to win without fighting is indeed a timeless principle of warfare, and it is clearly illustrated by this report on the performance of the F-22 in the war against ISIS, over the crowded airspace in Syria, from Aviation Week on June 4, 2017. I’ve quoted at length and applied emphasis.

Shell, a U.S. Air Force lieutenant colonel and Raptor squadron commander who spoke on the condition that Aviation Week identify him only by his call sign, and his squadron of stealth F-22 Lockheed Martin Raptors had a critical job to do: de-conflict coalition operations over Syria with an irate Russia.

… one of the most critical missions the F-22 conducts in the skies over Syria, particularly in the weeks following the April 6 Tomahawk strike, is de-confliction between coalition and non-coalition aircraft, says Shell. … the stealth F-22’s ability to evade detection gives it a unique advantage in getting non-coalition players to cooperate, says Shell. 

‘It is easier to bring air dominance to bear if you know where the other aircraft are that you are trying to influence, and they don’t know where you are,’ says Shell. ‘When other airplanes don’t know where you are, their sense of comfort goes down, so they have a tendency to comply more.’

… U.S. and non-coalition aircraft were still communicating directly, over an internationally recognized, unsecure frequency often used for emergencies known as ‘Guard,’  says Shell. His F-22s acted as a kind of quarterback, using high-fidelity sensors to determine the positions of all the actors on the battlefield, directing non-coalition aircraft where to fly and asking them over the Guard frequency to move out of the way. 

The Raptors were able to fly in contested areas, in range of surface-to-air missile systems and fighters, without the non-coalition players knowing their exact positions, Shell says. This allowed them to establish air superiority—giving coalition forces freedom of movement in the air and on the ground—and a credible deterrent.

Far from being a silver-bullet solution for a distant aerial war, America’s stealth fighters are providing credible deterrence on the front lines today. They have achieved, in some cases, the ultimate goal of winning without fighting, by exploiting the advantage of surprise. The right question might be: how many are required for this mission, given the enormous costs of fifth-generation fighters? (More on this later.) As a quarterback, the F-22 can support many allied units as part of a larger team.

Giving credit where it is due, Mr. Sprey rightly stated in his Aviation Week interview that “cost is part of the force you can bring to bear upon the enemy.” His mechanism for computing air power in 2017, however, seems to ignore the most important aspect of air power since it first emerged in World War I: surprise. His focus on the lightweight, single-purpose air-to-air fighter, which seems to shun even available, proven technology, appears dogmatic.

Tanks With Frickin’ Laser Beams On Their Heads

Portent Of The Future: This Mobile High-Energy Laser-equipped Stryker was evaluated during the 2017 Maneuver Fires Integrated Experiment at Fort Sill, Oklahoma. The MEHEL can shoot a drone out of the sky using a 5kW laser. (Photo Credit: C. Todd Lopez)

As the U.S. Army ponders its Multi-Domain Battle concept for future warfare, it is also considering what types of weapons it will need to conduct it. Among these is a replacement for the venerable M1 Abrams Main Battle Tank (MBT), which is now 40 years old. Recent trends in combat are leading some to postulate a next-generation MBT that is lighter and more maneuverable, but equipped with a variety of new defensive capabilities to make it more survivable against modern anti-tank weapons. These include electronic jamming and anti-missile missiles, collectively referred to as Active Protection Systems, as well as unmanned turrets. Manned vehicles will be augmented with unmanned ground vehicles. The Army is also exploring new advanced composite armor and nanotechnology.

Also under consideration are replacements for the traditional MBT long gun, including high-power lasers and railguns. Some of these could be powered by hydrogen power cells and biofuels.

As the U.S. looks toward lighter armored vehicles, some countries appear to be going in the other direction. Both Russia and Israel are developing beefed-up versions of existing vehicles designed specifically for fighting in urban environments.

The strategic demands on U.S. ground combat forces don’t allow for the luxury of fielding larger combat vehicles that complicate the challenge of rapid deployment to face global threats. Even as the historical trend toward increasing lethality and greater dispersion on the battlefield continues, the U.S. may have little choice other than to rely on technological innovation to balance the evolving capabilities of potential adversaries.

More On The U.S. Army’s ‘Identity Crisis’

The new edition of the U.S. Army War College’s quarterly journal Parameters contains additional commentary on the question of whether the Army should be optimizing to wage combined arms maneuver warfare or wide-area security/Security Force Assistance.

Conrad Crane, the chief of historical services at the U.S. Army Heritage and Education Center offers some comments and criticism of an article by Gates Brown, “The Army’s Identity Crisis” in the Winter 2016–17 issue of Parameters. Brown then responds to Crane’s comments.

Sketching Out Multi-Domain Battle Operational Doctrine

Small Wars Journal has published an insightful essay by U.S. Army Major Amos Fox, “Multi-Domain Battle: A Perspective on the Salient Features of an Emerging Operational Doctrine.” Fox is a recent graduate of the Army’s School of Advanced Military Studies (SAMS) and the author of several excellent pieces assessing Russian military doctrine and its application in the Ukraine.

Drawing upon an array of sources, including recent Russian military operations, preliminary conceptualizations of MDB, Carl von Clausewitz, J.F.C. Fuller, and maneuver warfare theory, Fox takes a crack at shaping the parameters of a doctrine for Multi-Domain Battle (MDB) operations on land. He begins by summarizing how MDB will connect operations to strategy.

Current proponents suggest that [MDB] will occur against peer competitors in contested environments, providing the US Army and its joint partners with a much thinner margin of victory than in the recent past. As such, US forces should look to create zones of proximal dominance to enable the active pursuit of objectives and end states, and that dislocation is the key to defeating an adversary capable of multi-domain operations.

The essence of MDB will be a constant struggle for battlespace dominance, which will be “fleeting, fragile, and prone to shock or surprise.” Achieving temporary dominance only establishes the pre-conditions necessary for closing with and destroying enemy forces, however.

Fox suggests envisioning the cross-domain, combined arms, and individual arms of ground forces (i.e. direct fire weapons, indirect fire weapons, cyber, electronic, information, reconnaissance, et cetera) as “zones of proximal dominance” or “as an orb of power which radiates from a central position.” Long-range weapons perform a protective function and form the outer layers of the zone, while shorter-range weapons constitute the fighting functions.

[O]ne must understand that in multi-domain battle they must first strip away, or dislocate, the protective layers of an enemy’s force in order to destroy its strength, or its inner core. In the cross-domain environment, an enemy’s outer core is its cross-domain and joint capabilities. Therefore, the more of the enemy’s outer core that can be cleaved away or neutralized, the more success friendly forces will have in defeating the enemy’s main fighting force. Dislocating the outer layers and destroying the inner core will, in essence, defeat the cross-domain enemy.

Dislocation is a concept Fox adopts from maneuver warfare theory as “a critical component of defeating an enemy with cross-domain capabilities because it denies the enemy access to its tools, renders those tools irrelevant, or forces the enemy into environments in which those tools are ill-disposed.”

Fox’s perspective is well informed and logical, but exploration of the implications of MDB is in its earliest stages. The essay is a fascinating and highly recommended read.