Third Offset Strategy

A2/AD and JAM-GC in the Western Pacific

One of the primary scenarios the Third Offset Strategy is intended to address is a potential military conflict between the United States and the People’s Republic of China over the sovereignty of the Republic of China (Taiwan) and territorial control of the South and East China Seas. As surveyed by James Holmes in a wonderful Mahanian geopolitical analysis, the South China Sea is a semi-enclosed sea at the intersection between East Asia and the Indian Ocean region, bounded by strategic gaps and choke points between island chains, atolls, and reefs, and riven by competing territorial claims among rising and established regional powers.

China’s current policy appears to be to develop the ability to assert military control over the Western Pacific and to deny U.S. armed forces access to the area in case of overt conflict. The strategic dimension is framed by China’s pursuit of anti-access/area denial (A2/AD) capabilities, enabled by the development of sophisticated long-range strike, sensor, guidance, and other military technologies, and by the use of asymmetrical warfare operational concepts such as psychological and information operations and “lawfare” (the so-called “three warfares”). China is also advancing its interests in decidedly low-tech ways, such as creating artificial islands on disputed reefs through dredging.

The current U.S. approach to thwarting China’s A2/AD strategy is the Joint Concept for Access and Maneuver in the Global Commons (JAM-GC, aka “Jam Gee Cee”), or the concept formerly known as AirSea Battle. As described by Stephen Biddle and Ivan Oelrich, the current iteration of JAM-GC

…is designed to preserve U.S. access to the Western Pacific by combining passive defenses against Chinese missile attack with an emphasis on offensive action to destroy or disable the forces that China would use to establish A2/AD. This offensive action would use “cross-domain synergy” among U.S. space, cyber, air, and maritime forces (hence the moniker “AirSea”) to blind or suppress Chinese sensors. The heart of the concept, however, lies in physically destroying the Chinese weapons and infrastructure that underpin A2/AD.

The brute, counterforce character of JAM-GC provides the logic behind proposals for new long-range precision strike weapons such as the Air Force’s stealthy Long Range Strike-Bomber (LRS-B) program, recently designated the B-21 Raider.

The JAM-GC concept has not yet been finalized and continues to evolve. Both the A2/AD construct and the premises behind JAM-GC are being challenged. As Biddle and Oelrich conclude in their detailed analysis of strategic and military trends in the region, it is not at all clear that China’s A2/AD approach will actually achieve its goal.

[W]e find that by 2040 China will not achieve military hegemony over the Western Pacific or anything close to it—even without ASB. A2/AD is giving air and maritime defenders increasing advantages, but those advantages are strongest over controlled landmasses and weaken over distance. As both sides deploy A2/AD, these capabilities will increasingly replace today’s U.S. command of the global commons not with Chinese hegemony but with a more differentiated pattern of control, with a U.S. sphere of influence around allied landmasses, a Chinese sphere of influence over the Chinese mainland, and contested battlespace covering much of the South and East China Seas, wherein neither power enjoys wartime freedom of surface or air movement.

They also raise deeper concerns about JAM-GC’s emphasis on an aggressive counterforce posture. In an era of constrained defense spending, developing and acquiring the military capability to execute it could be costly. Moreover, long-range air and missile strikes against the Chinese mainland run the distinct risk of escalating a regional conflict into a general war between nuclear-armed opponents.

A recent RAND analysis echoed these conclusions. The adoption of counterforce strategies by both the U.S. and China would result in heavy military losses on both sides and make it difficult to keep a conflict from growing longer and broader. Although the RAND analysts foresee the U.S. prevailing in such a conflict, victory would not come quickly and the ramifications for both sides would be severe.

Dissatisfaction with these options and potential outcomes is partly what motivated the development of the Third Offset Strategy in the first place. It is not clear whether leveraging technological innovation can provide new operational capabilities that will enable successful solutions to these strategic dilemmas. What does seem apparent is that fresh thinking is needed.

 

Back To The Future: The Mobile Protected Firepower (MPF) Program

The MPF’s historical antecedent: the German Army’s 7.5 cm leichtes Infanteriegeschütz.

Historically, one of the challenges of modern combat has been providing responsive, on-call, direct fire support for infantry. The U.S. armed forces have traditionally excelled at providing fire support for their ground combat maneuver elements, but recent developments have apparently raised concern about whether this will continue to be the case in the future.

Case in point is the U.S. Army’s Mobile Protected Firepower (MPF) program. The MPF seems to reflect concern by the U.S. Army that future combat environments will inhibit the capabilities of heavy artillery and air support systems tasked with providing fire support for infantry units. As Breaking Defense describes it,

“Our near-peers have sought to catch up with us,” said Fort Benning commander Maj. Gen. Eric Wesley, using Pentagon code for China and Russia. These sophisticated nation-states — and countries buying their hardware, like Iran — are developing so-called Anti-Access/Area Denial (A2/AD): layered defenses of long-range sensors and missiles to keep US airpower and ships at a distance (anti-access), plus anti-tank weapons, mines, and roadside bombs to decimate ground troops who get close (area denial).

The Army’s Maneuver Center of Excellence at Ft. Benning, Georgia, is the proponent for development of a new lightly armored, tracked vehicle mounting a 105mm or 120mm gun. According to the National Interest, the goal of the MPF program is

… to provide a company of vehicles—which the Army adamantly does not want to refer to as light tanks—to brigades from the 82nd Airborne Division or 10th Mountain Division that can provide heavy fire support to those infantry units. The new vehicle, which is scheduled to enter into full-scale engineering and manufacturing development in 2019—with fielding tentatively scheduled for around 2022—would be similar in concept to the M551 Sheridan light tank. The Sheridan was operated by the Army’s airborne units until 1996, but was retired without replacement. (Emphasis added)

As Chris recently pointed out, General Dynamics Land Systems has developed a prototype it calls the Griffin. BAE Systems has also pitched its XM8 Armored Gun System, developed in the 1990s.

The development of a dedicated, direct fire support weapon for line infantry can be seen as something of an anachronism. During World War I, German infantrymen sought alternatives to relying on heavy artillery support that was under the control of higher headquarters and often slow or unresponsive to tactical situations on the battlefield. They developed an expedient called the “infantry gun” (Infanteriegeschütz) by stripping down captured Russian 76.2mm field guns for direct use against enemy infantry, fortifications, and machine guns. Other armies imitated the Germans, but between the wars the German Army was the only one to develop 75mm and 150mm wheeled guns of its own dedicated specifically to infantry combat support.

The Germans were also the first to develop versions mounted on tracked, armored chassis, called “assault guns” (Sturmgeschütz). During World War II, the Germans often pressed their lightly armored assault guns into duty as ersatz tanks to compensate for insufficient numbers of actual tanks. (The apparently irresistible lure to use anything that looks like a tank as a tank also afflicted the World War II U.S. tank destroyer, yielding results that dissatisfied all concerned.)

Other armies again copied the Germans during the war, but the assault gun concept was largely abandoned afterward. Both the U.S. and the Soviet Union developed vehicles intended to provide gunfire support for airborne infantry, but these were more aptly described as light tanks. The U.S. Army’s last light tank, the M551 Sheridan, was retired in 1996 and not replaced.

It appears that the development of new technology is leading the U.S. Army back to old ideas. Just don’t call them light tanks.

Are Long-Range Fires Changing The Character of Land Warfare?

Raytheon’s new Long-Range Precision Fires missile is deployed from a mobile launcher in this artist’s rendering. The new missile will allow the Army to fire two munitions from a single weapons pod, making it cost-effective and doubling the existing capacity. (Raytheon)

Has U.S. land warfighting capability been compromised by advances by potential adversaries in long-range artillery capabilities? Michael Jacobson and Robert H. Scales argue that this is the case in an article on War on the Rocks.

While the U.S. Army has made major advances by incorporating precision into artillery, the ability and opportunity to employ precision are premised on a world of low-intensity conflict. In high-intensity conflict defined by combined-arms maneuver, the employment of artillery based on a precise point on the ground becomes a much more difficult proposition, especially when the enemy commands large formations of moving, armored vehicles, as Russia does. The U.S. joint force has recognized this dilemma and compensates for it by employing superior air forces and deep-strike fires. But Russia has undertaken a comprehensive upgrade of not just its military technology but its doctrine. We should not be surprised that Russia’s goal in this endeavor is to offset U.S. advantages in air superiority and double-down on its traditional advantages in artillery and rocket mass, range, and destructive power.

Jacobson and Scales provide a list of relatively quick fixes they assert would restore U.S. superiority in long-range fires: change policy on the use of cluster munitions; upgrade the U.S. self-propelled howitzer inventory from short-barreled 39-caliber guns to long-barreled 52-caliber guns, incorporating improved propellants and rocket assistance to double their existing range; reevaluate restrictions on the forthcoming Long Range Precision Fires rocket system in light of Russian attitudes toward the Intermediate-Range Nuclear Forces Treaty; and rebuild divisional and field artillery units atrophied by a decade of counterinsurgency warfare.

Their assessment echoes similar comments made earlier this year by Lieutenant General H. R. McMaster, director of the U.S. Army’s Capabilities Integration Center. Another option for countering enemy artillery capabilities, McMaster suggested, was the employment of “cross-domain fires.” As he explained, “When an Army fires unit arrives somewhere, it should be able to do surface-to-air, surface-to-surface, and shore-to-ship capabilities.”

The notion of land-based fire elements engaging more than just other land or counter-air targets has given rise to a concept called “multi-domain battle.” Its proponents, Dr. Albert Palazzo of the Australian Army’s War Research Centre and Lieutenant Colonel David P. McLain III, Chief of the Integration and Operations Branch in the Joint and Army Concepts Division of the Army Capabilities Integration Center, argue (also at War on the Rocks) that

While Western forces have embraced jointness, traditional boundaries between land, sea, and air have still defined which service and which capability is tasked with a given mission. Multi-domain battle breaks down the traditional environmental boundaries between domains that have previously limited who does what where. The theater of operations, in this view, is a unitary whole. The most useful capability needs to get the mission no matter what domain it technically comes from. Newly emerging technologies will enable the land force to operate in ways that, in the past, have been limited by the boundaries of its domain. These technologies will give the land force the ability to dominate not just the land but also project power into and across the other domains.

Palazzo and McLain contend that future land warfare forces

…must be designed, equipped, and trained to gain and maintain advantage across all domains and to understand and respond to the requirements of the future operating environment… Multi-domain battle will create options and opportunities for the joint force, while imposing multiple dilemmas on the adversary. Through land-to-sea, land-to-air, land-to-land, land-to-space, and land-to-cyberspace fires and effects, land forces can deter, deny, and defeat the adversary. This will allow the joint commander to seize, retain, and exploit the initiative.

As an example of their concept, Palazzo and McLain cite a combined, joint operation from the Pacific Theater in World War II:

Just after dawn on September 4, 1943, Australian soldiers of the 9th Division came ashore near Lae, Papua in the Australian Army’s first major amphibious operation since Gallipoli. Supporting them were U.S. naval forces from VII Amphibious Force. The next day, the 503rd U.S. Parachute Regiment seized the airfield at Nadzab to the West of Lae, which allowed the follow-on landing of the 7th Australian Division.  The Japanese defenders offered some resistance on the land, token resistance in the air, and no resistance at sea. Terrain was the main obstacle to Lae’s capture.

From the beginning, the allied plan for Lae was a joint one. The allies were able to get their forces across the approaches to the enemy’s position, establish secure points of entry, build up strength, and defeat the enemy because they dominated the three domains of war relevant at the time — land, sea, and air.

The concept of multi-domain warfare seems like a logical conceptualization for integrating land-based weapons of increased range and effect into the sorts of near-term future conflicts envisioned by U.S. policy-makers and defense analysts. It comports fairly seamlessly with the precepts of the Third Offset Strategy.

However, as has been observed with the Third Offset Strategy, this raises questions about the role of long-range fires in conflicts that do not involve near-peer adversaries, such as counterinsurgencies. Is an emphasis on technological determinism reducing the capabilities of land combat units to just what they shoot? Is the ability to take and hold ground an anachronism in anti-access/area-denial environments? Do long-range fires obviate the relationship between fire and maneuver in modern combat tactics? If even infantry squads are equipped with stand-off weapons, what is the future of close quarters combat?

Unmanned Ground Vehicles: Drones Are Not Just For Flying Anymore

The Remote Controlled Abrams Tank [Hammacher Schlemmer]

Over at Defense One, Patrick Tucker reports that General Dynamics Land Systems has teamed up with Kairos Autonomi to develop kits that “can turn virtually anything with wheels or tracks into a remote-controlled car.” It is part of a business strategy “to meet the U.S. Army’s expanding demand for unmanned ground vehicles.”

Kairos kits costing less than $30,000 each have been installed on disposable vehicles to create moving targets for shooting practice. According to a spokesman, General Dynamics has also adapted them to LAV-25 Light Armored Vehicles and M1126 Strykers.

Tucker quotes Lt. Gen. H.R. McMaster (who else?), director of the U.S. Army’s Capabilities Integration Center, as saying that,

[G]etting remotely piloted and unmanned fighting vehicles out into the field is “something we really want to move forward on. What we want to do is get that kind of capability into soldiers’ hands early so we can refine the tactics, techniques and procedures, and then also consider enemy countermeasures and then build into the design of units that are autonomy enabled, build in the counter to those counters.”

According to General Dynamics Land Systems, the capability to turn any vehicle into a drone would give the U.S. an advantage over Russia, which has signaled its intent to automate versions of its T-14 Armata tank.

Technology, Eggs, and Risk (Oh, My)

Tokyo, Japan — Eggs in a basket — Image by © JIRO/Corbis

In my last post, on the potential for quantum radar to undermine the U.S. technological advantage in stealth technology, I ended by asking this question:

The basic assumption behind the Third Offset Strategy is that the U.S. can innovate and adopt technological capabilities fast enough to maintain or even expand its current military superiority. Does the U.S. really have enough of a scientific and technological development advantage over its rivals to validate this assumption?

My colleague, Chris, has suggested that I expand on the thinking behind this. Here goes:

The lead times needed for developing advanced weapons and the costs involved in fielding them make betting on technological innovation as a strategy seem terribly risky. In his 1980 study of the patterns of weapon technology development, The Evolution of Weapons and Warfare, Trevor Dupuy noted that there is a clear historical pattern of a period of 20-30 years between the invention of a new weapon and its use in combat in a tactically effective way. For example, practical armored fighting vehicles were first developed in 1915 but they were not used fully effectively in battle until the late 1930s.

The examples I had in mind when I wrote my original post were the F-35 Joint Strike Fighter (JSF) and the Littoral Combat Ship (LCS), both of which derive much, if not most, of their combat power from being stealthy. If that capability were to be negated, even partially, by a technological breakthrough or counter by a potential adversary, then 20+ years of development time and hundreds of billions of dollars would have been essentially wasted. If either or both of these weapons systems were rendered ineffective in the middle of a national emergency, neither could be quickly retooled or replaced. The potential repercussions could be devastating.

I reviewed the development history of the F-35 in a previous post. Development began in 2001, and the Air Force declared the first F-35 squadron combat operational (in a limited capacity) in August 2016, though that squadron has since been stood down for repairs. The first fully combat-capable F-35s will not be ready until 2018 at the soonest, and the entire fleet will not be ready until at least 2023. Just getting the aircraft fully operational will have taken 15-22 years, depending on how one chooses to calculate it. It will take several more years after that to fully evaluate the F-35 in operation and to develop tactics, techniques, and procedures to maximize its effectiveness in combat. The lifetime cost of the F-35 has been estimated at $1.5 trillion, which is itself likely an underestimate.

The U.S. Navy anticipated the need for ships capable of operating in shallow coastal waters in the late 1990s. Development of the LCS began in 2003, and the first ships of the two variants were launched in 2006 and 2008, respectively. Two of each design have been built so far. Since then, cost overruns, developmental problems, disappointing performances at sea, and reconsideration of the ship’s role led the Navy to scale back a planned purchase of 53 LCSs to 40 at the end of 2015 to allow money to be spent on other priorities. As of July 2016, only 26 LCSs have been programmed, and the Navy has been instructed to select one of the two designs to complete the class. Initial program procurement costs were $22 billion, which have now risen to $39 billion. The operating cost for each ship is currently estimated at $79 million per year, which the Navy asserts will drop when simultaneous testing and operational use ends. The Navy plans to build LCSs until the 2040s, which includes replacements for the original ten after a service life of 25 years. Even at the annual operating cost of a current U.S. Navy frigate ($59 million), a back-of-the-envelope calculation puts the lifetime cost of the LCS program at around $91 billion, all told; this is also likely an underestimate. This seems like a lot of money to spend on a weapon that the Navy intends to pull out of combat should it sustain any damage.
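For readers who want to check that arithmetic, here is a minimal back-of-the-envelope sketch. The dollar figures come from the paragraph above; the fleet size is an assumption (the program has been variously planned at 53, 40, and 26 hulls, and the rough $91 billion figure above does not say which basis it uses), so treat the output as illustrative rather than authoritative.

```python
# Back-of-the-envelope LCS lifetime cost sketch (illustrative only).
# Dollar figures are taken from the post; the fleet size is an assumption.

procurement_total = 39e9    # current program procurement estimate ($39 billion)
service_life_years = 25     # planned service life per hull
annual_op_frigate = 59e6    # annual operating cost of a current frigate ($59 million)
annual_op_lcs = 79e6        # current per-ship LCS operating estimate ($79 million)
assumed_fleet_size = 40     # assumption: the pre-2015 planned buy of 40 ships

def lifetime_cost(fleet_size, annual_operating_cost):
    """Procurement plus operating costs over each hull's planned service life."""
    return procurement_total + fleet_size * service_life_years * annual_operating_cost

print(f"At frigate-like operating costs: ${lifetime_cost(assumed_fleet_size, annual_op_frigate) / 1e9:.0f} billion")
print(f"At current LCS operating costs:  ${lifetime_cost(assumed_fleet_size, annual_op_lcs) / 1e9:.0f} billion")
```

Depending on the fleet size and operating cost assumed, the total lands roughly between $75 billion and $120 billion, the same neighborhood as the ~$91 billion figure cited above.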

It would not take a technological breakthrough as singular as quantum radar to degrade the effectiveness of U.S. stealth technology, either. The Russians claim that they already possess radars that can track U.S. stealth aircraft. U.S. sources essentially concede this, but point out that tracking a stealth platform does not mean it can be attacked successfully. Obtaining a track of sufficient quality for targeting involves other technological capabilities that are susceptible to U.S. electronic warfare. U.S. stealth aircraft already need to operate in conjunction with existing EW platforms to maintain their cloaked status. Even if quantum radar proves infeasible, the game over stealth is already afoot.

Quantum Radar: Should We Be Putting All Our Eggs In The Technology Basket?

Corporal Walter “Radar” O’Reilly (Gary Burghoff) | M*A*S*H

As reported in Popular Mechanics last week, Chinese state media recently announced that a Chinese defense contractor has developed the world’s first quantum radar system. Derived from the principles of quantum mechanics, quantum radar would be capable of detecting vehicles equipped with so-called “stealth” technology for defeating conventional radio-wave based radar systems.

The Chinese claim should be taken with a large grain of salt. It is not clear that a quantum radar can be made to work outside a laboratory, much less adapted into a practical surveillance system. Lockheed Martin patented a quantum radar design in 2008, but nothing more has been heard about it publicly.

However, the history of military innovation has demonstrated that every technological advance has eventually resulted in a counter, either through competing weapons development or by the adoption of strategies or tactics to minimize the impact of the new capabilities. The United States has invested hundreds of billions of dollars in air and naval stealth capabilities and built its current and future strategies and tactics around its effectiveness. Much of the value of this investment could be wiped out with a single technological breakthrough by its potential adversaries.

The basic assumption behind the Third Offset Strategy is that the U.S. can innovate and adopt technological capabilities fast enough to maintain or even expand its current military superiority. Does the U.S. really have enough of a scientific and technological development advantage over its rivals to validate this assumption?

Betting On The Future: The Third Offset Strategy

Image by Center for Strategic and Budgetary Assessments (CSBA).

In several recent posts, I have alluded to something called the Third Offset Strategy without going into any detail as to what it is. Fortunately for us all, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, wrote an excellent summary and primer on what it is all about in the current edition of Joint Forces Quarterly.

The Third Offset Strategy emerged from Defense Strategic Guidance issued by the President and Secretary of Defense in 2012 and from the results of the 2014 Quadrennial Defense Review. As Walton outlined,

The Defense Strategic Guidance (DSG) articulated 10 missions the [U.S.] joint force must accomplish in the future. These missions include the ability to:

– deter and defeat aggression

– project power despite antiaccess/area-denial (A2/AD) challenges

– operate effectively in cyberspace and space.

The follow-on 2014 Quadrennial Defense Review confirmed the importance of these missions and called for the joint force to “project power and win decisively” in spite of “increasingly sophisticated adversaries who could employ advanced warfighting capabilities.”

In these documents, U.S. policy-makers identified the primary strategic challenge to achieving these goals: “capable adversaries are adopting potent A2/AD strategies that are challenging U.S. ability to ensure operational access.” These adversaries include China, Russia, and Iran.

The Third Offset Strategy was devised to address this primary strategic challenge.

In November 2014, then–Secretary of Defense Chuck Hagel announced a new Defense Innovation Initiative, which included the Third Offset Strategy. The initiative seeks to maintain U.S. military superiority over capable adversaries through the development of novel capabilities and concepts. Secretary Hagel modeled his approach on the First Offset Strategy of the 1950s, in which President Dwight D. Eisenhower countered the Soviet Union’s conventional numerical superiority through the buildup of America’s nuclear deterrent, and on the Second Offset Strategy of the 1970s, in which Secretary of Defense Harold Brown shepherded the development of precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance (ISR) systems to counter the numerical superiority and improving technical capability of Warsaw Pact forces along the Central Front in Europe.

Secretary of Defense Ashton Carter has built on Hagel’s vision of the Third Offset Strategy, and the proposed fiscal year 2017 budget is the first major public manifestation of the strategy: approximately $3.6 billion in research and development funding dedicated to Third Offset Strategy pursuits. As explained by Deputy Secretary of Defense Bob Work, the budget seeks to conduct numerous small bets on advanced capability research and demonstrations, and to work with Congress and the Services to craft new operational concepts so that the next administration can determine “what are the key bets we’re going to make.”

As Walton puts it, “the next Secretary of Defense will have the opportunity to make those big bets.” The keys to making the correct bets will be selecting the most appropriate scenarios to plan around, accurately assessing the performance of the U.S. joint force that will be programmed and budgeted for, and identifying the right priorities for new investment.

It is in this context that Walton recommended reviving campaign-level combat modeling at the Defense Department level, as part of an overall reform of the analytical processes informing force planning decisions.

Walton concludes by identifying the major obstacles to carrying out the Third Offset Strategy, some of which will be institutional and political in nature. However, he quickly passes over what may be the biggest problem with the Third Offset Strategy: that it might be based on the wrong premises.

Lastly, the next Secretary of Defense will face numerous other, important defense challenges that will threaten to engross his or her attention, ranging from leading U.S. forces in Afghanistan, to countering Chinese, Russian, and Islamic State aggression, to reforming Goldwater-Nichols, military compensation, and base structure.

The ongoing conflicts in Afghanistan, Syria, and Iraq show no sign of abating anytime soon, yet they constitute “lesser includeds” in the Third Offset Strategy. Are we sure enough to bet that the A2/AD threat is the most important strategic challenge the U.S. will face in the near future?

Walton’s piece is worth reading and thinking about.

 

Should Defense Department Campaign-Level Combat Modeling Be Reinstated?

Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller

In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s strategic analysis activity (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”

According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.

CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).

CAPE took the criticism to heart.

CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.

However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”

As Davis related, many disagreed with CAPE’s decision at the time because of the service-independent perspective that SSA provided.

Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)

Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.

In the current edition of Joint Forces Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that, in support of the Third Offset Strategy, “the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”

As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.

In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)

So, it appears that DOD can’t quit combat modeling. But that raises a question: if CAPE does resume such activities, will it pick up where it left off in 2011, or will it do things differently? I will explore that in a future post.

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.
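To make the Firefinder example concrete, here is a deliberately simplified, hypothetical adjudication loop in which the probability that an incoming round kills a target scales with the target’s “vulnerable area” entry in the input database. This is not the actual TRADOC-approved model Scales describes, just an illustration of how a single zero in an input file can make a system unkillable no matter how much fire is directed at it.

```python
import random

# Toy hit adjudication (hypothetical; not the model Scales describes).
# Per-round kill probability is proportional to the target's "vulnerable area" input.

def rounds_to_kill(vulnerable_area_m2, reference_area_m2=50.0, max_rounds=10_000):
    """Fire rounds until a kill is adjudicated; return rounds expended, or None if never killed."""
    p_kill = min(vulnerable_area_m2 / reference_area_m2, 1.0)
    for rounds_fired in range(1, max_rounds + 1):
        if random.random() < p_kill:
            return rounds_fired
    return None

random.seed(1)
print(rounds_to_kill(vulnerable_area_m2=10.0))  # a normally coded target is eventually killed
print(rounds_to_kill(vulnerable_area_m2=0.0))   # vulnerable area = 0 -> None: invulnerable
```

With the vulnerable area set to zero, the kill probability is zero on every draw, so the target survives any number of rounds; that is exactly the behavior Scales found baked into the Firefinder results.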

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat, yet very few of them make clear what those are, a phenomenon known in science and technology studies as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition, despite the fact that no one has been able to demonstrate that those equations can replicate historical combat experience. This lack of empirical underpinning for combat theories and concepts, dubbed the “base of sand” problem, was pointed out by Trevor Dupuy, among others, long ago. The Military Conflict Institute (TMCI) was created in 1979 to address the issue, but the problem persists to this day.
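For readers unfamiliar with the reference, the best known of the Lanchester equations, the “square law” for aimed fire, adjudicates attrition with a pair of coupled differential equations in which each side’s loss rate is proportional to the other side’s surviving strength. The sketch below is a generic illustration of that logic, not the code of any particular Defense Department model.

```python
# Minimal Lanchester square-law attrition sketch (generic illustration, not a DOD model).
# The coupled equations are:  dB/dt = -r * R,   dR/dt = -b * B
# where b and r are the per-shooter effectiveness coefficients of Blue and Red.

def lanchester_square(blue, red, blue_effect, red_effect, dt=0.1, max_steps=10_000):
    """Step the equations forward with simple Euler integration until one side reaches zero."""
    for _ in range(max_steps):
        blue_losses = red_effect * red * dt
        red_losses = blue_effect * blue * dt
        blue, red = max(blue - blue_losses, 0.0), max(red - red_losses, 0.0)
        if blue == 0.0 or red == 0.0:
            break
    return blue, red

# A larger, less effective force against a smaller, more effective one.
print(lanchester_square(blue=1000, red=600, blue_effect=0.01, red_effect=0.02))
```

The square law’s signature implication is that numerical strength matters quadratically relative to per-unit effectiveness, which is precisely the kind of built-in assumption the “base of sand” critique argues has never been validated against historical combat data.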

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Saga of the F-35: Too Big To Fail?

Lockheed Upbeat Despite F-35 Losing Dogfight To Red Baron (Image by DuffelBlog)

Dan Grazier and Mandy Smithberger provide a detailed rundown of the current status of the F-35 Joint Strike Fighter (JSF) over at the Center for Defense Information at the Project On Government Oversight (POGO). The Air Force recently declared its version, the F-35A, combat ready, but Grazier and Smithberger make a detailed case that this pronouncement is “wildly premature.”

The Pentagon’s top testing office warns that the F-35 is in no way ready for combat since it is “not effective and not suitable across the required mission areas and against currently fielded threats.”

As it stands now, the F-35 would need to run away from combat and have other planes come to its rescue, since it “will need support to locate and avoid modern threats, acquire targets, and engage formations of enemy fighter aircraft due to outstanding performance deficiencies and limited weapons carriage available (i.e., two bombs and two air-to-air missiles).”

In several instances, the memo rated the F-35A less capable than the aircraft we already have.

The F-35’s prime contractor, Lockheed Martin, is delivering progressively upgraded versions of the aircraft in blocks, but the first fully combat-operational block will not be delivered until 2018. There are currently 175 operational F-35s with limited combat capability, with 80 more scheduled for delivery in 2017 and 100 in 2018. However, the Government Accountability Office estimates that it will cost $1.7 billion to retroactively upgrade these 355 initial F-35s to fully combat-ready status. Operational testing and evaluation of those rebuilt aircraft won’t be completed until 2021, and they will remain non-combat capable until 2023 at the earliest, which means that the original 355 F-35s won’t really be fully operational for at least seven more years, or 22 years after Lockheed was awarded the development and production contract in 2001. And this is only if the JSF program and Lockheed manage to hit their current targets with a program—estimated at $1.5 trillion over its operational life, the most expensive weapon in U.S. history—characterized by delays and cost overruns.
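The aircraft counts and the timeline in that paragraph can be tallied directly from the figures quoted. A trivial sketch follows, using only the numbers cited above; the per-aircraft retrofit figure assumes the $1.7 billion is spread evenly across the early aircraft, which is my assumption, not POGO’s or GAO’s.

```python
# Tally of early-production F-35s and the program timeline, using figures cited above.
delivered_limited = 175   # operational aircraft with limited combat capability
scheduled_2017 = 80
scheduled_2018 = 100
early_aircraft = delivered_limited + scheduled_2017 + scheduled_2018
retrofit_total = 1.7e9    # GAO estimate to bring these aircraft to full combat capability

print(early_aircraft)                                   # 355 aircraft needing retrofit
print(round(retrofit_total / early_aircraft / 1e6, 1))  # ~4.8 ($ million per aircraft, assuming an even split)
print(2023 - 2001)                                      # 22 years from contract award to full capability
```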

With over $400 billion in sunk costs already, the F-35 program may have become “too big to fail,” with all the implications that phrase connotes. Countless electrons have been spun assessing and explaining this state of affairs. It is possible that the problems will be corrected and the F-35 will fulfill the promises made on its behalf. The Air Force continues to cast it as the centerpiece of its warfighting capability 20 years from now.

Moreover, the Department of Defense has doubled down on the technology-driven Revolution in Military Affairs paradigm with its Third Offset Strategy, which is premised on the proposition that advanced weapons and capabilities will afford the U.S. continued military dominance into the 21st century. Time will tell whether the long, painful saga of the F-35 will be a cautionary tale or a bellwether.