The latest quarterly report from the Special Inspector General for Afghanistan Reconstruction (SIGAR) has been released. America’s military involvement in Afghanistan passed its 15th anniversary in October.
The data presented in the SIGAR report show some disturbing trends. Through the first eight months of 2016, Afghan national defense and security forces suffered approximately 15,000 casualties, including 5,523 killed. This from a reported force of 169,229 army and air force personnel (minus civilians) and 148,480 national police, for a total of 317,709. The casualty rate undoubtedly contributed to the net loss of 2,199 personnel from the previous quarter.
Afghan forces suffered 5,500 killed-in-action and 14,000+ wounded in 2015. They have already incurred that many combat deaths so far in 2016, though the number of wounded is significantly lower than in 2015. The approach of winter will slow combat operations, so the overall number of casualties for the year may not exceed the 2015 total.
The rough wounded-to-killed ratio of 3 to 1 for Afghan forces in 2016 is lower than in 2015, and does not compare favorably to the rates of 9 to 1 and 13 to 1 for U.S. Army and Marine Corps forces in combat from 2001 to 2012. This likely reflects a variety of factors, including rudimentary medical care and forces operating in exposed locations. It also suggests that even though the U.S. has launched over 700 air strikes, already more than the 500 carried out in all of 2015, there is still insufficient fire support for Afghan troops in contact.
Insurgents are also fighting for control of more of the countryside than in 2015. The Afghan government has lost 2.2% of its territory so far this year. It controls or influences 258 of 407 total districts (63.4%), while insurgents control or influence 33 (8.1%), and 116 are “contested” (28.5%).
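(For those checking the arithmetic, the percentages follow directly from SIGAR’s district counts. A quick Python check, with variable names of my own choosing:)

```python
# Checking the SIGAR district-control arithmetic. The counts come from
# the report; the variable names are mine.
districts = 407
status = {"government": 258, "insurgent": 33, "contested": 116}

assert sum(status.values()) == districts
for side, count in status.items():
    print(f"{side}: {count}/{districts} = {count / districts:.1%}")
# government: 258/407 = 63.4%
# insurgent:  33/407  = 8.1%
# contested:  116/407 = 28.5%
```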
The overall level of violence presents a mixed picture. Security incidents between 20 May and 15 August 2016 represent a 4.7% increase over the same period last year, but a 3.6% decrease from the same period in 2014.
The next U.S. president will face some difficult policy choices going forward. There are 9,800 U.S. troops slated to remain in the country through the end of 2016, as part of an international training and counterterrorism force of 13,000. While the Afghan government has resumed secret peace talks with the Taliban insurgents, a political resolution does not appear imminent. There appear to be no appealing strategic options or obvious ways forward for ending involvement in the longest of America’s ongoing wars against violent extremism.
One aspect of future battle likely to be of great interest to military planners is the rate of tank losses in combat. In a previous post, I looked at the analysis done by Trevor Dupuy on the relationship between tank and personnel losses in the U.S. experience during World War II. Today, I will take a look at his analysis of historical tank loss rates.
In general, Dupuy identified that a proportional relationship exists between personnel casualty rates in combat and losses in tanks, guns, trucks, and other equipment. (His combat attrition verities are discussed here.) Looking at World War II division and corps-level combat engagement data in 1943-1944 between U.S., British and German forces in the west, and German and Soviet forces in the east, Dupuy found similar patterns in tank loss rates.
In combat between two division/corps-sized, armor-heavy forces, Dupuy found that tank loss rates were likely to be five to seven times the personnel casualty rate for the winning side, and seven to ten times for the losing side. Additionally, defending units suffered lower loss rates than attackers; if an attacking force suffered tank losses at seven times its personnel casualty rate, the defending force’s tank losses would be around five times its rate.
Dupuy also discovered that the ratio of tank to personnel losses appeared to be a function of the proportion of tanks to infantry in a combat force. Units with fewer than six tanks per 1,000 troops could be considered armor-supporting, while those with a density of more than six tanks per 1,000 troops were armor-heavy. Armor-supporting units suffered lower tank casualty rates than armor-heavy units.
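These rules of thumb are simple enough to state in code. The following is a minimal sketch, not Dupuy’s own formulation: the multiplier bands and the six-tanks-per-1,000-troops threshold come from the preceding paragraphs, while the function names and the example figures are illustrative assumptions of mine.

```python
# A minimal sketch of Dupuy's WWII rules of thumb relating tank losses to
# personnel casualty rates. The multiplier bands (5-7x for the winner,
# 7-10x for the loser) and the 6-tanks-per-1,000-troops threshold come
# from the text above; everything else is an illustrative assumption.

def armor_classification(tanks: int, troops: int) -> str:
    """Classify a force by armor density (tanks per 1,000 troops)."""
    density = tanks / (troops / 1000)
    return "armor-heavy" if density > 6 else "armor-supporting"

def tank_loss_rate_band(personnel_casualty_rate: float, winning: bool):
    """Estimated (low, high) tank loss rate for a division/corps-sized,
    armor-heavy force: 5-7x the personnel casualty rate for the winner,
    7-10x for the loser. Defenders tended toward the low end of their
    band and attackers toward the high end."""
    low, high = (5.0, 7.0) if winning else (7.0, 10.0)
    return (low * personnel_casualty_rate, high * personnel_casualty_rate)

# Hypothetical example: a corps with 300 tanks and 40,000 troops
# (7.5 tanks per 1,000) is armor-heavy; if it loses the engagement while
# taking 1% personnel casualties per day, expect roughly 7-10% tank
# losses per day.
print(armor_classification(300, 40_000))        # armor-heavy
print(tank_loss_rate_band(1.0, winning=False))  # (7.0, 10.0)
```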
Dupuy looked at tank loss rates in the 1973 Arab-Israeli War and found that they were consistent with World War II experience.
What does this tell us about possible tank losses in future combat? That is a very good question. One reasonably safe assumption is that future tank battles will probably not involve forces of World War II division or corps size. The opposing forces will be brigade combat teams, or more likely, battalion-sized elements.
Dupuy did not have as much data on tank combat at this level, and what he did have indicated a great deal more variability in loss rates. Examples of this can be found in the tables below.
The mean of these data points, 6.96, is comparable to the division/corps loss ratios, but the standard deviation of 6.10 indicates a great deal more variability. Personnel casualty rates at this level are also higher and much more variable than those at the division level. Dupuy stated that more research was necessary to establish a higher degree of confidence in, and relevance of, the apparent battalion-level tank loss ratio. So one potentially fruitful area of research with regard to near-future combat could very well be a renewed focus on this historical experience.
Raytheon’s new Long-Range Precision Fires missile is deployed from a mobile launcher in this artist’s rendering. The new missile will allow the Army to fire two munitions from a single weapons pod, making it cost-effective and doubling the existing capacity. (Raytheon)
While the U.S. Army has made major advances by incorporating precision into artillery, the ability and opportunity to employ precision are premised on a world of low-intensity conflict. In high-intensity conflict defined by combined-arms maneuver, the employment of artillery based on a precise point on the ground becomes a much more difficult proposition, especially when the enemy commands large formations of moving, armored vehicles, as Russia does. The U.S. joint force has recognized this dilemma and compensates for it by employing superior air forces and deep-strike fires. But Russia has undertaken a comprehensive upgrade of not just its military technology but its doctrine. We should not be surprised that Russia’s goal in this endeavor is to offset U.S. advantages in air superiority and double down on its traditional advantages in artillery and rocket mass, range, and destructive power.
Jacobson and Scales provide a list of relatively quick fixes they assert would restore U.S. superiority in long-range fires: change policy on the use of cluster munitions; upgrade the U.S. self-propelled howitzer inventory from short-barreled 39-caliber guns to long-barreled 52-caliber ones, incorporating improved propellants and rocket assistance to double their existing range; reevaluate restrictions on the forthcoming Long Range Precision Fires rocket system in light of Russian attitudes toward the Intermediate-Range Nuclear Forces treaty; and rebuild divisional and field artillery units atrophied by a decade of counterinsurgency warfare.
Their assessment echoes similar comments made earlier this year by Lieutenant General H. R. McMaster, director of the U.S. Army’s Capabilities Integration Center. Another option for countering enemy artillery capabilities, McMaster suggested, was the employment of “cross-domain fires.” As he explained, “When an Army fires unit arrives somewhere, it should be able to do surface-to-air, surface-to-surface, and shore-to-ship capabilities.”
The notion of land-based fire elements engaging more than just other land or counter-air targets has given rise to a concept being called “multi-domain battle.” Its proponents, Dr. Albert Palazzo of the Australian Army’s War Research Centre and Lieutenant Colonel David P. McLain III, Chief, Integration and Operations Branch in the Joint and Army Concepts Division of the Army Capabilities Integration Center, argue (also at War on the Rocks) that
While Western forces have embraced jointness, traditional boundaries between land, sea, and air have still defined which service and which capability is tasked with a given mission. Multi-domain battle breaks down the traditional environmental boundaries between domains that have previously limited who does what where. The theater of operations, in this view, is a unitary whole. The most useful capability needs to get the mission no matter what domain it technically comes from. Newly emerging technologies will enable the land force to operate in ways that, in the past, have been limited by the boundaries of its domain. These technologies will give the land force the ability to dominate not just the land but also project power into and across the other domains.
Palazzo and McLain contend that future land warfare forces
…must be designed, equipped, and trained to gain and maintain advantage across all domains and to understand and respond to the requirements of the future operating environment… Multi-domain battle will create options and opportunities for the joint force, while imposing multiple dilemmas on the adversary. Through land-to-sea, land-to-air, land-to-land, land-to-space, and land-to-cyberspace fires and effects, land forces can deter, deny, and defeat the adversary. This will allow the joint commander to seize, retain, and exploit the initiative.
As an example of their concept, Palazzo and McLain cite a combined, joint operation from the Pacific Theater in World War II:
Just after dawn on September 4, 1943, Australian soldiers of the 9th Division came ashore near Lae, Papua in the Australian Army’s first major amphibious operation since Gallipoli. Supporting them were U.S. naval forces from VII Amphibious Force. The next day, the 503rd U.S. Parachute Regiment seized the airfield at Nadzab to the West of Lae, which allowed the follow-on landing of the 7th Australian Division. The Japanese defenders offered some resistance on the land, token resistance in the air, and no resistance at sea. Terrain was the main obstacle to Lae’s capture.
From the beginning, the allied plan for Lae was a joint one. The allies were able to get their forces across the approaches to the enemy’s position, establish secure points of entry, build up strength, and defeat the enemy because they dominated the three domains of war relevant at the time — land, sea, and air.
The concept of multi-domain warfare seems like a logical conceptualization for integrating land-based weapons of increased range and effect into the sorts of near-term future conflicts envisioned by U.S. policy-makers and defense analysts. It comports fairly seamlessly with the precepts of the Third Offset Strategy.
However, as has been observed with the Third Offset Strategy, this raises questions about the role of long-range fires in conflicts that do not involve near-peer adversaries, such as counterinsurgencies. Is an emphasis on technological determinism reducing the capabilities of land combat units to just what they shoot? Is the ability to take and hold ground an anachronism in anti-access/area-denial environments? Do long-range fires obviate the relationship between fire and maneuver in modern combat tactics? If even infantry squads are equipped with stand-off weapons, what is the future of close quarters combat?
The Remote Controlled Abrams Tank [Hammacher Schlemmer]
Over at Defense One, Patrick Tucker reports that General Dynamics Land Systems has teamed up with Kairos Autonomi to develop kits that “can turn virtually anything with wheels or tracks into a remote-controlled car.” It is part of a business strategy “to meet the U.S. Army’s expanding demand for unmanned ground vehicles.”
Kairos kits costing less than $30,000 each have been installed on disposable vehicles to create moving targets for shooting practice. According to a spokesman, General Dynamics has also adapted them to LAV-25 Light Armored Vehicles and M1126 Strykers.
Tucker quotes Lt. Gen. H.R. McMaster (who else?), director of the U.S. Army’s Capabilities Integration Center, as saying that,
[G]etting remotely piloted and unmanned fighting vehicles out into the field is “something we really want to move forward on. What we want to do is get that kind of capability into soldiers’ hands early so we can refine the tactics, techniques and procedures, and then also consider enemy countermeasures and then build into the design of units that are autonomy enabled, build in the counter to those counters.”
According to General Dynamics Land Systems, the capability to turn any vehicle into a drone would give the U.S. an advantage over Russia, which has signaled its intent to automate versions of its T-14 Armata tank.
In my last post, on the potential for quantum radar to undermine the U.S. technological advantage in stealth, I ended by asking this question:
The basic assumption behind the Third Offset Strategy is that the U.S. can innovate and adopt technological capabilities fast enough to maintain or even expand its current military superiority. Does the U.S. really have enough of a scientific and technological development advantage over its rivals to validate this assumption?
My colleague, Chris, has suggested that I expand on the thinking behind this. Here goes:
The lead times needed for developing advanced weapons and the costs involved in fielding them make betting on technological innovation as a strategy seem terribly risky. In his 1980 study of the patterns of weapon technology development, The Evolution of Weapons and Warfare, Trevor Dupuy noted that there is a clear historical pattern of a period of 20-30 years between the invention of a new weapon and its use in combat in a tactically effective way. For example, practical armored fighting vehicles were first developed in 1915 but they were not used fully effectively in battle until the late 1930s.
The examples I had in mind when I wrote my original post were the F-35 Joint Strike Fighter (JSF) and the Littoral Combat Ship (LCS), both of which derive much, if not most, of their combat power from being stealthy. If that capability were to be negated, even partially, by a technological breakthrough or counter by a potential adversary, then 20+ years of development time and hundreds of billions of dollars would have been essentially wasted. If either or both of these weapons systems were rendered ineffective in the middle of a national emergency, neither could be quickly retooled or replaced. The potential repercussions could be devastating.
I reviewed the development history of the F-35 in a previous post. Development began in 2001, and the Air Force declared its first F-35 squadron combat operational (in a limited capacity) in August 2016; that squadron has since been stood down for repairs. The first fully combat-capable F-35s will not be ready until 2018 at the soonest, and the entire fleet will not be ready until at least 2023. Just getting the aircraft fully operational will have taken 15-22 years, depending on how one chooses to calculate it. It will take several more years after that to fully evaluate the F-35 in operation and to develop the tactics, techniques, and procedures needed to maximize its effectiveness in combat. The lifetime cost of the F-35 has been estimated at $1.5 trillion, which is likely yet another underestimate.
The U.S. Navy anticipated the need for ships capable of operating in shallow coastal waters in the late 1990s. Development of the LCS began in 2003, and the first ships of the two variants were launched in 2006 and 2008, respectively. Two of each design have been built so far. Since then, cost overruns, developmental problems, disappointing performances at sea, and reconsideration of the ship’s role led the Navy to scale back a planned purchase of 53 LCSs to 40 at the end of 2015, to allow money to be spent on other priorities. As of July 2016, only 26 LCSs have been programmed and the Navy has been instructed to select one of the two designs to complete the class. Initial program procurement costs were $22 billion, which have now risen to $39 billion. The annual operating cost for each ship is currently estimated at $79 million, which the Navy asserts will drop when simultaneous testing and operational use ends. The Navy plans to build LCSs until the 2040s, which includes replacements for the original ten after a service life of 25 years. Even at the annual operating cost of a current U.S. Navy frigate ($59 million), a back-of-the-envelope calculation puts the lifetime cost of the LCS at around $91 billion, all told; this is also likely an underestimate. This seems like a lot of money to spend on a weapon that the Navy intends to pull out of combat should it sustain any damage.
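Since the exact assumptions behind that figure are not spelled out, here is one plausible reconstruction of the back-of-the-envelope arithmetic, using only the numbers cited above; the fleet-years term is my own guess.

```python
# A rough reconstruction of the back-of-the-envelope LCS lifetime cost.
# The procurement total ($39B), fleet size (40 after the 2015 reduction),
# 25-year service life, and frigate-level annual operating cost ($59M)
# all come from the text; how the author combined them to reach ~$91B is
# not stated, so the fleet-years assumption below is mine.

procurement = 39e9        # revised program procurement cost
ships = 40                # planned fleet after the cut from 53
service_life_years = 25   # per ship
annual_op_cost = 59e6     # optimistic, frigate-level operating cost

operating = ships * service_life_years * annual_op_cost   # $59B
lifetime = procurement + operating                        # ~$98B

print(f"Operating: ${operating / 1e9:.0f}B, lifetime: ${lifetime / 1e9:.0f}B")
```

With 40 ships serving 25 years each at the frigate rate, the total comes to roughly $98 billion, the same ballpark as the ~$91 billion cited; somewhat shorter average fleet-years would close the gap.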
It would not take a technological breakthrough as singular as quantum radar to degrade the effectiveness of U.S. stealth technology, either. The Russians claim that they already possess radars that can track U.S. stealth aircraft. U.S. sources essentially concede this, but point out that tracking a stealth platform does not mean that it can be attacked successfully. Obtaining a track sufficient to target involves other technological capabilities that are susceptible to U.S. electronic warfare capabilities. U.S. stealth aircraft already need to operate in conjunction with existing EW platforms to maintain their cloaked status. Even if quantum radar proves infeasible, the game over stealth is already afoot.
Corporal Walter “Radar” O’Reilly (Gary Burghoff) | M*A*S*H
As reported in Popular Mechanics last week, Chinese state media recently announced that a Chinese defense contractor has developed the world’s first quantum radar system. Derived from the principles of quantum mechanics, quantum radar would be capable of detecting vehicles equipped with so-called “stealth” technology for defeating conventional radio-wave based radar systems.
The Chinese claim should be taken with a large grain of salt. It is not clear that a functional quantum radar can be made to work outside a laboratory, much less adapted into a functional surveillance system. Lockheed Martin patented a quantum radar design in 2008, but nothing more has been heard about it publicly.
However, the history of military innovation has demonstrated that every technological advance has eventually resulted in a counter, either through competing weapons development or by the adoption of strategies or tactics to minimize the impact of the new capabilities. The United States has invested hundreds of billions of dollars in air and naval stealth capabilities and built its current and future strategies and tactics around its effectiveness. Much of the value of this investment could be wiped out with a single technological breakthrough by its potential adversaries.
The basic assumption behind the Third Offset Strategy is that the U.S. can innovate and adopt technological capabilities fast enough to maintain or even expand its current military superiority. Does the U.S. really have enough of a scientific and technological development advantage over its rivals to validate this assumption?
Image by Center for Strategic and Budgetary Assessments (CSBA).
In several recent posts, I have alluded to something called the Third Offset Strategy without going into any detail as to what it is. Fortunately for us all, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, wrote an excellent summary and primer on what it is all about in the current edition of Joint Forces Quarterly.
The Defense Strategic Guidance (DSG) articulated 10 missions the [U.S.] joint force must accomplish in the future. These missions include the ability to:
– deter and defeat aggression
– project power despite antiaccess/area-denial (A2/AD) challenges
– operate effectively in cyberspace and space.
The follow-on 2014 Quadrennial Defense Review confirmed the importance of these missions and called for the joint force to “project power and win decisively” in spite of “increasingly sophisticated adversaries who could employ advanced warfighting capabilities.”
In these documents, U.S. policy-makers identified that the primary strategic challenge to securing the goals is that “capable adversaries are adopting potent A2/AD strategies that are challenging U.S. ability to ensure operational access.” These adversaries include China, Russia, and Iran.
The Third Offset Strategy was devised to address this primary strategic challenge.
In November 2014, then–Secretary of Defense Chuck Hagel announced a new Defense Innovation Initiative, which included the Third Offset Strategy. The initiative seeks to maintain U.S. military superiority over capable adversaries through the development of novel capabilities and concepts. Secretary Hagel modeled his approach on the First Offset Strategy of the 1950s, in which President Dwight D. Eisenhower countered the Soviet Union’s conventional numerical superiority through the buildup of America’s nuclear deterrent, and on the Second Offset Strategy of the 1970s, in which Secretary of Defense Harold Brown shepherded the development of precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance (ISR) systems to counter the numerical superiority and improving technical capability of Warsaw Pact forces along the Central Front in Europe.
Secretary of Defense Ashton Carter has built on Hagel’s vision of the Third Offset Strategy, and the proposed fiscal year 2017 budget is the first major public manifestation of the strategy: approximately $3.6 billion in research and development funding dedicated to Third Offset Strategy pursuits. As explained by Deputy Secretary of Defense Bob Work, the budget seeks to conduct numerous small bets on advanced capability research and demonstrations, and to work with Congress and the Services to craft new operational concepts so that the next administration can determine “what are the key bets we’re going to make.”
As Walton puts it, “the next Secretary of Defense will have the opportunity to make those big bets.” The keys to making the correct bets will be selecting the most appropriate scenarios to plan around, accurately assessing the performance of the U.S. joint force that will be programmed and budgeted for, and identifying the right priorities for new investment.
It is in this context that Walton recommended reviving campaign-level combat modeling at the Defense Department level, as part of an overall reform of the analytical processes informing force planning decisions.
Walton concludes by identifying the major obstacles in carrying out the Third Offset Strategy, some of which will be institutional and political in nature. However, he quickly passes over what might perhaps be the biggest problem with the Third Offset strategy, which is that it might be based on the wrong premises.
Lastly, the next Secretary of Defense will face numerous other, important defense challenges that will threaten to engross his or her attention, ranging from leading U.S. forces in Afghanistan, to countering Chinese, Russian, and Islamic State aggression, to reforming Goldwater-Nichols, military compensation, and base structure.
The ongoing conflicts in Afghanistan, Syria, and Iraq show no sign of abating anytime soon, yet they constitute “lesser includeds” in the Third Offset Strategy. Are we sure enough to bet that the A2/AD threat is the most important strategic challenge the U.S. will face in the near future?
Walton’s piece is worth reading and thinking about.
Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller
In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s support for strategic analysis (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”
According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.
CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).
CAPE took the criticism to heart.
CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.
However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”
As Davis related, there were many who disagreed with CAPE’s decision at the time because of the service-independent perspective it provided.
Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)
Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.
In the current edition of Joint Forces Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that in support of “the Third Offset Strategy, the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”
As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.
In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)
So, it appears that DOD can’t quit combat modeling. But that raises the question, if CAPE does resume such activities, will it pick up where it left off in 2011 or do it differently? I will explore that in a future post.
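As a point of reference for what even the simplest force-on-force attrition logic looks like, here is a toy Lanchester square-law exchange in Python. To be clear, this is a deliberately primitive illustration with hypothetical numbers, not a representation of CAPE’s models, which elaborate this kind of logic at vastly greater scale and fidelity.

```python
# A deliberately primitive illustration, not CAPE's methodology: a
# Lanchester "square law" exchange, the simplest ancestor of the
# force-on-force attrition logic that campaign-level simulations
# elaborate at far greater scale. All numbers are hypothetical.

def lanchester_square(blue, red, blue_kill_rate, red_kill_rate, dt=0.01):
    """Integrate dBlue/dt = -red_kill_rate * red and
    dRed/dt = -blue_kill_rate * blue until one side is annihilated."""
    while blue > 0 and red > 0:
        blue, red = (blue - red_kill_rate * red * dt,
                     red - blue_kill_rate * blue * dt)
    return max(blue, 0.0), max(red, 0.0)

# Under the square law, Red's 2:1 numerical edge beats Blue's 2x
# per-unit effectiveness edge (0.01 * 4000^2 > 0.02 * 2000^2):
# Red wins with about 2,800 survivors.
blue_left, red_left = lanchester_square(blue=2000, red=4000,
                                        blue_kill_rate=0.02,
                                        red_kill_rate=0.01)
print(f"Blue survivors: {blue_left:.0f}, Red survivors: {red_left:.0f}")
```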