
Meanwhile, in Afghanistan…

The latest quarterly report from the Special Inspector General for Afghanistan Reconstruction (SIGAR) has been released. America’s military involvement in Afghanistan passed its 15th anniversary in October.

The data presented in the SIGAR report show some disturbing trends. Through the first eight months of 2016, Afghan national defense and security forces suffered approximately 15,000 casualties, including 5,523 killed. These losses came from a reported force of 169,229 army and air force personnel (excluding civilians) and 148,480 national police, for a total of 317,709. The casualty rate undoubtedly contributed to the net loss of 2,199 personnel from the previous quarter.

Afghan forces suffered 5,500 killed-in-action and more than 14,000 wounded in 2015. They have already incurred that many combat deaths so far in 2016, though the number of wounded is significantly lower than in 2015. The approach of winter will slow combat operations, so the overall number of casualties for the year may not exceed the 2015 total.

The rough wounded-to-killed ratio of 3 to 1 for Afghan forces in 2016 is lower than in 2015, and compares unfavorably to the ratios of 9 to 1 and 13 to 1 for U.S. Army and Marine Corps forces in combat from 2001-2012. This likely reflects a variety of factors, including rudimentary medical care and forces operating in exposed locations. It also suggests that even though the U.S. has launched over 700 air strikes, already more than the 500 carried out in all of 2015, there is still insufficient fire support for Afghan troops in contact.
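As a rough check on the scale of these losses, here is a minimal back-of-the-envelope sketch using only the figures cited above (the annualized figure assumes the casualty pace holds, which the approaching winter makes unlikely):

```python
# Reported ANDSF strength (minus civilians), per the SIGAR quarterly report
army_and_air_force = 169_229
national_police = 148_480
force_total = army_and_air_force + national_police   # 317,709

# Casualties through the first eight months of 2016
casualties = 15_000   # approximate total killed and wounded
killed = 5_523

eight_month_rate = casualties / force_total          # ~4.7% of the force
annualized_rate = eight_month_rate * 12 / 8          # ~7.1% if the pace held

print(f"Total force:           {force_total:,}")
print(f"Eight-month loss rate: {eight_month_rate:.1%}")
print(f"Annualized loss rate:  {annualized_rate:.1%}")
```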

Insurgents are also fighting for control of more of the countryside than in 2015. The Afghan government has lost 2.2% of its territory so far this year. It controls or influences 258 of 407 total districts (63.4%), while insurgents control or influence 33 (8.1%), and 116 are “contested” (28.5%).

The overall level of violence presents a mixed picture. Security incidents between 20 May and 15 August 2016 represent a 4.7% increase over the same period last year, but a 3.6% decrease from the same period in 2014.

The next U.S. president will face some difficult policy choices. There are 9,800 U.S. troops slated to remain in the country through the end of 2016, as part of an international training and counterterrorism force of 13,000. While the Afghan government has resumed secret peace talks with the Taliban insurgents, a political resolution does not appear imminent. There appear to be no appealing strategic options or obvious ways forward for ending involvement in the longest of America’s ongoing wars against violent extremism.

Tank Loss Rates in Combat: Then and Now

As the U.S. Army and the national security community seek a sense of what potential conflicts in the near future might be like, they see the distinct potential for large tank battles. Will technological advances change the character of armored warfare? Perhaps, but it seems more likely that the next big tank battles – if they occur – will resemble those of the past.

One aspect of future battle of great interest to military planners is probably going to be tank loss rates in combat. In a previous post, I looked at the analysis done by Trevor Dupuy on the relationship between tank and personnel losses in the U.S. experience during World War II. Today, I will take a look at his analysis of historical tank loss rates.

In general, Dupuy identified a proportional relationship between personnel casualty rates in combat and losses in tanks, guns, trucks, and other equipment. (His combat attrition verities are discussed here.) Looking at World War II division- and corps-level combat engagement data from 1943-1944 between U.S., British, and German forces in the west, and German and Soviet forces in the east, Dupuy found similar patterns in tank loss rates.

[Attrition, Figure 58]

In combat between two division/corps-sized, armor-heavy forces, Dupuy found that tank loss rates were likely to be five to seven times the personnel casualty rate for the winning side, and seven to ten times for the losing side. Additionally, defending units suffered lower loss rates than attackers: if an attacking force suffered tank losses at seven times its personnel casualty rate, the defending force’s tank losses would run at around five times its personnel rate.

Dupuy also discovered that the ratio of tank to personnel losses appeared to be a function of the proportion of tanks to infantry in a combat force. Units with fewer than six tanks per 1,000 troops could be considered armor-supporting, while those with a density of more than six tanks per 1,000 troops were armor-heavy. Armor-supporting units suffered lower tank casualty rates than armor-heavy units.

[Attrition, Figure 59]

Dupuy looked at tank loss rates in the 1973 Arab-Israeli War and found that they were consistent with World War II experience.
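Taken together, these findings amount to a simple rule of thumb. Here is a minimal sketch of how it might be applied (the multiplier ranges follow the figures described above; the discount for armor-supporting units is an assumed illustrative value, not a figure from Dupuy):

```python
def tank_loss_rate_range(personnel_casualty_rate: float,
                         winner: bool,
                         tanks_per_1000_troops: float) -> tuple[float, float]:
    """Estimate a division/corps-level tank loss rate range from the
    personnel casualty rate, per Dupuy's World War II relationships."""
    # Winners lost tanks at roughly 5-7x their personnel casualty rate,
    # losers at roughly 7-10x.
    low, high = (5.0, 7.0) if winner else (7.0, 10.0)
    # Armor-supporting forces (fewer than 6 tanks per 1,000 troops) suffered
    # lower tank loss rates than armor-heavy ones; a flat 20% reduction
    # stands in for that effect here (illustrative assumption).
    if tanks_per_1000_troops < 6.0:
        low, high = 0.8 * low, 0.8 * high
    return low * personnel_casualty_rate, high * personnel_casualty_rate

# Example: a winning, armor-heavy force taking 1% personnel casualties
print(tank_loss_rate_range(0.01, winner=True, tanks_per_1000_troops=8.0))
# -> approximately (0.05, 0.07), i.e. 5-7% tank losses
```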

What does this tell us about possible tank losses in future combat? That is a very good question. One reasonably certain assumption is that future tank battles will probably not involve forces of World War II division or corps size. The opposing forces will be brigade combat teams, or more likely, battalion-sized elements.

Dupuy did not have as much data on tank combat at this level, and what he did have indicated a great deal more variability in loss rates. Examples of this can be found in the tables below.

[Attrition, Figures 53 and 54]

These data points had a mean loss ratio of 6.96, which is comparable to the division/corps rates, but the standard deviation of 6.10 indicates much greater variability. Personnel casualty rates at this level are also higher and much more variable than those at the division level. Dupuy stated that more research was necessary to establish a higher degree of confidence in, and the relevance of, the apparent battalion-level tank loss ratio. So one potentially fruitful area of research with regard to near-future combat could very well be a renewed focus on historical experience.

NOTES

Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), pp. 41-43, 81-90, 102-103.

Should Defense Department Campaign-Level Combat Modeling Be Reinstated?

Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller

In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) office disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s support for strategic analysis (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”

According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.

CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).

CAPE took the criticism to heart.

CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.

However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”

As Davis related, there were many who disagreed with CAPE’s decision at the time because of the service-independent perspective the SSA process provided.

Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)

Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.

In the current edition of Joint Forces Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that, in support of the Third Offset Strategy, “the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”

As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.

In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)

So, it appears that DOD can’t quit combat modeling. But that raises a question: if CAPE does resume such activities, will it pick up where it left off in 2011, or do things differently? I will explore that in a future post.

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]
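Scales’ Firefinder example is easy to reproduce in miniature. The following toy sketch (the damage logic, parameter names, and values are invented for illustration and have nothing to do with the actual TRADOC model) shows how a vulnerable-area input of zero silently produces an unkillable system:

```python
import random

def kill_probability(vulnerable_area_m2: float, lethal_area_m2: float) -> float:
    """Toy adjudication: chance that a hit on the target's footprint
    lands within its vulnerable area."""
    if lethal_area_m2 <= 0:
        return 0.0
    return min(1.0, vulnerable_area_m2 / lethal_area_m2)

# Illustrative input database. A vulnerable area of 0 means every
# engagement computes a 0% kill chance; the model raises no warning,
# it just quietly renders the system invulnerable.
vulnerable_areas = {"howitzer": 12.0, "firefinder_radar": 0.0}

for system, area in vulnerable_areas.items():
    losses = sum(random.random() < kill_probability(area, lethal_area_m2=50.0)
                 for _ in range(1000))   # 1,000 simulated fire missions
    print(f"{system}: {losses} losses in 1000 engagements")
```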

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what those theories are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition, despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge underpinning these theories and concepts was pointed out by Trevor Dupuy, among others, long ago, and came to be known as the “base of sand” problem. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but the problem persists to this day.
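For reference, the Lanchester equations in question are a pair of coupled differential equations; in the “square law” form, each side’s losses are proportional to the other side’s remaining strength. A minimal sketch of how a model might adjudicate attrition with them (the coefficients are arbitrary illustrative values):

```python
def lanchester_square(x: float, y: float, a: float, b: float,
                      dt: float = 0.01, steps: int = 20_000) -> tuple[float, float]:
    """Euler integration of Lanchester's square law:
    dx/dt = -b*y and dy/dt = -a*x, stopping when a side is annihilated."""
    for _ in range(steps):
        if x <= 0 or y <= 0:
            break
        x, y = x - b * y * dt, y - a * x * dt
    return max(x, 0.0), max(y, 0.0)

# 1,000 vs. 1,500 troops with equal effectiveness: the square law predicts
# the larger side wins with sqrt(1500^2 - 1000^2) ~= 1,118 survivors.
print(lanchester_square(1000, 1500, a=0.01, b=0.01))
```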

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Uncongenial Lessons of Past Conflicts

Williamson Murray, professor emeritus of history at Ohio State University, on the notion that military failures can be traced to an overemphasis on the lessons of the last war:

It is a myth that military organizations tend to do badly in each new war because they have studied too closely the last one; nothing could be farther from the truth. The fact is that military organizations, for the most part, study what makes them feel comfortable about themselves, not the uncongenial lessons of past conflicts. The result is that more often than not, militaries have to relearn in combat—and usually at a heavy cost—lessons that were readily apparent at the end of the last conflict.

[Williamson Murray, “Thinking About Innovation,” Naval War College Review, Spring 2001, 122-123. This passage was cited in a recent essay by LTG H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review, March-April 2015. I recommend reading both.]

Studying The Conduct of War: “We Surely Must Do Better”

"The Ultimate Sand Castle" [Flickr, Jon]
“The Ultimate Sand Castle” [Flickr, Jon]

Chris and I have both discussed previously the apparent waning interest on the part of the Department of Defense in sponsoring empirical research studying the basic phenomena of modern warfare. The U.S. government’s boom-or-bust approach to this is long-standing, extending back at least to the Vietnam War. Recent criticism of the Department of Defense’s Office of Net Assessment (OSD/NA) is unlikely to help. Established in 1973 and led by the legendary Andrew “Yoda” Marshall until 2015, OSD/NA plays an important role in funding basic research on topics of crucial importance to the art of net assessment. Critics of the office appear to be unaware of just how thin the actual base of empirical knowledge on the conduct of war is. Marshall understood that the net result of a net assessment based mostly on guesswork was likely to be useless, or worse, misleadingly wrong.

This lack of attention to the actual conduct of war extends beyond government sponsored research. In 2004, Stephen Biddle, a professor of political science at George Washington University and a well-regarded defense and foreign policy analyst, published Military Power: Explaining Victory and Defeat in Modern Battle. The book focused on a very basic question: what causes victory and defeat in battle? Using a comparative approach that incorporated quantitative and qualitative methods, he effectively argued that success in contemporary combat was due to the mastery of what he called the “modern system.” (I won’t go into detail here, but I heartily recommend the book to anyone interested in the topic.)

Military Power was critically acclaimed and received multiple awards from academic, foreign policy, military, operations research, and strategic studies organizations. For all the accolades, however, Biddle was quite aware just how neglected the study of war has become in U.S. academic and professional communities. He concluded the book with a very straightforward assessment:

[F]or at least a generation, the study of war’s conduct has fallen between the stools of the institutional structure of modern academia and government. Political scientists often treat war itself as outside their subject matter; while its causes are seen as political and hence legitimate subjects of study, its conduct and outcomes are more often excluded. Since the 1970s, historians have turned away from the conduct of operations to focus on war’s effects on social, economic, and political structures. Military officers have deep subject matter knowledge but are rarely trained as theoreticians and have pressing operational demands on their professional attention. Policy analysts and operations researchers focus so tightly on short-deadline decision analysis (should the government buy the F-22 or cancel it? Should the Army have 10 divisions or 8?) that underlying issues of cause and effect are often overlooked—even when the decisions under analysis turn on embedded assumptions about the causes of military outcomes. Operations research has also gradually lost much of its original empirical focus; modeling is now a chiefly deductive undertaking, with little systematic effort to test deductive claims against real world evidence. Over forty years ago, Thomas Schelling and Bernard Brodie argued that without an academic discipline of military science, the study of the conduct of war had languished; the passage of time has done little to overturn their assessment. Yet the subject is simply too important to treat by proxy and assumption on the margins of other questions. In the absence of an institutional home for the study of warfare, it is all the more essential that analysts in existing disciplines recognize its importance and take up the business of investigating capability and its causes directly and rigorously. Few subjects are more important—or less studied by theoretical social scientists. With so much at stake, we surely must do better. [pp. 207-208]

Biddle published Military Power 12 years ago, in 2004. Has anything changed substantially? Have we done better?

U.S. Tank Losses and Crew Casualties in World War II

In his 1990 book Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War, Trevor Dupuy took a look at the relationship between tank losses and crew casualties in the U.S. 1st Army between June 1944 and May 1945 (pp. 80-81). The data sampled included 797 medium (averaging 5 crewmen) and 101 light (averaging 4 crewmen) tanks. For each tank lost, an average of one crewman was killed or wounded. Interestingly, although gunfire accounted for the most tank and crew casualties, infantry anti-tank rockets (such as the Panzerfaust) inflicted 13% of the tank losses but caused 21% of the crew losses.

[Attrition, Figure 50]

Casualties were evenly distributed among the crew positions.

[Attrition, Figure 51]

Whether or not a destroyed tank caught fire made a big difference for its crew. Only 40% of the tanks in the sample burned, yet crew casualties were distributed roughly evenly between the tanks that burned and those that did not. This was because the casualty rate in the tanks that caught fire (1.28 crew casualties per tank) was higher than in those that did not (0.78 casualties per tank).
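The arithmetic behind the “distributed evenly” observation is easy to verify (a quick check using the shares and per-tank rates given above):

```python
# Crew casualties per lost tank, from Dupuy's U.S. 1st Army sample
burned_share, burned_rate = 0.40, 1.28       # 40% of lost tanks burned
unburned_share, unburned_rate = 0.60, 0.78   # 60% did not

from_burned = burned_share * burned_rate         # ~0.51 casualties per tank loss
from_unburned = unburned_share * unburned_rate   # ~0.47 casualties per tank loss
total = from_burned + from_unburned              # ~0.98, i.e. ~1 per tank loss

print(f"Share of casualties from burned tanks:   {from_burned / total:.0%}")    # ~52%
print(f"Share of casualties from unburned tanks: {from_unburned / total:.0%}")  # ~48%
```

Note that the total also recovers the overall average of roughly one crew casualty per tank lost.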

[Attrition, Figure 52]

Dupuy found the relationship between tank losses and casualties to be straightforward and obvious. This relationship would not be so simple when viewed at the battalion level. More on that in a future post [Tank Loss Rates in Combat: Then and Now].

Some back-of-the-envelope calculations

Keying off Shawn’s previous post…if the DOD figures are accurate, this means the following (see the sketch after this list):

  1. In about two years, we have killed 45,000 insurgents from a force of around 25,000.
    1. This is around 100% losses a year.
    2. This means the insurgents had to recruit a completely new force every year for the last two years.
      1. Or maybe we just shot everyone twice.
    3. It is clear that the claimed kills are way too high, the claimed strength is too low, or a little bit of both.
  2. We are getting three kills per sortie.
    1. Now, I have not done an analysis of kills per sortie in other insurgencies (and this would be useful to do), but I am pretty certain that this is unusually high.
  3. We are killing almost 1,000 insurgents (not in uniform) for every civilian we are killing.
    1. Even if I use the Airwars figure of 1,568 civilians killed, that is 29 insurgents for every civilian killed.
    2. Again, I have not done an analysis of insurgents killed per civilian killed in air operations (and this would be useful to do), but these civilian casualty rates seem unusually low.
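Here is that arithmetic in one place (a sketch using only the figures cited above):

```python
# DOD-claimed figures, per the post above, plus the Airwars civilian estimate
claimed_kills = 45_000       # insurgents killed over roughly two years
claimed_strength = 25_000    # estimated insurgent force size
years = 2
airwars_civilians = 1_568

annual_loss_rate = claimed_kills / years / claimed_strength
print(f"Implied annual losses: {annual_loss_rate:.0%} of the claimed force")  # ~90%

# If both figures were right, the insurgents replaced nearly their entire
# force every year for two years running.
print(f"Force replaced {claimed_kills / claimed_strength:.1f}x over in {years} years")

# Insurgents killed per civilian killed, using the Airwars estimate
print(f"Insurgents per civilian (Airwars): {claimed_kills / airwars_civilians:.0f}")  # ~29
```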

It appears that some bad estimates are being made here. There is nothing wrong with making estimates, but something is very wrong if those estimates are significantly off, and some of these appear to be.

This is, of course, a problem we encountered with Iraq and Afghanistan and is discussed to some extent in my book America’s Modern Wars. It was also a problem with the Soviet Army in World War II, and is something I discuss in some depth in my Kursk book.

It would be useful to develop a set of benchmarks from past wars looking at insurgents killed per sortie, insurgents killed per civilian killed in air operations (and other types of operations), insurgents killed compared to force strength, and so forth.

The Military Conflict Institute (TMCI) Will Meet in October

The Military Conflict Institute (the website has not been recently updated) will hold its 58th General Working Meeting from 3-5 October 2016, hosted by the Institute for Defense Analyses in Alexandria, Virginia. It will feature discussions and presentations focused on war termination in likely areas of conflict in the near future, such as Egypt, Turkey, North Korea, Iran, Saudi Arabia, Kurdistan, and Israel. There will also be presentations on related and general military topics.

TMCI was founded in 1979 by Dr. Donald S. Marshall and Trevor Dupuy. They were concerned by the inability of existing Defense Department combat models to produce results that were consistent or rooted in historical experience. The organization is a non-profit, interdisciplinary, informal group that avoids government or institutional affiliation in order to maintain an independent perspective and voice. Its objective is to advance public understanding of organized warfare in all its aspects. Most of the initial members were drawn from the ranks of operations analysts experienced in quantitative historical study and military operations research, but the group has grown to include a diverse collection of scholars, historians, students of war, soldiers, sailors, marines, airmen, and scientists. Member disciplines range from military science to diplomacy and philosophy.

For agenda information, contact Roger Mickelson at TMCI6@aol.com. For joining instructions, contact Rosser Bobbitt at rbobbitt@ida.org. Attendance is subject to approval.