
Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what these are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge backing these combat theories and concepts was identified as the “base of sand” problem and was originally pointed out by Trevor Dupuy, among others, a long time ago. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but it persists to this day.
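For readers unfamiliar with them, the Lanchester equations mentioned above can be sketched in a few lines. This is a purely illustrative toy, not any fielded model: the force sizes and effectiveness coefficients are notional, and real combat models layer far more (and often opaque) logic on top of equations like these.

```python
# A minimal sketch of Lanchester's "square law" attrition equations.
# All coefficients and force sizes here are notional, for illustration only.

def lanchester_square(blue, red, blue_eff, red_eff, dt=0.01):
    """Integrate dB/dt = -red_eff * R and dR/dt = -blue_eff * B
    with forward Euler steps until one side is annihilated."""
    while blue > 0 and red > 0:
        blue, red = blue - red_eff * red * dt, red - blue_eff * blue * dt
    return max(blue, 0.0), max(red, 0.0)

# With equal per-shooter effectiveness, a force twice the size does not
# merely win -- it finishes with roughly 87% of its strength intact,
# because the advantage scales with the square of force size.
blue_left, red_left = lanchester_square(2000.0, 1000.0, 0.01, 0.01)
```

That the historical record does not reliably reproduce this tidy behavior is precisely the "base of sand" concern.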

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Saga of the F-35: Too Big To Fail?

Lockheed Upbeat Despite F-35 Losing Dogfight To Red Baron (Image by DuffelBlog)

Dan Grazier and Mandy Smithberger provide a detailed rundown of the current status of the F-35 Joint Strike Fighter (JSF) over at the Center for Defense Information at the Project On Government Oversight (POGO). The Air Force recently declared its version, the F-35A, combat ready, but Grazier and Smithberger make a detailed case that this pronouncement is “wildly premature.”

The Pentagon’s top testing office warns that the F-35 is in no way ready for combat since it is “not effective and not suitable across the required mission areas and against currently fielded threats.”

As it stands now, the F-35 would need to run away from combat and have other planes come to its rescue, since it “will need support to locate and avoid modern threats, acquire targets, and engage formations of enemy fighter aircraft due to outstanding performance deficiencies and limited weapons carriage available (i.e., two bombs and two air-to-air missiles).”

In several instances, the memo rated the F-35A less capable than the aircraft we already have.

The F-35’s prime contractor, Lockheed Martin, is delivering progressively upgraded versions of the aircraft in blocks, but the first fully combat-capable block will not be delivered until 2018. There are currently 175 operational F-35s with limited combat capability, with 80 more scheduled for delivery in 2017 and 100 in 2018. However, the Government Accountability Office estimates that it will cost $1.7 billion to retroactively upgrade these 355 initial F-35s to full combat-ready status. Operational testing and evaluation of those rebuilt aircraft won’t be completed until 2021 and they will remain non-combat capable until 2023 at the earliest, which means that the original 355 F-35s won’t really be fully operational for at least seven more years, or 22 years after Lockheed was awarded the development and production contract in 2001. And this is only if the JSF Program and Lockheed manage to hit their current targets with a program—estimated at $1.5 trillion over its operational life, the most expensive weapon in U.S. history—characterized by delays and cost overruns.
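The fleet and timeline arithmetic in the paragraph above works out as follows. All figures are as cited there; the per-aircraft retrofit cost is just a division for illustration, not an official estimate.

```python
# Fleet and timeline arithmetic, figures as cited in the paragraph above.
delivered_to_date = 175      # operational F-35s with limited combat capability
deliveries_2017 = 80
deliveries_2018 = 100
initial_fleet = delivered_to_date + deliveries_2017 + deliveries_2018  # 355

gao_retrofit_estimate = 1.7e9                                 # GAO upgrade estimate
retrofit_per_aircraft = gao_retrofit_estimate / initial_fleet  # roughly $4.8M each

contract_award = 2001
fully_operational = 2023                            # earliest, per the timeline above
years_elapsed = fully_operational - contract_award  # 22 years
```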

With over $400 billion in sunk costs already, the F-35 program may have become “too big to fail,” with all the implications that phrase connotes. Countless electrons have been spun assessing and explaining this state of affairs. It is possible that the problems will be corrected and the F-35 will fulfill the promises made on its behalf. The Air Force continues to cast it as the centerpiece of its warfighting capability 20 years from now.

Moreover, the Department of Defense has doubled-down on the technology-driven Revolution in Military Affairs paradigm with its Third Offset Strategy, which is premised on the proposition that advanced weapons and capabilities will afford the U.S. continued military dominance into the 21st century. Time will tell if the long, painful saga of the F-35 will be a cautionary tale or a bellwether.

The Uncongenial Lessons of Past Conflicts

Williamson Murray, professor emeritus of history at Ohio State University, on the notion that military failures can be traced to an overemphasis on the lessons of the last war:

It is a myth that military organizations tend to do badly in each new war because they have studied too closely the last one; nothing could be farther from the truth. The fact is that military organizations, for the most part, study what makes them feel comfortable about themselves, not the uncongenial lessons of past conflicts. The result is that more often than not, militaries have to relearn in combat—and usually at a heavy cost—lessons that were readily apparent at the end of the last conflict.

[Williamson Murray, “Thinking About Innovation,” Naval War College Review, Spring 2001, 122-123. This passage was cited in a recent essay by LTG H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review, March-April 2015. I recommend reading both.]

Russia’s Strategy in Ukraine

“Russian Build-Up In and Around Ukraine: August 12, 2016” [Institute for the Study of War]

Over at Foreign Policy, Michael Kofman, a research scientist at CNA Corp. and fellow at the Wilson Center’s Kennan Institute, has analyzed recent Russian troop deployments along Ukraine’s periphery and what they imply about the strategic goals of the Russian government in the mid-term. He concludes that the Russians are not massing for a possible invasion in the short term. Instead, the shifting of forces suggests sustainable, long-term deployments at strategically important locations along the border. The mid-term objective is to secure the current status quo.

The Russian General Staff is not only repositioning these units back where they were before 2009, it’s also rebuilding a capable combat grouping on Crimea — albeit one that’s largely defensive in nature… It also secures the Russian vision for how this conflict ends: In a hypothetical future where the Minsk agreement is actually implemented, Russian forces may withdraw from the separatist enclaves in the Donbass. If the deal fails to hold or Kiev reneges on the terms, Russian divisions ringing the country from its north to very southeast (not including Crimea) would be poised to counter any Ukrainian moves by striking from several directions.

Kofman also sees this strategy as seeking to maintain Russia’s political dominance over Ukraine in the longer term.

The string of divisions, airbases, and brigades will be able to effect conventional deterrence or compellence for years to come… Russia will retain escalation dominance over Ukraine for the foreseeable future. By the end of 2017, its forces will be better positioned to conduct an incursion or threaten regime change in Kiev than they ever were in 2014.

Kofman recommends that the U.S. and its allies carefully think through the implications of this strategy. He believes it will take Ukraine five to 10 years to rebuild an effective military, but even if successful, the future correlation of forces and the aggressive positioning of Russian forces could make the situation more unstable rather than less so.

U.S. policymakers should think about the medium to long term — a timeline that is admittedly not our strong suit. If this conflict is not placed on stable footing by the time both countries feel themselves capable of engaging in a larger fight, it may well result in a conventional war that would dwarf the small set-piece battles we’ve seen so far. Beyond imposing a ceasefire on the current fighting, the West should think about what a rematch might look like several years from now.

Studying The Conduct of War: “We Surely Must Do Better”

“The Ultimate Sand Castle” [Flickr, Jon]

Chris and I both have discussed previously the apparent waning interest on the part of the Department of Defense to sponsor empirical research studying the basic phenomena of modern warfare. The U.S. government’s boom-or-bust approach to this is long standing, extending back at least to the Vietnam War. Recent criticism of the Department of Defense’s Office of Net Assessment (OSD/NA) is unlikely to help. Established in 1973 and led by the legendary Andrew “Yoda” Marshall until 2015, OSD/NA plays an important role in funding basic research on topics of crucial importance to the art of net assessment. Critics of the office appear to be unaware of just how thin the actual base of empirical knowledge is on the conduct of war. Marshall understood that the net result of a net assessment based mostly on guesswork was likely to be useless, or worse, misleadingly wrong.

This lack of attention to the actual conduct of war extends beyond government sponsored research. In 2004, Stephen Biddle, a professor of political science at George Washington University and a well-regarded defense and foreign policy analyst, published Military Power: Explaining Victory and Defeat in Modern Battle. The book focused on a very basic question: what causes victory and defeat in battle? Using a comparative approach that incorporated quantitative and qualitative methods, he effectively argued that success in contemporary combat was due to the mastery of what he called the “modern system.” (I won’t go into detail here, but I heartily recommend the book to anyone interested in the topic.)

Military Power was critically acclaimed and received multiple awards from academic, foreign policy, military, operations research, and strategic studies organizations. For all the accolades, however, Biddle was quite aware just how neglected the study of war has become in U.S. academic and professional communities. He concluded the book with a very straightforward assessment:

[F]or at least a generation, the study of war’s conduct has fallen between the stools of the institutional structure of modern academia and government. Political scientists often treat war itself as outside their subject matter; while its causes are seen as political and hence legitimate subjects of study, its conduct and outcomes are more often excluded. Since the 1970s, historians have turned away from the conduct of operations to focus on war’s effects on social, economic, and political structures. Military officers have deep subject matter knowledge but are rarely trained as theoreticians and have pressing operational demands on their professional attention. Policy analysts and operations researchers focus so tightly on short-deadline decision analysis (should the government buy the F-22 or cancel it? Should the Army have 10 divisions or 8?) that underlying issues of cause and effect are often overlooked—even when the decisions under analysis turn on embedded assumptions about the causes of military outcomes. Operations research has also gradually lost much of its original empirical focus; modeling is now a chiefly deductive undertaking, with little systematic effort to test deductive claims against real world evidence. Over forty years ago, Thomas Schelling and Bernard Brodie argued that without an academic discipline of military science, the study of the conduct of war had languished; the passage of time has done little to overturn their assessment. Yet the subject is simply too important to treat by proxy and assumption on the margins of other questions. In the absence of an institutional home for the study of warfare, it is all the more essential that analysts in existing disciplines recognize its importance and take up the business of investigating capability and its causes directly and rigorously. Few subjects are more important—or less studied by theoretical social scientists. With so much at stake, we surely must do better. [pp. 207-208]

Biddle published Military Power 12 years ago, in 2004. Has anything changed substantially? Have we done better?

Chinese Carriers II

The Type 001A Class carrier:

China’s First Homebuilt Aircraft Carrier

  1. Won’t be operational until 2020 “at the earliest”
  2. Has a ski ramp in the bow (like the Liaoning)
  3. Displacement is 60,000 to 70,000 tons
  4. Estimated to carry around 48 aircraft
    1. 36 J-15 multirole fighters
    2. 12 Z-9 or Z-18 helicopters

Not sure I believe the article in the previous post about China having four more of these ready-for-action by 2025.

The video in the article of the Liaoning landing and launching J-15s is worth watching.

Chinese Carriers


There seems to be some buzz out there about Chinese aircraft carriers:

http://www.huffingtonpost.com/asiatoday/china-likely-to-become-ai_b_11164324.html

http://foreignpolicy.com/2016/01/21/will-china-become-an-aircraft-carrier-superpower/

We usually don’t talk about seapower on this blog but doing a simple count of carriers in the world is useful:

  • Total Carriers (100,000+ tons): 10 (all U.S.)
  • Total Carriers (42,000 – 59,100 tons): 5 (China, Russia, India, U.S., France)
  • Total Carriers (40,000 – 41,649 tons): 8 (all U.S.)
  • Total Carriers (26,000 – 32,800 tons): 7 (Brazil, India, 2 Australian, Italy, Japan, Spain)
  • Total Carriers (11,486 – 21,500 tons): 10 (UK, 3 French, Egypt, 2 Japanese, South Korean, Italy, Thailand)

Summarizing the count (and there is a big difference between a 100,000+ ton Nimitz class carrier and Thailand’s 11,486-ton Chakri Naruebet):

  • U.S. 19 carriers
  • U.S. Allies: 14 carriers
  • Neutrals: 5 carriers (2 Indian, Brazil, Egypt, Thailand)
  • Potentially hostile: 2 carriers (China, Russia)
  • Total: 40 carriers

China and Russia each have one carrier of over 55,000 tons. These Kuznetsov class carriers can carry around 36-41 aircraft, while each of our ten Nimitz class carriers carries around 80-90. Our amphibious assault ships can also carry 36 or more aircraft; in reality, it is they that are the Kuznetsovs’ equivalents.
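The summary counts above can be reproduced by tallying the tonnage-band lists directly. The groupings and ownership assignments below simply restate those lists; the dictionary layout is only for illustration.

```python
# Carrier counts by tonnage band, as listed above (owner -> count).
bands = {
    "100,000+":      {"U.S.": 10},
    "42,000-59,100": {"China": 1, "Russia": 1, "India": 1, "U.S.": 1, "France": 1},
    "40,000-41,649": {"U.S.": 8},
    "26,000-32,800": {"Brazil": 1, "India": 1, "Australia": 2, "Italy": 1,
                      "Japan": 1, "Spain": 1},
    "11,486-21,500": {"UK": 1, "France": 3, "Egypt": 1, "Japan": 2,
                      "South Korea": 1, "Italy": 1, "Thailand": 1},
}

# Sum each owner's carriers across all tonnage bands.
totals = {}
for band in bands.values():
    for owner, n in band.items():
        totals[owner] = totals.get(owner, 0) + n

us = totals["U.S."]                                            # 19
hostile = totals["China"] + totals["Russia"]                   # 2
neutral = sum(totals[o] for o in ("India", "Brazil", "Egypt", "Thailand"))  # 5
allies = sum(totals.values()) - us - hostile - neutral         # 14
```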

To be commissioned in the future:

  1. 2016    U.S.                 100,000 tons (CVN-78)
  2. 2016    Egypt                 21,300 tons
  3. 2017    Japan                27,000 tons
  4. 2017    UK                     70,600 tons !!!
  5. 2018    India                  40,000 tons
  6. 2018    U.S.                   45,000 tons
  7. 2019    Russia               14,000 tons
  8. 2019    South Korea      18,800 tons
  9. 2020    UK                     70,600 tons   !!!
  10. 2020    China                 65,000 tons   !!!
  11. 2020    U.S.                 100,000 tons (CVN-79)
  12. 2021    Turkey               26,000 tons
  13. 2022    Italy                 TBD
  14. 2025    India                  65,000 tons
  15. 2025    Russia             100,000 tons !!!
  16. 2025    U.S.                 100,000 tons (CVN-80)
  17. 2028    South Korea      30,000 tons
  18. 2029    Brazil               TBD
  19. 2036    South Korea      30,000 tons
  20. TBD    India                   4 carriers at 30,000 tons
  21. TBD    Singapore        TBD
  22. TBD    U.S.                   7 carriers at 100,000 tons  (CVN 81-87)
  23. TBD    U.S.                   9 carriers at 45,693 tons (LHA 8-16)

Source: https://en.wikipedia.org/wiki/List_of_aircraft_carriers_in_service

Now, the first article states that the Chinese plan to have six carriers deployed by 2025. Only two are shown in these listings: the active Liaoning (CV-16) and the newly built CV-001A, to be commissioned in 2020. So maybe four more 65,000-ton carriers by 2025?

Needless to say, we are probably not looking at a “carrier gap” anytime in the near or mid-term future.

Some back-of-the-envelope calculations

Keying off Shawn’s previous post…if the DOD figures are accurate this means:

  1. In about two years, we have killed 45,000 insurgents from a force of around 25,000.
    1. This is around 100% losses a year
    2. This means the insurgents had to completely recruit an entire new force every year for the last two years
      1. Or maybe we just shot everyone twice.
    3. It is clear the claimed kills are way too high, or the claimed strength is too low, or a little bit of both
  2. We are getting three kills per sortie.
    1. Now, I have not done an analysis of kills per sortie in other insurgencies (and this would be useful to do), but I am pretty certain that this is unusually high.
  3. We are killing almost 1,000 insurgents (not in uniform) for every civilian we are killing.
    1. Even if I use the Airwars figure of 1,568 civilians killed, this is 29 insurgents for every civilian killed.
    2. Again, I have not done an analysis of insurgents killed per civilian killed in air operations (and this would be useful to do), but these civilian casualty rates seem unusually low.

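The first and third checks above can be reproduced directly from the cited figures. The sortie total is only implied by the "three kills per sortie" claim rather than stated here, so it is left out.

```python
# Claimed figures as cited in the list above.
claimed_kills = 45_000        # insurgents reported killed, over roughly two years
insurgent_strength = 25_000   # estimated insurgent force size
airwars_civilians = 1_568     # Airwars' civilian casualty estimate

# Implied annual attrition: ~90% of the entire force killed per year.
annual_loss_rate = claimed_kills / 2 / insurgent_strength

# Implied ratio of insurgents killed per civilian killed: ~29.
kills_per_civilian = claimed_kills / airwars_civilians
```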
It appears that some bad estimates are being made here. There is nothing wrong with making an estimate, but something is very wrong when estimates are significantly off, and some of these appear to be.

This is, of course, a problem we encountered with Iraq and Afghanistan and is discussed to some extent in my book America’s Modern Wars. It was also a problem with the Soviet Army in World War II, and is something I discuss in some depth in my Kursk book.

It would be useful to develop a set of benchmarks from past wars for insurgents killed per sortie, insurgents killed per civilian killed in air operations (and other types of operations), insurgents killed compared to force strength, and so forth.

The Military Conflict Institute (TMCI) Will Meet in October

The Military Conflict Institute (the website has not been recently updated) will hold its 58th General Working Meeting from 3-5 October 2016, hosted by the Institute for Defense Analyses in Alexandria, Virginia. It will feature discussions and presentations focused on war termination in likely areas of conflict in the near future, such as Egypt, Turkey, North Korea, Iran, Saudi Arabia, Kurdistan, and Israel, as well as presentations on related and general military topics.

TMCI was founded in 1979 by Dr. Donald S. Marshall and Trevor Dupuy. They were concerned by the inability of existing Defense Department combat models to produce results that were consistent or rooted in historical experience. The organization is a non-profit, interdisciplinary, informal group that avoids government or institutional affiliation in order to maintain an independent perspective and voice. Its objective is to advance public understanding of organized warfare in all its aspects. Most of the initial members were drawn from the ranks of operations analysts experienced in quantitative historical study and military operations research, but it has grown to include a diverse group of scholars, historians, students of war, soldiers, sailors, marines, airmen, and scientists. Member disciplines range from military science to diplomacy and philosophy.

For agenda information, contact Roger Mickelson TMCI6@aol.com. For joining instructions, contact Rosser Bobbitt rbobbitt@ida.org. Attendance is subject to approval.

Saigon, 1965

The American RAND staff and Vietnamese interviewers on the front porch of the villa on Rue Pasteur. Courtesy of Hanh Easterbrook. [Revisionist History]

Although this blog focuses on quantitative historical analysis, it is probably a good idea to consider from time to time that the analysis is being done by human beings. As objective as analysts try to be about the subjects they study, they cannot avoid interpreting what they see through the lenses of their own personal biases, experiences, and perspectives. This is not a bad thing, as each analyst can bring something new to the process and find things that others perhaps cannot.

The U.S. experience in Vietnam offers a number of examples of this. Recently, journalist and writer Malcolm Gladwell presented a podcast exploring an effort by the RAND Corporation initiated in the early 1960s to interview and assess the morale of captured Viet Cong fighters and defectors. His story centers on two RAND analysts, Leon Gouré and Konrad Kellen, and one of their Vietnamese interpreters, Mai Elliott. The podcast traces the origins and history of the project, how Gouré, Kellen, and Elliott brought very different perspectives to their work, and how they developed differing interpretations of the evidence they collected. Despite the relevance of the subject and the influence the research had on decision-making at high levels, the study ended inconclusively and ambivalently for all involved. (Elliott would go on to write an account of RAND’s activities in Southeast Asia and several other books.)

Gladwell presents an interesting human story as well as some insight into the human element of social science analysis. It is a unique take on one aspect of the Vietnam War and definitely worth the time to listen to. The podcast is part of his Revisionist History series.