
Japan’s Grand Strategy And The Japanese Air Self Defense Force (JASDF) (I)

Japanese Air Self Defense Force (JASDF) F-15 at Chitose Air Base, Japan in 2014. [Suga/Wikimedia]

In the previous post on Japan’s grand strategy, I observed its focus on the maritime domain and connectivity with the Indian Ocean. Much seaborne trade flows through this region, especially oil supplies for industrialized countries in East Asia, including Japan and China. These sea lines of communication (SLOC) extend far beyond Japan’s sovereign territory.

I also noted that the Japanese home islands require attention as well, as challenges to airspace sovereignty are ever present, even as they ebb and flow with the geopolitical situation of the times (see statistics through 2017).

To the student of military might, it may seem strange for a nation to project power in the maritime domain but to have a more reserved attitude towards projecting power in the air domain. After all, it has been well demonstrated and accepted that air power can be highly effective in the maritime domain, as evidenced by:

The Royal Navy launched the first all-aircraft ship-to-ship naval attack in history, employing 21 obsolete Fairey Swordfish biplane torpedo bombers from the aircraft carrier HMS Illustrious in the Mediterranean Sea. The attack struck the battle fleet of the Regia Marina at anchor in the harbor of Taranto. “Taranto, and the night of 11–12 November 1940, should be remembered for ever as having shown once and for all that in the Fleet Air Arm the Navy has its most devastating weapon.” Admiral Andrew Cunningham, British Royal Navy

The infamous attack on the U.S. Navy Pacific Fleet at anchor on 7 December 1941 involved the notable use of naval aviation by the Imperial Japanese Navy’s 1st Air Fleet (Kidō Butai), “[A] revolutionary and potentially formidable instrument of sea power.” Gordon Prange.

The Royal Navy battleship HMS Prince of Wales and battlecruiser HMS Repulse were sunk by land-based bombers and torpedo bombers of the Imperial Japanese Navy off the coast of Malaya on 10 December 1941.

This ability to rapidly project power over great distances from the air contributed to the general state of surprise that the Allies found themselves in (summed up nicely here):

The technological superiority of Japanese aviation, the bombing of Pearl Harbor, the sinking of HMS Prince of Wales and Repulse, and Japan’s rapid advance and dominance of the air shocked everyone. Japan was not only technologically superior in the air, its ability to support, replace, and move air assets was far superior to the Americans and the British. General Percival, the British commander in Malaya, was surprised that the Japanese were able to bomb Singapore in the first days of the war despite the fact that their nearest airbase was seven hundred miles away. He would soon profess his amazement at the performance of Japanese aircraft and their ability to launch coordinated attacks on targets all over Malaya.

Even after aerial defeats at Midway and the Marianas, and after the devastating strategic bombing campaign by the U.S. Army Air Forces (USAAF), the Japanese were able to field effective air units, such as the 343rd Kōkūtai (Naval Air Group), with veteran pilots, led by experienced commanders such as Minoru Genda (more about him later), and flying an excellent fighter aircraft: the N1K-J Shiden Kai (“George”). In these limited situations, the balance of aerial combat was not so lopsided as the headline numbers suggest (here is an excellent thesis on the complexity in these ratios). These air defense efforts, however, were too little, too late for the Japanese, but they illustrate capabilities which would re-emerge after the war, especially in the military alliance and rearmament with the U.S.

So, after having innovated the use of air power in the 1930s and clearly demonstrated it to the world in the 1940s, why is today’s JASDF relatively circumspect, especially relative to the Japanese Maritime Self-Defense Force (JMSDF), as Japan gradually moves into a more assertive foreign policy (as discussed previously)?

Some Useful Resources for Post-World War II U.S. Army Doctrine Development

This list originated in response to a Twitter query discussing the history of post-World War II U.S. Army doctrine development. It is hardly exhaustive, but it does include titles and resources that may not be widely known.

The first two are books:

Benjamin Jensen, Forging the Sword: Doctrinal Change in the U.S. Army (Stanford University Press, 2016)

Jensen focused on the institutional processes shaping the Army’s continual post-World War II efforts to reform its doctrine in response to changes in the character of modern warfare.

Shimon Naveh, In Pursuit of Military Excellence: The Evolution of Operational Theory (Routledge, 1997)

In an excellent overview of the evolution of operational thought through the 20th century, Naveh devoted two chapters to the Army’s transition to Active Defense in the 1970s and then to AirLand Battle in the 1980s.

There are several interesting monographs that are available online:

Andrew J. Bacevich, The Pentomic Era: The U.S. Army Between Korea and Vietnam (NDU Press, 1986)

Paul Herbert, Deciding What Has to Be Done: General William E. DePuy and the 1976 Edition of FM 100-5, Operations (Combat Studies Institute, 1988)

John Romjue, From Active Defense to AirLand Battle: the Development of Army Doctrine 1973-1982 (TRADOC, 1984)

John Romjue, The Army of Excellence: The Development of the 1980s Army (TRADOC, 1997)

John Romjue, American Army Doctrine for the Post-Cold War (TRADOC, 1997)

A really useful place to browse is the Army Command and General Staff College’s online Skelton Combined Arms Research Library (CARL). It is loaded with old manuals and student papers and theses addressing a wide variety of topics related to the nuts and bolts of doctrine.

Another good place to browse is the Defense Technical Information Center (DTIC), a huge digital library of government-sponsored research. I recommend searches on publications by the Army’s defunct operations research organizations: the Operations Research Office (ORO), the Research Analysis Corporation (RAC), the Special Operations Research Office (SORO), and the Combat Operations Research Group (CORG), particularly a series of CORG studies of Army force structure from squads to theater HQs by Virgil Ney. There is much more to find in DTIC.

Two other excellent places to browse for material on doctrine are the Combat Studies Institute Press publications on CARL and the U.S. Army Center of Military History’s publications.

Some journals with useful research include the Journal of Cold War Studies and the Journal of Strategic Studies.

If anyone else has suggestions, let me know.

Engaging the Phalanx (part 7 of 7)

Hopefully this is my last post on the subject (but I suspect not, as I expect a public response from the three TRADOC authors). This is in response to the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (see Part 1, Part 2, Part 3, Part 4, Part 5, Part 6). The issue here is the “Base of Sand” problem, which is what the original blog post that “inspired” their article was about:

Wargaming Multi-Domain Battle: The Base Of Sand Problem

While the first paragraph of their article addresses this blog post, and they reference Paul Davis’ 1992 Base of Sand paper in their footnotes (but not John Stockfisch’s paper, which is an equally valid criticism), they do not discuss the “Base of Sand” problem further. They do not actually state whether it is or is not a problem. I gather from this notable omission that they do in fact understand that it is a problem, but, being employees of TRADOC, they are limited in what they can publicly say. I am not.

I do address the “Base of Sand” problem in my book War by Numbers, Chapter 18. It has also been addressed in a few other posts on this blog. We are critics because we do not see significant improvement in the industry. In some cases, we are seeing regression.

In the end, I think the best solution for the DOD modeling and simulation community is not to “circle the wagons” and defend what they are currently doing, but instead to acknowledge the limitations and problems they have and undertake a corrective action program. This corrective action program would involve: 1) properly addressing how to measure and quantify certain aspects of combat (for example: breakpoints) and 2) validating these aspects, and the combat models they are part of, using real-world combat data. This would be an iterative process: develop the model, test it, develop it further, and test it again. This moves us forward. It is a more valuable approach than just “circling the wagons.” As these models and simulations are being used to analyze processes that may or may not make us fight better, and may or may not save American service members’ lives, it is important enough to do right. That is what we need to be focused on, not squabbling over a blog post (or seven).

Has The Army Given Up On Counterinsurgency Research, Again?

Mind-the-Gap

[In light of the U.S. Army’s recent publication of a history of its involvement in Iraq from 2003 to 2011, it may be relevant to re-post this piece from 29 June 2016.]

As Chris Lawrence mentioned yesterday, retired Brigadier General John Hanley’s review of America’s Modern Wars in the current edition of Military Review concluded by pointing out the importance of a solid empirical basis for staff planning support for reliable military decision-making. This notion seems so obvious as to be a truism, but in reality, the U.S. Army has demonstrated no serious interest in remedying the weaknesses or gaps in the base of knowledge underpinning its basic concepts and doctrine.

In 2012, Major James A. Zanella published a monograph for the School of Advanced Military Studies of the U.S. Army Command and General Staff College (graduates of which are known informally as “Jedi Knights”), which examined problems the Army has had with estimating force requirements, particularly in recent stability and counterinsurgency efforts.

Historically, the United States military has had difficulty articulating and justifying force requirements to civilian decision makers. Since at least 1975, governmental officials and civilian analysts have consistently criticized the military for inadequate planning and execution. Most recently, the wars in Afghanistan and Iraq reinvigorated the debate over the proper identification of force requirements…Because Army planners have failed numerous times to provide force estimates acceptable to the President, the question arises, why are the planning methods inadequate and why have they not been improved?[1]

Zanella surveyed the various available Army planning tools and methodologies for determining force requirements, but found them all either inappropriate, only marginally applicable, or unsupported by any real-world data. He concluded:

Considering the limitations of Army force planning methods, it is fair to conclude that Army force estimates have failed to persuade civilian decision-makers because the advice is not supported by a consistent valid method for estimating the force requirements… What is clear is that the current methods have utility when dealing with military situations that mirror the conditions represented by each model. In the contemporary military operating environment, the doctrinal models no longer fit.[2]

Zanella did identify the existence of recent, relevant empirical studies on manpower and counterinsurgency. He noted that “the existing doctrine on force requirements does not benefit from recent research” but suggested optimistically that it could provide “the Army with new tools to reinvigorate the discussion of troops-to-task calculations.”[3] Even before Zanella published his monograph, however, the Defense Department began removing any detailed reference or discussion about force requirements in counterinsurgency from Army and Joint doctrinal publications.

As Zanella discussed, there is a body of recent empirical research on manpower and counterinsurgency that contains a variety of valid and useful insights, but as I recently discussed, it does not yet offer definitive conclusions. Much more research and analysis is needed before the conclusions can be counted on as a valid and justifiably reliable basis for life and death decision-making. Yet the last of these government-sponsored studies was completed in 2010. Neither the Army nor any other organization in the U.S. government has funded any follow-on work on this subject, and none appears forthcoming. This boom-or-bust pattern is nothing new, but the failure to do anything about it is becoming less and less understandable.

NOTES

[1] Major James A. Zanella, “Combat Power Analysis is Combat Power Density” (Ft. Leavenworth, KS: School of Advanced Military Studies, U.S. Army Command and General Staff College, 2012), pp. 1-2.

[2] Ibid., p. 50.

[3] Ibid., p. 47.

Validating Attrition

Continuing to comment on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 3 of 7; see Part 1, Part 2)

On the first page (page 28) in the third column they make the statement that:

Models of complex systems, especially those that incorporate human behavior, such as that demonstrated in combat, do not often lend themselves to empirical validation of output measures, such as attrition.

Really? Why not? In fact, isn’t that exactly the model you should be validating?

More to the point, people have validated attrition models. Let me list a few cases (this list is not exhaustive):

1. Done by Center for Army Analysis (CAA) for the CEM (Concepts Evaluation Model) using Ardennes Campaign Simulation Study (ARCAS) data. Take a look at this study done for Stochastic CEM (STOCEM): https://apps.dtic.mil/dtic/tr/fulltext/u2/a489349.pdf

2. Done in 2005 by The Dupuy Institute for six different casualty estimation methodologies as part of Casualty Estimation Methodologies Studies. This was work done for the Army Medical Department and funded by DUSA (OR). It is listed here as report CE-1: http://www.dupuyinstitute.org/tdipub3.htm

3. Done in 2006 by The Dupuy Institute for the TNDM (Tactical Numerical Deterministic Model) using Corps and Division-level data. This effort was funded by Boeing, not the U.S. government. This is discussed in depth in Chapter 19 of my book War by Numbers (pages 299-324) where we show 20 charts from such an effort. Let me show you one from page 315:

 

So, this is something that multiple people have done on multiple occasions. It was not so difficult that The Dupuy Institute could not do it. TRADOC is an organization with around 38,000 military and civilian employees, plus who knows how many contractors. I think this is something they could also do if they had the desire.

 

Validation

Continuing to comment on the article in the December 2018 issue of the Phalanx by Jonathan Alt, Christopher Morey and Larry Larimer (this is part 2 of 7; see part 1 here).

On the first page (page 28), at the top of the third column, they make the rather declarative statement that:

The combat simulations used by military operations research and analysis agencies adhere to strict standards established by the DoD regarding verification, validation and accreditation (Department of Defense, 2009).

Now, I have not reviewed what has been done on verification, validation and accreditation since 2009, but I did do a few fairly exhaustive reviews before then. One such review is written up in depth in The International TNDM Newsletter. It is Volume 1, No. 4 (February 1997). You can find it here:

http://www.dupuyinstitute.org/tdipub4.htm

The newsletter includes a letter dated 21 January 1997 from the Scientific Advisor to the CG (Commanding General) at TRADOC (Training and Doctrine Command). This is the same organization that the three gentlemen who wrote the article in the Phalanx work for. The Scientific Advisor sent a letter out to multiple commands to try to flag the issue of validation (the letter is on page 6 of the newsletter). My understanding is that he received few responses (I saw only one; it was from Leavenworth). After that, I gather no further action was taken. This was a while back, so maybe everything has changed, as I gather they are claiming with that declarative statement. I doubt it.

The issue to me is validation. Verification is often done. Actual validations are a lot rarer. In 1997, this was my list of combat models in the industry that had been validated (the list is on page 7 of the newsletter):

1. Atlas (using 1940 Campaign in the West)

2. Vector (using undocumented turning runs)

3. QJM (by HERO using WWII and Middle-East data)

4. CEM (by CAA using Ardennes Data Base)

5. SIMNET/JANUS (by IDA using 73 Easting data)

 

Now, in 2005 we did a report on Casualty Estimation Methodologies (it is report CE-1, listed here: http://www.dupuyinstitute.org/tdipub3.htm). We reviewed the listing of validation efforts, and from 1997 to 2005…nothing new had been done (except for a battalion-level validation we had done for the TNDM). So am I now to believe that since 2009 they have actively and aggressively pursued validation? I doubt it, especially as most of this period was one of severely declining budgets. One of the arguments against validation made in meetings I attended in 1987 was that they did not have the time or budget to spend on it. The budget during the Cold War was luxurious by today’s standards.

If there have been meaningful validations done, I would love to see them. The proof is in the pudding…send me the validation reports that will resolve all doubts.

Engaging the Phalanx

The Military Operations Research Society (MORS) publishes a periodical journal called the Phalanx. In the December 2018 issue was an article that referenced one of our blog posts. This took us by surprise. We only found out about it thanks to one of the readers of this blog. We are not members of MORS, and the article is paywalled and cannot be easily accessed by non-members.

It is titled “Perspectives on Combat Modeling” (page 28) and is written by Jonathan K. Alt, U.S. Army TRADOC Analysis Center, Monterey, CA; Christopher Morey, PhD, Training and Doctrine Command Analysis Center, Ft. Leavenworth, Kansas; and Larry Larimer, Training and Doctrine Command Analysis Center, White Sands, New Mexico. I am not familiar with any of these three gentlemen.

The blog post that appears to be generating this article is this one:

Wargaming Multi-Domain Battle: The Base Of Sand Problem

Simply by coincidence, Shawn Woodford recently re-posted this in January. It was originally published on 10 April 2017 and was written by Shawn.

The opening two sentences of the article in the Phalanx read:

Periodically, within the Department of Defense (DoD) analytic community, questions will arise regarding the validity of the combat models and simulations used to support analysis. Many attempts (sic) to resurrect the argument that models, simulations, and wargames “are built on the thin foundation of empirical knowledge about the phenomenon of combat.” (Woodford, 2017).

It is nice to be acknowledged, although in this case it appears that we are being acknowledged because they disagree with what we are saying.

Probably the word that gets my attention is “resurrect.” It is an interesting word choice, implying that this is an old argument that has somehow or other been put to bed. Granted, it is an old argument. On the other hand, it has not been put to bed. If a problem has been identified and not corrected, then it is still a problem. Age has nothing to do with it.

On the other hand, maybe they are using the word “resurrect” because recent developments in modeling and validation have changed the environment significantly enough that these arguments no longer apply. If so, I would be interested in what those changes are. The last time I checked, the modeling and simulation industry was using many of the same models it had used for decades. In some cases, it was going back to using simpler hex-based games for its modeling and wargaming efforts. We have blogged a couple of times about these efforts. So, in the world of modeling, unless there have been earthshaking and universal changes made in the last five years that have completely revamped the landscape…then the decades-old problems still apply to the decades-old models and simulations.

More to come (this is the first of at least 7 posts on this subject).

Afghan Security Forces Deaths Top 45,000 Since 2014

The President of Afghanistan, Ashraf Ghani, speaking with CNN’s Fareed Zakaria at the World Economic Forum in Davos, Switzerland, 25 January 2019. [Office of the President, Islamic Republic of Afghanistan]

Last Friday, at the World Economic Forum in Davos, Switzerland, Afghan President Ashraf Ghani admitted that his country’s security forces had suffered over 45,000 fatalities since he took office in September 2014. This far exceeds the figure of 28,000 killed since 2015 that Ghani had previously announced in November 2018. Ghani’s cryptic comment in Davos did not indicate how the newly revealed total relates to previously released figures: whether it reflects new accounting, a sharp increase in recent casualties, or simply more forthrightness.

This revised figure casts significant doubt on the validity of analysis based on the previous reporting. Correcting that analysis will be difficult. Since May 2017, at the request of the Afghan government, the U.S. military has treated security forces attrition and loss data as classified and has withheld it from public release.

If Ghani’s figure is, in fact, accurate, then it reinforces the observation that the course of the conflict is tilting increasingly against the Afghan government.

 

What Multi-Domain Operations Wargames Are You Playing? [Updated]

Source: David A. Shlapak and Michael Johnson. Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics. Santa Monica, CA: RAND Corporation, 2016.


[UPDATE] We had several readers recommend games they have used or would be suitable for simulating Multi-Domain Battle and Operations (MDB/MDO) concepts. These include several classic campaign-level board wargames:

The Next War (SPI, 1976)

NATO: The Next War in Europe (Victory Games, 1983)

For tactical level combat, there is Steel Panthers: Main Battle Tank (SSI/Shrapnel Games, 1996- )

There were also a couple of naval/air oriented games:

Asian Fleet (Kokusai-Tsushin Co., Ltd. (国際通信社) 2007, 2010)

Command: Modern Air Naval Operations (Matrix Games, 2014)

Are there any others folks are using out there?


A Mystics & Statistics reader wants to know what wargames are being used to simulate and explore Multi-Domain Battle and Operations (MDB/MDO) concepts.

There is a lot of MDB/MDO wargaming going on at all levels in the U.S. Department of Defense. Much of it appears to use existing models, simulations, and wargames, such as the U.S. Army Center for Army Analysis’s unclassified Wargaming Analysis Model (C-WAM).

Chris Lawrence recently looked at C-WAM and found that it uses a lot of traditional board wargaming elements, including methodologies for determining combat results, casualties, and breakpoints that have been found unable to replicate real-world outcomes (aka “The Base of Sand” problem).

C-WAM 1

C-WAM 2

C-WAM 3

C-WAM 4 (Breakpoints)

There is also the wargame used by RAND to look at possible scenarios for a potential Russian invasion of the Baltic States.

Wargaming the Defense of the Baltics

Wargaming at RAND

What other wargames, models, and simulations are there being used out there? Are there any commercial wargames incorporating MDB/MDO elements into their gameplay? What methodologies are being used to portray MDB/MDO effects?

U.S. Army Releases New Iraq War History

On Thursday, the U.S. Army released a long-awaited history of its operational combat experience in Iraq from 2003 to 2011. The study, titled The U.S. Army in the Iraq War – Volume 1: Invasion – Insurgency – Civil War, 2003-2006 and The U.S. Army in the Iraq War – Volume 2: Surge and Withdrawal, 2007-2011, was published under the auspices of the U.S. Army War College’s Strategic Studies Institute.

This reflects its unconventional origins. Under normal circumstances, such work would be undertaken by either the U.S. Army Combat Studies Institute (CSI), which is charged with writing quick-turnaround “instant histories,” or the U.S. Army Center of Military History (CMH), which writes more deeply researched “official history,” years or decades after the fact.[1] Instead, these volumes were directly commissioned by then-Chief of Staff of the Army General Raymond Odierno, who created an Iraq Study Group in 2013 to research and write them. According to Odierno, his intent was “to capture key lessons, insights, and innovations from our more than 8 years of conflict in that country… [I]t was time to conduct an initial examination of the Army’s experiences in the post-9/11 wars, to determine their implications for our future operations, strategy, doctrine, force structure, and institutions.”

CSI had already started writing contemporary histories of the conflict, publishing On Point: The United States Army in Operation IRAQI FREEDOM (2004) and On Point II: Transition to the New Campaign (2008), which covered the period from 2003 to January 2005. A projected third volume was advertised, but never published.

Although the Iraq Study Group completed its work in June 2016 and the first volume of the history was scheduled for publication that October, its release was delayed due to concerns within the Army historical community regarding its perspective and controversial conclusions. After external reviewers deemed the study fair and recommended its publication, claims were lodged, once its existence was made public last autumn, that the Army was suppressing it to avoid embarrassment. Making clear that the study was not an official history publication, current Army Chief of Staff General Mark Milley added his own foreword to Odierno’s and publicly released the two volumes yesterday.

NOTES

[1] For a discussion of the roles and missions of CSI and CMH with regard to history, see W. Shane Story, “Transformation or Troop Strength? Early Accounts of the Invasion of Iraq,” Army History, Winter 2006; Richard W. Stewart, “‘Instant’ History and History: A Hierarchy of Needs,” Army History, Winter 2006; Jeffrey J. Clarke, “The Care and Feeding of Contemporary History,” Army History, Winter 2006; and Gregory Fontenot, “The U.S. Army and Contemporary Military History,” Army History, Spring 2008.