
Japan’s Grand Strategy and Military Forces (III)

[Johns Hopkins Applied Physics Laboratory]

In my previous post, I looked at the basic strategic missions of the Japan Maritime Self-Defense Force (JMSDF): defending Japan from maritime invasion and securing the sea lines of communication (SLOC). This post will examine the basis for the JMSDF’s approach to those tasks.

In 2011, JMSDF Vice Admiral (Ret.) Yoji Koda published an excellent article in the Naval War College Review entitled “A New Carrier Race?” Two passages from it are particularly relevant and illuminating:

In 1952, … the Japan Maritime Guard (JMG) was established as a rudimentary defense organization for the nation. The leaders of the JMG were determined that the organization would be a navy, not a reinforced coast guard. Most were combat-experienced officers (captains and below) of the former Imperial Japanese Navy, and they had clear understanding of the difference between a coast guard–type law-enforcement force and a navy. Two years later, the JMG was transformed into the JMSDF, and with leaders whose dream to build a force that had a true naval function was stronger than ever. However, they also knew the difficulty of rebuilding a real navy, in light of strict constraints imposed by the new, postwar constitution. Nonetheless, the JMSDF has built its forces and trained its sailors vigorously, with this goal in view, and it is today one of the world’s truly capable maritime forces in both quality and size.

This continuity with the World War II-era Imperial Japanese Navy (IJN) is evident in several practices. The JMSDF generally re-uses IJN names for new vessels, and it retains the IJN’s naval ensign, the Kyokujitsu-ki or “Rising Sun” flag. This flag is seen by some in South Korea and other countries as symbolic of Japan’s wartime militarism. In October 2018, the JMSDF declined an invitation to attend a naval review held by the Republic of Korea Navy (ROKN) at Jeju island, after the hosts requested that only national flags be flown at the event. Disagreements of this type may have a material impact on the ability of the JMSDF and the ROKN, both allies of the United States, to operate together effectively.

Koda continued:

Since the founding of the Japan Self-Defense Force (JSDF) and within it the JMSDF, in 1954…the bases of Japan’s national security and defense are the capability of the JSDF and the Japanese-U.S. alliance… Thus the operational concept of the JSDF with respect to the U.S. armed forces has been one of complementary mission-sharing, in which U.S. forces concentrate on offensive operations, while the JSDF maximizes its capability for defensive operations. In other words, the two forces form what is known as a “spear and shield” relationship… [T]he JMSDF ensures that Japan can receive American reinforcements from across the Pacific Ocean, guarantees the safety of U.S. naval forces operating around Japan, and enables U.S. carrier strike groups (CSGs) to concentrate on strike operations against enemy naval forces and land targets…[so] the JMSDF has set antisubmarine warfare as its main task…ASW was made the main pillar of JMSDF missions. Even in the present security environment, twenty years after the end of the Cold War and the threat of invasion from the Soviet Union, two factors are unchanged—the Japanese-U.S. alliance and Japan’s dependence on imported natural resources. Therefore the protection of SLOCs has continued to be a main mission of the JMSDF.

It is difficult to overstate the degree to which the USN and JMSDF are integrated. The U.S. Navy’s Seventh Fleet is headquartered in Yokosuka, Japan, where the U.S.S. Ronald Reagan, a Nimitz-class supercarrier, is stationed. Historically, this position was filled by the U.S.S. George Washington, which is currently back in Virginia undergoing refueling and overhaul. According to Stars and Stripes, she may return to Japan with a new air wing incorporating the MQ-25A Stingray aerial refueling drone.

According to the Center for Naval Analyses (CNA), the USN has the following ships based in Japan:

  • Yokosuka (south of Tokyo, in eastern Japan)
    • One CVN (nuclear aircraft carrier), U.S.S. Ronald Reagan
    • One AGC (amphibious command ship), U.S.S. Blue Ridge
    • Three CG (guided missile cruisers)
    • Seven DDG (guided missile destroyers)
  • Sasebo (north of Nagasaki, on the southern island of Kyushu)
    • One LHD (amphibious assault ship, multi-purpose), U.S.S. Bonhomme Richard
    • One LPD (amphibious transport dock), U.S.S. Green Bay
    • Two LSD (dock landing ship)
    • Four MCM (mine counter measure ship)

One example of this close integration is the JS Maya, a guided missile destroyer (DDG) launched on 30 July 2018. The ship is currently fitting out and is expected to be commissioned in 2020. A notable feature is the Cooperative Engagement Capability (CEC) (see graphic above). CEC is a “revolutionary approach to air defense,” according to the Johns Hopkins Applied Physics Laboratory (which is involved in its development); “it allows combat systems to share unfiltered sensor measurements data associated with tracks with rapid timing and precision to enable the [USN-JMSDF] battlegroup units to operate as one.”

Zhang Junshe, a senior research fellow at the Chinese People’s Liberation Army’s Naval Military Studies Research Institute, expressed concern in China’s Global Times about this capability for “potentially targeting China and threatening other countries… CEC will strengthen intelligence data sharing with the US…strengthen their [US and Japan] military alliance. From the US perspective, it can better control Japan… ‘Once absolute security is realized by Japan and the US, they could attack other countries without scruples, which will certainly destabilize other regions.’”

Japan’s Grand Strategy and Military Forces (II)

Japanese Maritime Self-Defense Force (JMSDF) ships and the U.S.S. Ronald Reagan Carrier Strike Group conduct Annual Exercise 2016. [U.S. Navy]

In my first post on Japan’s grand strategy, I examined its “free and open” Indo-Pacific policy and briefly reviewed its armed forces—nominally “self-defense forces (SDF)”—as well as the legal reasons for this euphemism, and the Japanese government’s plans to clarify this constitutional conundrum.

The next several posts in this series will provide a general overview of the Japanese Maritime Self-Defense Force (JMSDF): why this branch is considered primary (or dominant), how it came to be, its current missions, defense concepts, and capabilities, how its forces are deployed, and a look ahead at options under consideration.

According to an excellent article in the Naval War College Review by Toshi Yoshihara, “the Japanese often describe their key national characteristic in nautical terms, with the familiar notion that ‘Japan is a small island nation lacking resource endowments and is thus highly dependent upon seaborne commerce for its well-being.’”

A few key facts, according to Jane’s Defense: Sea Module:

  • Japan has the world’s seventh-largest Exclusive Economic Zone (EEZ).
  • Japan operates a large commercial fishing fleet of about 200,000 vessels.
  • 90% of Japan’s oil is shipped from the Middle East.
  • 60% of Japan’s food is imported by sea.

The JMSDF is therefore tasked with the fundamental naval missions of defending Japan from maritime invasion and securing the sea lines of communication (SLOC). A recent article in the Japan News spelled out why SLOC protection is vital for Japan:

[T]he South China Sea is a key sea-lane for Japan. If it became necessary to take a detour around the South China Sea, the additional time and fuel costs are estimated to be 1½ days and $120,000 for travel via the Sunda Strait, and three days and $240,000 for travel via the Lombok Strait. Both of these straits can be perilous, with strong tidal currents, sunken ships and shoals. If either were to see a large increase in marine traffic, chaos is predicted to ensue.
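The per-voyage penalties quoted above compound quickly with traffic volume. A minimal sketch of that arithmetic in Python; the yearly transit count is a purely hypothetical assumption, and only the per-voyage figures come from the article:

```python
# Per-voyage detour penalties quoted in the Japan News article (days, USD).
DETOURS = {
    "Sunda Strait": {"extra_days": 1.5, "extra_cost_usd": 120_000},
    "Lombok Strait": {"extra_days": 3.0, "extra_cost_usd": 240_000},
}

def annual_detour_cost(route: str, transits_per_year: int) -> dict:
    """Scale the per-voyage penalty by a (hypothetical) yearly transit count."""
    d = DETOURS[route]
    return {
        "extra_ship_days": d["extra_days"] * transits_per_year,
        "extra_cost_usd": d["extra_cost_usd"] * transits_per_year,
    }

# Hypothetical example: 1,000 Japan-bound transits rerouted per year.
print(annual_detour_cost("Lombok Strait", 1_000))
# → {'extra_ship_days': 3000.0, 'extra_cost_usd': 240000000}
```

Even at a modest assumed traffic level, the aggregate penalty runs to thousands of ship-days and hundreds of millions of dollars per year, which is the concern the article is pointing at.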

We can see this concern clearly in the recent JMSDF exercise deployment through the South China Sea, the straits of Sunda and Malacca, and onwards to India.

[The Japan News (Yomiuri Shimbun)]

For Indo Southeast Asia Deployment 2018 (ISEAD18), from 26 August to 30 October 2018, the JMSDF vessels JS Kaga (DDH 184), JS Inazuma (DD 105), and JS Suzutsuki (DD 117) stopped at Subic Bay, Philippines; Jakarta, Indonesia; Colombo, Sri Lanka; Visakhapatnam, India; and Changi, Singapore. The deployment included various naval exercises with the port-call countries, as well as with the British and U.S. navies. It also yielded important agreements, such as the maritime surveillance pact between Japan and India to share information on Chinese ship locations.

Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he believed technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in a posthumous 1997 publication,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Based on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties:

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has more often been characterized by discontinuity between weapons and tactical systems than by effective continuity.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself, without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and material. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

The Great 3-1 Rule Debate

[This piece was originally posted on 13 July 2016.]

Trevor Dupuy’s article cited in my previous post, “Combat Data and the 3:1 Rule,” was the final salvo in a roaring, multi-year debate between two highly regarded members of the U.S. strategic and security studies academic communities, political scientist John Mearsheimer and military analyst/polymath Joshua Epstein. Carried out primarily in the pages of the academic journal International Security, Epstein and Mearsheimer debated the validity of the 3-1 rule and other analytical models with respect to the NATO/Warsaw Pact military balance in Europe in the 1980s. Epstein cited Dupuy’s empirical research in support of his criticism of Mearsheimer’s reliance on the 3-1 rule. In turn, Mearsheimer questioned Dupuy’s data and conclusions to refute Epstein. Dupuy’s article defended his research and pointed out the errors in Mearsheimer’s assertions. With the publication of Dupuy’s rebuttal, the International Security editors called a time out on the debate thread.
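The rule at the center of the exchange is simple to state: an attacker needs roughly 3:1 local superiority for the attack to succeed. A minimal sketch of the rule as a naive decision heuristic; note that the threshold and, above all, the choice of what to count as “force” are exactly what the debate contested:

```python
def three_to_one_verdict(attacker: float, defender: float, threshold: float = 3.0) -> str:
    """Naive 3:1 rule: the attack is 'favored' only at or above the threshold ratio.

    What counts as force (manpower, firepower scores, division equivalents) and
    whether any fixed threshold is analytically meaningful were the heart of the
    Epstein/Mearsheimer exchange; this function deliberately ignores all of that.
    """
    ratio = attacker / defender
    verdict = "attack favored" if ratio >= threshold else "attack not favored"
    return f"{ratio:.2f}:1 -> {verdict}"

print(three_to_one_verdict(90, 30))  # 3.00:1 -> attack favored
print(three_to_one_verdict(50, 25))  # 2.00:1 -> attack not favored
```

The triviality of the sketch is the point: applied as a static screen, the rule collapses doctrine, terrain, posture, and strategy into a single ratio, which is why Epstein pressed for dynamic analysis instead.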

The Epstein/Mearsheimer debate was itself part of a larger political debate over U.S. policy toward the Soviet Union during the administration of Ronald Reagan. This interdisciplinary argument, which has since become legendary in security and strategic studies circles, drew in some of the biggest names in these fields, including Eliot Cohen, Barry Posen, the late Samuel Huntington, and Stephen Biddle. As Jeffery Friedman observed,

These debates played a prominent role in the “renaissance of security studies” because they brought together scholars with different theoretical, methodological, and professional backgrounds to push forward a cohesive line of research that had clear implications for the conduct of contemporary defense policy. Just as importantly, the debate forced scholars to engage broader, fundamental issues. Is “military power” something that can be studied using static measures like force ratios, or does it require a more dynamic analysis? How should analysts evaluate the role of doctrine, or politics, or military strategy in determining the appropriate “balance”? What role should formal modeling play in formulating defense policy? What is the place for empirical analysis, and what are the strengths and limitations of existing data?[1]

It is well worth the time to revisit the contributions to the 1980s debate. I have included a bibliography below that is not exhaustive, but is a place to start. The collapse of the Soviet Union and the end of the Cold War diminished the intensity of the debates, which simmered through the 1990s and then were obscured during the counterterrorism/counterinsurgency conflicts of the post-9/11 era. It is possible that the challenges posed by China and Russia, amidst the ongoing “hybrid” conflict in Syria and Iraq, may revive interest in interrogating the bases of military analyses in the U.S. and the West. It is a discussion that is long overdue and potentially quite illuminating.

NOTES

[1] Jeffery A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies 20 (2011)

BIBLIOGRAPHY

(Note: Some of these are behind paywalls, but some are available in PDF format. Mearsheimer has made many of his publications freely available on his website.)

John J. Mearsheimer, “Why the Soviets Can’t Win Quickly in Central Europe,” International Security 7, no. 1 (Summer 1982)

Samuel P. Huntington, “Conventional Deterrence and Conventional Retaliation in Europe,” International Security 8, no. 3 (Winter 1983/84)

Joshua M. Epstein, Strategy and Force Planning (Washington, DC: Brookings, 1987)

Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security 12, no. 4 (Spring 1988)

John J. Mearsheimer, “Numbers, Strategy, and the European Balance,” International Security 12, no. 4 (Spring 1988)

Stephen Biddle, “The European Conventional Balance,” Survival 30, no. 2 (March/April 1988)

Eliot A. Cohen, “Toward Better Net Assessment: Rethinking the European Conventional Balance,” International Security 13, no. 1 (Summer 1988)

Joshua M. Epstein, “The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security Studies,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, “Assessing the Conventional Balance,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, Barry R. Posen, and Eliot A. Cohen, “Correspondence: Reassessing Net Assessment,” International Security 13, no. 4 (Spring 1989)

Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” International Security 14, no. 1 (Summer 1989)

Stephen Biddle et al., Defense at Low Force Levels (Alexandria, VA: Institute for Defense Analyses, 1991)

Force Ratios in Conventional Combat

American soldiers of the 117th Infantry Regiment, Tennessee National Guard, part of the 30th Infantry Division, move past a destroyed American M5A1 “Stuart” tank on their march to recapture the town of St. Vith during the Battle of the Bulge, January 1945. [Wikipedia]
[This piece was originally posted on 16 May 2017.]

This post is a partial response to questions from one of our readers (Stilzkin). On the subject of force ratios in conventional combat, I know of no detailed discussion of the phenomenon published to date. It was, however, clearly addressed by Clausewitz. For example:

At Leuthen Frederick the Great, with about 30,000 men, defeated 80,000 Austrians; at Rossbach he defeated 50,000 allies with 25,000 men. These however are the only examples of victories over an opponent two or even nearly three times as strong. Charles XII at the battle of Narva is not in the same category. The Russian at that time could hardly be considered as Europeans; moreover, we know too little about the main features of that battle. Bonaparte commanded 120,000 men at Dresden against 220,000—not quite half. At Kolin, Frederick the Great’s 30,000 men could not defeat 50,000 Austrians; similarly, victory eluded Bonaparte at the desperate battle of Leipzig, though with his 160,000 men against 280,000, his opponent was far from being twice as strong.

These examples may show that in modern Europe even the most talented general will find it very difficult to defeat an opponent twice his strength. When we observe that the skill of the greatest commanders may be counterbalanced by a two-to-one ratio in the fighting forces, we cannot doubt that superiority in numbers (it does not have to be more than double) will suffice to assure victory, however adverse the other circumstances.

and:

If we thus strip the engagement of all the variables arising from its purpose and circumstance, and disregard the fighting value of the troops involved (which is a given quantity), we are left with the bare concept of the engagement, a shapeless battle in which the only distinguishing factor is the number of troops on either side.

These numbers, therefore, will determine victory. It is, of course, evident from the mass of abstractions I have made to reach this point that superiority of numbers in a given engagement is only one of the factors that determines victory. Superior numbers, far from contributing everything, or even a substantial part, to victory, may actually be contributing very little, depending on the circumstances.

But superiority varies in degree. It can be two to one, or three or four to one, and so on; it can obviously reach the point where it is overwhelming.

In this sense superiority of numbers admittedly is the most important factor in the outcome of an engagement, as long as it is great enough to counterbalance all other contributing circumstance. It thus follows that as many troops as possible should be brought into the engagement at the decisive point.

And, in relation to making a combat model:

Numerical superiority was a material factor. It was chosen from all elements that make up victory because, by using combinations of time and space, it could be fitted into a mathematical system of laws. It was thought that all other factors could be ignored if they were assumed to be equal on both sides and thus cancelled one another out. That might have been acceptable as a temporary device for the study of the characteristics of this single factor; but to make the device permanent, to accept superiority of numbers as the one and only rule, and to reduce the whole secret of the art of war to a formula of numerical superiority at a certain time and a certain place was an oversimplification that would not have stood up for a moment against the realities of life.

Force ratios were discussed in various versions of FM 105-5 Maneuver Control, but as far as I can tell, this material was not analytically developed. It was a set of rules, pulled together by a group of anonymous writers for the sake of being able to adjudicate wargames.

The only detailed quantification of force ratios was provided in Numbers, Predictions and War by Trevor Dupuy. Again, these were modeling constructs, not something that was analytically developed (although significant background research was done and the model was validated multiple times). He discussed the subject further in his book Understanding War, which I consider the most significant of the 90+ books that he wrote or co-authored.

The only analytically based discussion of force ratios that I am aware of (or at least can think of at this moment) is my discussion in my upcoming book War by Numbers: Understanding Conventional Combat. It is the second chapter of the book: https://dupuyinstitute.dreamhosters.com/2016/02/17/war-by-numbers-iii/

In this book, I assembled the force ratios required to win a battle based upon a large number of cases from World War II division-level combat. For example (page 18 of the manuscript):

I did this for the ETO; for the battles of Kharkov and Kursk (Eastern Front 1943, divided by when the Germans were attacking and when the Soviets were attacking); and for the PTO (Manila and Okinawa 1945).

There is more that can be done on this, and we do have the data assembled to do it, but as always, I have not gotten around to it. This is why I am already considering a War by Numbers II, as I am thinking about all the subjects I did not cover in sufficient depth in the first book.
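The kind of tabulation described above is straightforward to reproduce on any engagement database. A minimal sketch of the method, with made-up engagement records standing in for the real division-level data (the numbers below are illustrative only, not results from the book):

```python
from collections import defaultdict

# Hypothetical engagement records: (attacker:defender force ratio, attacker won?).
# Stand-ins only -- the actual analysis draws on a WWII division-level database.
engagements = [
    (0.8, False), (1.2, False), (1.4, True), (1.6, True),
    (2.1, True), (2.4, False), (3.0, True), (3.5, True),
]

def win_rate_by_ratio_bin(records, edges=(1.0, 1.5, 2.0, 3.0)):
    """Bucket engagements by force ratio and report attacker win percentage."""
    bins = defaultdict(lambda: [0, 0])  # bin label -> [wins, total]
    for ratio, won in records:
        label = next((f"<{e}" for e in edges if ratio < e), f">={edges[-1]}")
        bins[label][0] += int(won)
        bins[label][1] += 1
    return {k: round(100 * w / n) for k, (w, n) in bins.items()}

print(win_rate_by_ratio_bin(engagements))
# → {'<1.0': 0, '<1.5': 50, '<2.0': 100, '<3.0': 50, '>=3.0': 100}
```

The real work, of course, is in assembling and validating the underlying engagement records and splitting them by theater and attacker, as described above; the tabulation itself is trivial.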

Japan’s Grand Strategy and Military Forces (I)

[Source: Consulate-General of Japan, Sydney]

This is the first in a series of Orders of Battle (OOB) posts, which will cover Japan, the neighboring and regional powers in East Asia, as well as the major global players, with a specific viewpoint on their military forces in East Asia and the Greater Indo-Pacific. The idea is to provide a catalog of forces and capabilities, but also to provide some analysis of how those forces are linked to the nation’s strategy.

The geographic term “Indo-Pacific” is relatively new; it is referred to by name in the grand strategy detailed by the Japanese Ministry of Foreign Affairs (MOFA) in April 2017, and it aligns with the strategy and terminology used by U.S. Defense Secretary James Mattis at the Shangri-La Dialogue in June 2018. Dr. Michael J. Green has a good primer on the evolution of Japan’s grand strategy, along with a workable definition of the term:

What is “grand strategy”? It is the integration of all instruments of national power to shape a more favorable external environment for peace and prosperity. These comprehensive instruments of power are diplomatic, informational, military and economic. Successful grand strategies are most important in peacetime, since war may be considered the failure of strategy.

Nonetheless, the seminal speech by Vice President Pence on China policy on 4 October 2018 included an articulation of Chinese grand strategy: “Beijing is employing a whole-of-government approach, using political, economic, and military tools, as well as propaganda, to advance its influence and benefit its interests in the United States.” The concept of grand strategy is not new; Thucydides is often credited with its first discussion, in the History of the Peloponnesian War (431-404 BCE). It is fundamentally about the projection of power in all its forms.

With the Focus on the Indo-Pacific Strategy, What About the Home Islands? 

[Source: Japanese Ministry of Defense (MOD) ]

The East Asian region has some long-simmering conflicts, legacies of past wars such as World War II (or the Great Pacific War, 1937-1945), the Korean War (1950-1953), and the Chinese Civil War (1927-1949). These conflicts left static and stable borders, across which a “military balance” is often assessed; the International Institute for Strategic Studies (IISS), for example, publishes an annual volume with that title. The points emphasized by the IISS in the 2018 edition are “new arms orders and deliveries graphics and essays on Chinese and Russian air-launched weapons, artificial intelligence and defence, and Russian strategic-force modernisation.”

The Japanese military thus faces two challenges. It must maintain the balance of power at home (that is, play defense) against neighbors who are developing and deploying new capabilities that materially affect this balance. And, as seen above, Japan is also working to build an offense as part of its new grand strategy, in which military forces play a role.

Given the size and capability of the Japanese military forces, it is possible to project power at great distances from the Japanese home waters. Yet, as a legacy of the Great Pacific War, the Japanese do not technically have armed forces: the constitution, imposed by the Americans, officially renounces war as a sovereign right of the nation.

In July 2014, the constitution was officially “re-interpreted” to allow collective self-defense. Previously, if the American military came under attack, for example in Guam, nearby Japanese military units could not legally engage the attacking forces, even though the two are allied nations that conduct numerous training exercises together; that is, they train to fight together. The re-interpretation caused significant policy debate in Japan.

More recently, as debated in the national election of September 2018, the legal status of the SDF is viewed as requiring clarification, with some arguing the forces are altogether illegal. “It’s time to tackle a constitutional revision,” Abe said in a victory speech.

The original defense plan was for the American military to defend Japan. The practical realities of the Cold War and the Soviet threat to Japan ended up creating what are technically “self-defense forces” (SDF) in three branches:

  • Japan Ground Self-Defense Force (JGSDF)
  • Japan Maritime Self-Defense Force (JMSDF)
  • Japan Air Self-Defense Force (JASDF)

In the next post, these forces will be cataloged, with specific capabilities linked to Japanese strategy. As a quick preview, the map below illustrates the early warning radar sites, airborne early warning aircraft, and fighter-interceptor aircraft charged with maintaining the balance of power in the air as Russian and Chinese air forces challenge the sovereignty of Japanese airspace. With the Russians this is an old dance from the Cold War, but recently the Chinese have gotten into the game as well.

[Source: J-Wings magazine, December 2018]

What Does Lethality Mean In Warfare?

In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, aspects of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”

Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”

After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”

She uses this as a jumping off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:

Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.

To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”

It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, the strategy supplies the credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”

Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about how lethality relates to warfare. It is definitely worth your time to read.

What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)

Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. While noting that his definition was too narrow for the purposes of her inquiry, the historical relationship between lethality, casualties, and dispersion on the battlefield Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)

Human Factors In Warfare: Fear In A Lethal Environment

Chaplain (Capt.) Emil Kapaun (right) and Capt. Jerome A. Dolan, a medical officer with the 8th Cavalry Regiment, 1st Cavalry Division, carry an exhausted Soldier off the battlefield in Korea, early in the war. Kapaun was famous for exposing himself to enemy fire. When his battalion was overrun by a Chinese force in November 1950, rather than take an opportunity to escape, Kapaun voluntarily remained behind to minister to the wounded. In 2013, Kapaun posthumously received the Medal of Honor for his actions in the battle and later in a prisoner of war camp, where he died in May 1951. [Photo Credit: Courtesy of the U.S. Army Center of Military History]

[This piece was originally published on 27 June 2017.]

Trevor Dupuy’s theories about warfare were sometimes criticized by those who thought his scientific approach neglected the influence of the human element and chance, and amounted to an attempt to reduce war to mathematical equations. Anyone who has read Dupuy’s work knows this is not, in fact, the case.

Moral and behavioral (i.e., human) factors were central to Dupuy’s research and theorizing about combat. He wrote about them in detail in his books. In 1989, he presented a paper titled “The Fundamental Information Base for Modeling Human Behavior in Combat” at a symposium on combat modeling, which provided a clear, succinct summary of his thinking on the topic.

He began by concurring with Carl von Clausewitz’s assertion that

[P]assion, emotion, and fear [are] the fundamental characteristics of combat… No one who has participated in combat can disagree with this Clausewitzean emphasis on passion, emotion, and fear. Without doubt, the single most distinctive and pervasive characteristic of combat is fear: fear in a lethal environment.

Despite the ubiquity of fear on the battlefield, Dupuy pointed out that there is no way to study its impact except through the historical record of combat in the real world.

We cannot replicate fear in laboratory experiments. We cannot introduce fear into field tests. We cannot create an environment of fear in training or in field exercises.

So, to study human reaction in a battlefield environment we have no choice but to go to the battlefield, not the laboratory, not the proving ground, not the training reservation. But, because of the nature of the very characteristics of combat which we want to study, we can’t study them during the battle. We can only do so retrospectively.

We have no choice but to rely on military history. This is why military history has been called the laboratory of the soldier.

He also pointed out that using military history analytically has its own pitfalls and must be handled carefully lest it be used to draw misleading or inaccurate conclusions.

I must also make clear my recognition that military history data is far from perfect, and that—even at best—it reflects the actions and interactions of unpredictable human beings. Extreme caution must be exercised when using or analyzing military history. A single historical example can be misleading for either of two reasons: (a) The data is inaccurate, or (b) The example may be true, but also be untypical.

But, when a number of respectable examples from history show consistent patterns of human behavior, then we can have confidence that behavior in accordance with the pattern is typical, and that behavior inconsistent with the pattern is either untypical, or is inaccurately represented.

He then stated very concisely the scientific basis for his method.

My approach to historical analysis is actuarial. We cannot predict the future in any single instance. But, on the basis of a large set of reliable experience data, we can predict what is likely to occur under a given set of circumstances.

Dupuy listed ten combat phenomena that he believed were directly or indirectly related to human behavior. He considered the list comprehensive, if not exhaustive.

I shall look at Dupuy’s treatment of each of these in future posts.

Simpkin on the Long-Term Effects of Firepower Dominance

To follow on my earlier post introducing British military theorist Richard Simpkin’s foresight in detecting trends in 21st Century warfare, I offer this paragraph, which immediately followed the ones I quoted:

Briefly and in the most general terms possible, I suggest that the long-term effect of dominant firepower will be threefold. It will disperse mass in the form of a “net” of small detachments with the dual role of calling down fire and of local quasi-guerrilla action. Because of its low density, the elements of this net will be everywhere and will thus need only the mobility of the boot. It will transfer mass, structurally from the combat arms to the artillery, and in deployment from the direct fire zone (as we now understand it) to the formation and protection of mobile fire bases capable of movement at heavy-track tempo (Chapter 9). Thus the third effect will be to polarise mobility, for the manoeuvre force still required is likely to be based on the rotor. This line of thought is borne out by recent trends in Soviet thinking on the offensive. The concept of an operational manoeuvre group (OMG) which hives off raid forces against C3 and indirect fire resources is giving way to more fluid and discontinuous manoeuvre by task forces (“air-ground assault groups” found by “shock divisions”) directed onto fire bases—again of course with an operational helicopter force superimposed. [Simpkin, Race To The Swift, p. 169]

It seems to me that in the mid-1980s, Simpkin predicted with reasonable accuracy the emergence of modern anti-access/area denial (A2/AD) defensive systems, as well as the evolving thinking on the part of the U.S. military as to how to operate against them.

Simpkin’s vision of task forces (more closely resembling Russian/Soviet OMGs than rotary-wing “air-ground assault group” operational forces, however) employing “fluid and discontinuous manoeuvre” at operational depths to attack long-range precision firebases appears similar to emerging Army thinking about future multidomain operations. (Douglas MacGregor’s Reconnaissance Strike Group concept likely fits that bill more closely.)

One thing he missed was his belief that rotary-wing combat forces would supplant armored forces as the primary deep-operations combat arm. However, drone swarms might conceivably take the place in Simpkin’s operational construct that he allotted to heliborne forces. Drones have two primary advantages over manned helicopters: they are far cheaper, and they are far less vulnerable to enemy fires. With their unique capacity to blend mass and fires, drones could form the deep-strike operational hammer that Simpkin saw rotary-wing forces providing.

Just as interesting was Simpkin’s anticipation of the growing importance of information and electronic warfare in these environments. More on that later.

Richard Simpkin on 21st Century Trends in Mass and Firepower

Anvil of “troops” vs. anvil of fire. (Richard Simpkin, Race To The Swift: Thoughts on Twenty-First Century Warfare, Brassey’s: London, 1985, p. 51)

For my money, one of the most underrated analysts and theorists of modern warfare was the late Brigadier Richard Simpkin. A British Army World War II veteran, Simpkin helped design the Chieftain tank in the 1960s and 1970s. He is best known for his series of books analyzing Soviet and Western military theory and doctrine. His magnum opus was Race To The Swift: Thoughts on Twenty-First Century Warfare, published in 1985. A brilliant blend of military history, insightful analysis of tactics and technology as well as operations and strategy, and Simpkin’s idiosyncratic wit, the observations in Race To The Swift are becoming more prescient by the year.

Some of Simpkin’s analysis has not aged well, such as the focus on the NATO/Soviet confrontation in Central Europe, and a bold prediction that rotary wing combat forces would eventually supplant tanks as the primary combat arm. However, it would be difficult to find a better historical review of the role of armored forces in modern warfare and how trends in technology, tactics, and doctrine are interacting with strategy, policy, and politics to change the character of warfare in the 21st Century.

To follow on my previous post on the interchangeability of fire (which I gleaned from Simpkin, of course), I offer this nugget on how increasing weapons lethality would affect 21st Century warfare, written from the perspective of the mid 1980s:

While accidents of ground will always provide some kind of cover, the effect of modern firepower on land force tactics is equally revolutionary. Just as we saw in Part 2 how the rotary wing may well turn force structures inside out, firepower is already turning tactical concepts inside out, by replacing the anvil of troops with an anvil of fire (Fig. 5, page 51)*. The use of combat troops at high density to hold ground or to seize it is already likely to prove highly costly, and may soon become wholly unprofitable. The interesting question is what effect the dominance of firepower will have at operational level.

One school of thought, to which many defence academics on both sides of the Atlantic subscribe, is that it will reduce mobility and bring about a return to positional warfare. The opposite view is that it will put a premium on elusiveness, increasing mobility and reducing mass. On analysis, both these opinions appear rather simplistic, mainly because they ignore the interchangeability of troops and fire…—in other words the equivalence or complementarity of the movement of troops and the massing of fire. They also underrate the part played by manned and unmanned surveillance, and by communication. Another factor, little understood by soldiers and widely ignored, is the weight of fire a modern fast jet in its strike configuration, flying a lo-lo-lo profile, can put down very rapidly wherever required. With modern artillery and air support, a pair of eyes backed up by an unjammable radio and perhaps a thermal imager becomes the equivalent of at least a (company) combat team, perhaps a battle group. [Simpkin, Race To The Swift, pp. 168-169]

Sound familiar? I will return to Simpkin’s insights in future posts, but I suggest you all snatch up a copy of Race To The Swift for yourselves.

* See above.