A military force that is surprised is severely disrupted, and its fighting capability is sharply degraded. Surprise is usually achieved by the side that holds the initiative, normally the attacker, but it can also be achieved by a defending force. The most common example of defensive surprise is the ambush.
Perhaps the best example of surprise achieved by a defender was that which Hannibal gained over the Romans at the Battle of Cannae, 216 BC, in which the Romans were surprised by the unexpected defensive maneuver of the Carthaginians. This permitted the outnumbered Carthaginians, aided by the multiplying effect of surprise, to achieve a double envelopment of their numerically stronger opponent.
It has been hypothesized, and the hypothesis rather conclusively substantiated, that surprise can be quantified in terms of three measurable effects: the enhanced mobility which surprise provides to the surprising force, the reduced vulnerability of the surpriser, and the increased vulnerability of the side that is surprised.
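As a rough sketch of what such a quantification looks like in practice, the three effects can be treated as multiplicative adjustments to the combat power ratio. This is a minimal illustration only; the composition and all factor values below are assumptions for demonstration, not Dupuy's calibrated figures.

```python
def surprise_adjusted_power_ratio(base_ratio: float,
                                  mobility_boost: float = 1.5,
                                  surpriser_vuln_factor: float = 0.8,
                                  surprised_vuln_factor: float = 1.4) -> float:
    """Adjust a surpriser-to-surprised combat power ratio for surprise.

    All three default factor values are illustrative placeholders:
      mobility_boost        -- enhanced mobility of the surprising force
      surpriser_vuln_factor -- reduced vulnerability of the surpriser (< 1)
      surprised_vuln_factor -- increased vulnerability of the surprised (> 1)
    Assumes the three effects combine multiplicatively.
    """
    return base_ratio * mobility_boost * (surprised_vuln_factor / surpriser_vuln_factor)


# An outnumbered surpriser (base power ratio 0.7) reaches effective superiority:
print(surprise_adjusted_power_ratio(0.7))  # ~1.84
```

Note how, under these placeholder values, an outnumbered force that achieves surprise ends up with effective combat superiority, which is exactly the Cannae pattern described above.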
When men believe that their chances of survival in a combat situation become less than some value (which is probably quantifiable, and is unquestionably related to a strength ratio or a power ratio), they cannot and will not advance. They take cover so as to obtain some protection, and by so doing they redress the strength or power imbalance. A force with strength y (a strength less than its opponent's strength x) has its strength multiplied by the effect of defensive posture (let's give it the symbol p) to a greater power value, so that the power value py approaches, equals, or exceeds x, the unenhanced power value of the force with the greater strength x. It was because of this that [Carl von] Clausewitz, who considered that battle outcome was the result of a mathematical equation,[1] wrote that "defense is a stronger form of fighting than attack."[2] There is no question that he considered defensive posture to be a combat multiplier in this equation. It is obvious that the phenomenon of the strengthening effect of defensive posture is a combination of physical and human factors.
Dupuy elaborated on his understanding of Clausewitz’s comparison of the impact of the defensive and offensive posture in combat in his book Understanding War.
The statement [that the defensive is the stronger form of combat] implies a comparison of relative strength. It is essentially scalar and thus ultimately quantitative. Clausewitz did not attempt to define the scale of his comparison. However, by following his conceptual approach it is possible to establish quantities for this comparison. Depending upon the extent to which the defender has had the time and capability to prepare for defensive combat, and depending also upon such considerations as the nature of the terrain which he is able to utilize for defense, my research tells me that the comparative strength of defense to offense can range from a factor with a minimum value of about 1.3 to a maximum value of more than 3.0.[3]
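A minimal sketch of the comparison Dupuy describes, assuming a simple multiplicative model in which the defender's strength y is scaled by a posture factor p drawn from the 1.3-3.0 range quoted above (the specific sample values are illustrative):

```python
def attacker_needed_for_parity(defender_strength: float,
                               posture_factor: float) -> float:
    """Attacker strength x required to match the defender's
    posture-enhanced power p*y, per the relationship described above."""
    return posture_factor * defender_strength


# Illustrative points within the quoted 1.3-3.0 range:
for p in (1.3, 1.6, 3.0):
    x = attacker_needed_for_parity(100, p)
    print(f"p = {p}: an attacker needs strength {x:.0f} to match a defender of 100")
```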
NOTES
[1] Dupuy believed Clausewitz articulated a fundamental law for combat theory, which Dupuy termed the "Law of Numbers." One should bear in mind that this concept of a theory of combat is something different from a fundamental law of war or warfare. Dupuy's interpretation of Clausewitz's work can be found in Understanding War: History and Theory of Combat (New York: Paragon House, 1987), 21-30.
While this relationship might appear primarily technological in nature, Dupuy considered it the result of the human factor of fear on the battlefield. He put it in more human terms in a symposium paper from 1989:
There is one basic reason for the dispersal of troops on modern battlefields: to mitigate the lethal effects of firepower upon troops. As Lewis Richardson wrote in The Statistics of Deadly Quarrels, there is a limit to the amount of punishment human beings can sustain. Dispersion was resorted to as a tactical response to firepower mostly because—as weapons became more lethal in the 17th Century—soldiers were already beginning to disperse without official sanction. This was because they sensed that on the bloody battlefields of that century they were approaching the limit of the punishment men can stand.
Soldiers with Battery C, 1st Battalion, 82nd Field Artillery Regiment, 1st Brigade Combat Team, 1st Cavalry Division maneuver their Paladins through Hohenfels Training Area, Oct. 26. Photo Credit: Capt. John Farmer, 1st Brigade Combat Team, 1st Cav
Last autumn, U.S. Army Chief of Staff General Mark Milley asserted that “we are on the cusp of a fundamental change in the character of warfare, and specifically ground warfare. It will be highly lethal, very highly lethal, unlike anything our Army has experienced, at least since World War II.” He made these comments while describing the Army’s evolving Multi-Domain Battle concept for waging future combat against peer or near-peer adversaries.
Ground combat attrition between peer or near-peer combatants in the future may be comparable to the U.S. experience in World War II (although there were considerable differences among the experiences of the various belligerents). Combat losses could be heavier. It certainly seems likely that they would be higher than those experienced by U.S. forces in recent counterinsurgency operations.
Dupuy documented a clear relationship over time between increasing weapon lethality, greater battlefield dispersion, and declining casualty rates in conventional combat. Even as weapons became more lethal, greater dispersal in frontage and depth among ground forces led daily personnel loss rates in battle to decrease.
The average daily battle casualty rate in combat has been declining since 1600 as a consequence. Since battlefield weapons continue to increase in lethality and troops continue to disperse in response, it seems logical to presume the trend in loss rates continues to decline, although this may not necessarily be the case. There were two instances in the 19th century where daily battle casualty rates increased—during the Napoleonic Wars and the American Civil War—before declining again. Dupuy noted that combat casualty rates in the 1973 Arab-Israeli War remained roughly the same as those in World War II (1939-45), almost thirty years earlier. Further research is needed to determine if average daily personnel loss rates have indeed continued to decrease into the 21st century.
Dupuy also discovered that, as with battle outcomes, casualty rates are influenced by the circumstantial variables of combat. Posture, weather, terrain, season, time of day, surprise, fatigue, level of fortification, and "all out" efforts affect loss rates. (The combat loss rates of armored vehicles, artillery, and other weapons systems are directly related to personnel loss rates, and are affected by many of the same factors.) Consequently, yet counterintuitively, he could find no direct relationship between numerical force ratios and combat casualty rates. Combat power ratios, which take into account the circumstances of combat, do affect casualty rates: forces with greater combat power inflict casualties at higher rates than less powerful forces do.
Winning forces suffer lower rates of combat losses than losing forces do, whether attacking or defending. (It should be noted that there is a difference between combat loss rates and numbers of losses. Depending on the circumstances, Dupuy found that the numerical losses of the winning and losing forces may often be similar, even if the winner’s casualty rate is lower.)
Dupuy's research confirmed that the combat loss rates of smaller forces are higher than those of larger forces. This is in part because smaller forces have a larger proportion of their troops exposed to enemy weapons; combat casualties tend to be concentrated in the forward-deployed combat and combat support elements. Dupuy also surmised that the Prussian military theorist Carl von Clausewitz's concept of friction plays a role in this: the complexity of interactions among increasing numbers of troops and weapons simply diminishes the lethal effects of weapons systems on real-world battlefields.
Somewhat unsurprisingly, higher quality forces (that better manage the ambient effects of friction in combat) inflict casualties at higher rates than those with less effectiveness. This can be seen clearly in the disparities in casualties between German and Soviet forces during World War II, Israeli and Arab combatants in 1973, and U.S. and coalition forces and the Iraqis in 1991 and 2003.
Combat Loss Rates on Future Battlefields
What do Dupuy's combat attrition verities imply about casualties in future battles? As a baseline, he found that the average daily combat casualty rate in Western Europe during World War II for divisional-level engagements was 1-2% for winning forces and 2-3% for losing ones. For a divisional slice of 15,000 personnel, this meant daily combat losses of 150-450 troops, concentrated in the maneuver battalions. (The ratio of wounded to killed in modern combat has been found to be consistently about 4:1: roughly 20% of casualties are killed in action, while the other 80% comprise the mortally wounded/wounded in action, missing, and captured.)
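Worked through for the divisional slice cited above, taking the low end of the winner's range and the high end of the loser's range, and applying the 4:1 wounded-to-killed split to the totals (a back-of-envelope sketch):

```python
division_slice = 15_000  # divisional slice cited above

for label, rate in (("winning force", 0.01), ("losing force", 0.03)):
    losses = division_slice * rate   # daily battle casualties
    kia = losses * 0.20              # ~20% killed in action (4:1 wounded-to-killed)
    other = losses * 0.80            # wounded, missing, and captured
    print(f"{label}: {losses:.0f} casualties/day "
          f"(~{kia:.0f} KIA, ~{other:.0f} wounded/missing/captured)")
```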
It seems reasonable to conclude that future battlefields will be less densely occupied. Brigades, battalions, and companies will be fighting in spaces formerly filled with armies, corps, and divisions. Fewer troops mean fewer overall casualties, but the daily casualty rates of individual smaller units may well exceed those of WWII divisions. Smaller forces experience significant variation in daily casualties, but Dupuy established average daily rates for them as shown below.
For example, based on Dupuy's methodology, the average daily loss rate unmodified by combat variables would be 1.8% per day for brigade combat teams, 8% per day for battalions, and 21% per day for companies. For a brigade of 4,500, that would mean 81 battle casualties per day; a battalion of 800 would suffer 64 casualties; and a company of 120 would lose 25 troops. These rates would then be modified by the circumstances of each particular engagement.
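The same arithmetic, using the unmodified average daily rates Dupuy derived for smaller units; the final line applies a single illustrative combat-variable modifier (the 0.7 value is an assumption for demonstration, not one of Dupuy's factors):

```python
units = {  # personnel strength, unmodified average daily loss rate
    "brigade combat team": (4_500, 0.018),
    "battalion": (800, 0.08),
    "company": (120, 0.21),
}

for name, (strength, rate) in units.items():
    print(f"{name}: {strength * rate:.0f} battle casualties/day")

# The circumstances of each engagement then modify the base rate; the 0.7
# factor below is a purely hypothetical modifier, not one of Dupuy's values:
print(f"modified battalion rate: {800 * 0.08 * 0.7:.0f} casualties/day")
```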
Several factors could push daily casualty rates down. Milley envisions that U.S. units engaged in an anti-access/area denial environment will be constantly moving. A low-density, highly mobile battlefield with fluid lines would be expected to reduce casualty rates for all sides. High mobility might also limit opportunities for infantry assaults and close-quarters combat. The high operational tempo will be exhausting, according to Milley. This could also lower loss rates, as the casualty-inflicting capabilities of combat units decline with each successive day in battle.
It is not immediately clear how cyberwarfare and information operations might influence casualty rates. One combat variable they might directly impact is surprise. Dupuy identified surprise as one of the most potent combat power multipliers: a surprised force suffers a higher casualty rate, while surprisers enjoy lower loss rates. Russian combat doctrine emphasizes using cyber and information operations to achieve surprise, and forces with degraded situational awareness are highly susceptible to it. As the 2014 rocket artillery strike at Zelenopillya demonstrated, surprise attacks with modern weapons can be devastating.
Some factors could push combat loss rates up. Long-range precision weapons could expose greater numbers of troops to enemy fires, which would drive casualties up among combat support and combat service support elements. Casualty rates historically drop during nighttime hours, but modern night-vision technology and persistent drone reconnaissance will likely enable continuous day-and-night battle, which could result in higher losses.
Drawing solid conclusions is difficult, but the question of future battlefield attrition is far too important not to be studied with greater urgency. Current policy debates over whether or not the draft should be reinstated, and over the proper size and distribution of manpower in the active and reserve components of the Army, hinge on getting this right. The trend away from mass on the battlefield means that there may not be a large margin of error should future combat forces suffer higher combat casualties than expected.
Images of RAND wargames from a 1958 edition of Life magazine. [C3I Magazine]
A friend tipped me off to RAND Corporation's "Events @ RAND" podcast series on iTunes, specifically a recent installment titled "The Serious Role of Gaming at RAND." David Shlapak, senior international research analyst and co-director of the RAND Center for Gaming, gives an overview of RAND's history and experiences using gaming and simulations for a variety of tasks, including analysis and policy-making.
Shlapak and Michael Johnson touched off a major debate last year after publishing an analysis of the military balance in the Baltic states, based on a series of analytical wargames. Shlapak's discussion of the effort and the ensuing question-and-answer session are of interest both to those new to gaming and simulation and to wargaming old-timers. Much recommended.
I've been listening to Deputy Defense Secretary Robert Work speak on the Third Offset Strategy. He spoke at the Defense One Production forum (2015-09-30), and again to Air Command and Staff College students (2016-05-27). What follows are some rough notes and paraphrasing, aimed at understanding the strategy, and at connecting the F-35 platform and its capabilities to the strategy.
Work gives an interesting description of his job as Chief Operating Officer (COO) of the Department of Defense (DOD), "one of the biggest corporations on the planet," with a "simple" mission: "to organize, train and equip a joint force that is ready for war and that is operated forward to preserve the peace."
The Roots of the Third Offset Strategy
Why do we care about Third Offset? “We have to deal with the resurgence of great power competition.” What is a great power? Work credits John Mearsheimer’s definition, but in his own words, it is “a large state that can take on the dominant global state (the United States) and really give them a run for their money, and have a nuclear deterrent force that can survive a first strike. Don’t really care about economic power, or soft power, the focus is only on military capabilities.”
This is quite interesting, since economic power begets military capabilities. A poor China and a rich China are worlds apart in terms of the military power they can field. Also, the stop-and-start nature of basing agreements with the Philippines under Duterte might remove key bases close to the South China Sea battlefield, having a huge impact on the ability of the US military to project power, as the RAND briefing from yesterday's post illustrated in rather stark terms.
What has changed to require the Third Offset? Great power rivals have duplicated our Second Offset strategy of precision-guided munitions, stealth, and operational (campaign-level) battle networks. This strategy gave the US and its allies an advantage for forty years. "We've lived in a unique time in the post-Westphalian era, where one state is so dominant relative to its peers." He sees a dividing line in 2014, when two events occur:
China starts to reclaim islands in the South China Sea
Russia annexes Crimea and destabilizes Ukraine
Also, the nature of technology development has changed. During the Cold War, technological innovation happened in government labs:
1950s – nuclear weapon miniaturization
1960s – space and rocket technology
1970s – precision-guided munitions, stealth, information technology
1980s – large-scale system of systems
Since 2012, militarily relevant technologies have been emerging in the commercial sphere:
Artificial Intelligence (AI)
Autonomous Weapons Systems
Robotics
Digitization
Fight from Range
Operate from inside their battle network
Cyber and EW, how to take down their network?
“This means we know where to start, but we don’t know where it ends.” Of this list of technologies, he calls out AI and autonomy as at the forefront. He defines autonomy as “the delegation of decision authority to some entity in the battle network. Manned or unmanned system … what you are looking for is human-machine symbiosis.”
What do you need to do this? First, deep-learning systems. “Up until 2015, a human analyst was consistently more accurate at identifying an object in an image than a machine. In 2015, this changed. … when a machine makes a mistake, it makes a big one.” He then tells the story of a baby holding a baseball bat, “which the machine identified as an enemy armed combatant. … machines look for patterns, and then provide them to humans who can use their intuition and strategic acuity to determine what’s going on.”
The F-35 and Strategy
As an example of how this might play out, a machine can generate the Air Tasking Order (ATO – a large document that lists all of the sorties and targets to be prosecuted by joint air forces in a 24-hour period, per Wikipedia) in minutes or hours, instead of many analysts working for hours or days. “We are after human-computer collaborative decision-making.” In 1997, the supercomputer Deep Blue beat Garry Kasparov at chess, which was a big deal at the time. In 2005, however, two amateur chess players using three computers beat a field of grandmasters and a field of supercomputers. “It was the human strategic guidance combined with the tactical acuity of the computer that we believe will be the most important thing.” He then goes on to highlight an example of this human-machine collaboration:
The F-35 is not a fighter plane. It shouldn’t even be called the F-35. It should be called the BN-35, the “Battle Network”-35. It is a human-machine collaboration machine that is unbelievable. The Distributed Aperture System (DAS), and all the sensors, and the network which pours into the plane; the plane processes it and displays it to the pilot, so that the pilot can make accurate, relevant and quick decisions. That’s why that airplane is going to be so good.
Work also covers another topic near and dear to me, wargaming. Perhaps a war game is a great opportunity for humans and machines to practice collaboration?
We are reinvigorating wargaming, which has really gone down over the past years. We’re looking more at the service level, more at the OSD level, and these are very, very helpful for us to develop innovative leaders, and also helpful for us to go after new and innovative concepts.
He mentions the Schriever Wargame. “[O]nce you start to move forces, your great power rival will start to use cyber to try to slow down those forces … the distinction between away games and home games is no longer relevant to us.”
Next, I’ll look at the perspectives of the services as they adopt the F-35 in different ways.
Trevor N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979), p. 79
Is there actually a reliable way of calculating logistical demand in correlation to “standing” ration strength/combat/daily strength army size?
Did Dupuy ever focus on logistics in any of his work?
The answer to his first question is, yes, there is. In fact, this has been a standard military staff function since before there were military staffs (Martin van Creveld’s book, Supplying War: Logistics from Wallenstein to Patton (2nd ed.), is an excellent general introduction). Staff officers’ guides and field manuals from various armies from the 19th century to the present are full of useful information on field supply allotments and consumption estimates intended to guide battlefield sustainment. The records of modern armies also contain reams of bureaucratic documentation of logistical functions as they actually occurred. Logistics and supply is a woefully under-studied aspect of warfare, but not because there are no sources upon which to draw.
As to his second question, the answer is also yes. Dupuy addressed logistics in his work in a couple of ways. He included two logistics multipliers in his combat models: one in the calculation of the battlefield effects of weapons, the Operational Lethality Index (OLI), and the other as an element of the value for combat effectiveness, which is a multiplier in his combat power formula.
Dupuy considered the impact of logistics on combat to be intangible, however. From his historical study of combat, Dupuy understood that logistics impacted both weapons and combat effectiveness, but in the absence of empirical data, he relied on subject matter expertise to assign it a specific value in his model.
Logistics or supply capability is basic in its importance to combat effectiveness. Yet, as in the case of the leadership, training, and morale factors, it is almost impossible to arrive at an objective numerical assessment of the absolute effectiveness of a military supply system. Consequently, this factor also can be applied only when solid historical data provides a basis for objective evaluation of the relative effectiveness of the opposing supply capabilities.[1]
His approach to this stands in contrast to other philosophies of combat model design, which hold that if a factor cannot be empirically measured, it should not be included in a model. (It is up to the reader to decide if this is a valid approach to modeling real-world phenomena or not.)
Yet, as with many aspects of the historical study of combat, Dupuy and his colleagues at the Historical Evaluation Research Organization (HERO) had taken an initial cut at empirical research on the subject. In the late 1960s and early 1970s, Dupuy and HERO conducted a series of studies for the U.S. Air Force on the historical use of air power in support of ground warfare. One line of inquiry looked at the effects of air interdiction on supply, specifically at Operation STRANGLE, an effort by the U.S. and British air forces to completely block the lines of communication and supply of German ground forces defending Rome in 1944.
Dupuy and HERO dug deeply into Allied and German primary source documentation to extract extensive data on combat strengths and losses, logistical capabilities and capacities, supply requirements, and aircraft sorties and bombing totals. Dupuy proceeded from a historically-based assumption that combat units, using expedients, experience, and training, could operate unimpaired while only receiving up to 65% of their normal supply requirements. If the level of supply dipped below 65%, the deficiency would begin impinging on combat power at a rate proportional to the percentage of loss (i.e., a 60% supply rate would impose a 5% decline, represented as a combat effectiveness multiplier of .95, and so on).
Using this as a baseline, Dupuy and HERO calculated the amount of aerial combat power the Allies needed to apply to impact German combat effectiveness. They determined that Operation STRANGLE was able to reduce German supply capacity to about 41.8% of normal, which yielded a reduction in the combat power of German ground combat forces by an average of 6.8%.
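The threshold rule from the preceding paragraph can be written out directly. This minimal sketch implements only the stated unit-level rule; the study's 6.8% average reduction reflects aggregation across the German force and the period analyzed, which this simple function does not attempt to reproduce.

```python
def supply_effectiveness_multiplier(supply_fraction: float,
                                    threshold: float = 0.65) -> float:
    """Combat effectiveness multiplier per the rule described above:
    unimpaired at or above 65% of normal supply requirements, then
    declining one percentage point per point of shortfall below
    the threshold."""
    shortfall = max(0.0, threshold - supply_fraction)
    return 1.0 - shortfall


print(f"{supply_effectiveness_multiplier(0.60):.3f}")   # 0.950, the example in the text
print(f"{supply_effectiveness_multiplier(0.418):.3f}")  # 0.768 for a unit at 41.8% supply
```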
He cautioned that these calculations were “directly relatable only to the German situation as it existed in Italy in late March and early April 1944.” As detailed as the analysis was, Dupuy stated that it “may be an oversimplification of a most complex combination of elements, including road and railway nets, supply levels, distribution of targets, and tonnage on targets. This requires much further exhaustive analysis in order to achieve confidence in this relatively simple relationship of interdiction effort to supply capability.”[2]
The historical work done by Dupuy and HERO on logistics and combat appears unique, but it seems highly relevant. There is no lack of detailed data from which to conduct further inquiries. The only impediment appears to be lack of interest.
Image by Center for Strategic and Budgetary Assessments (CSBA).
In several recent posts, I have alluded to something called the Third Offset Strategy without going into any detail as to what it is. Fortunately for us all, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, wrote an excellent summary and primer on what it is all about in the current edition of Joint Forces Quarterly.
The Defense Strategic Guidance (DSG) articulated 10 missions the [U.S.] joint force must accomplish in the future. These missions include the ability to:
– deter and defeat aggression
– project power despite antiaccess/area-denial (A2/AD) challenges
– operate effectively in cyberspace and space.
The follow-on 2014 Quadrennial Defense Review confirmed the importance of these missions and called for the joint force to “project power and win decisively” in spite of “increasingly sophisticated adversaries who could employ advanced warfighting capabilities.”
In these documents, U.S. policy-makers identified the primary strategic challenge to securing these goals: “capable adversaries are adopting potent A2/AD strategies that are challenging U.S. ability to ensure operational access.” These adversaries include China, Russia, and Iran.
The Third Offset Strategy was devised to address this primary strategic challenge.
In November 2014, then–Secretary of Defense Chuck Hagel announced a new Defense Innovation Initiative, which included the Third Offset Strategy. The initiative seeks to maintain U.S. military superiority over capable adversaries through the development of novel capabilities and concepts. Secretary Hagel modeled his approach on the First Offset Strategy of the 1950s, in which President Dwight D. Eisenhower countered the Soviet Union’s conventional numerical superiority through the buildup of America’s nuclear deterrent, and on the Second Offset Strategy of the 1970s, in which Secretary of Defense Harold Brown shepherded the development of precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance (ISR) systems to counter the numerical superiority and improving technical capability of Warsaw Pact forces along the Central Front in Europe.
Secretary of Defense Ashton Carter has built on Hagel’s vision of the Third Offset Strategy, and the proposed fiscal year 2017 budget is the first major public manifestation of the strategy: approximately $3.6 billion in research and development funding dedicated to Third Offset Strategy pursuits. As explained by Deputy Secretary of Defense Bob Work, the budget seeks to conduct numerous small bets on advanced capability research and demonstrations, and to work with Congress and the Services to craft new operational concepts so that the next administration can determine “what are the key bets we’re going to make.”
As Walton puts it, “the next Secretary of Defense will have the opportunity to make those big bets.” The keys to making the correct bets will be selecting the most appropriate scenarios to plan around, accurately assessing the performance of the U.S. joint force that will be programmed and budgeted for, and identifying the right priorities for new investment.
It is in this context that Walton recommended reviving campaign-level combat modeling at the Defense Department level, as part of an overall reform of the analytical processes informing force planning decisions.
Walton concludes by identifying the major obstacles in carrying out the Third Offset Strategy, some of which will be institutional and political in nature. However, he quickly passes over what might perhaps be the biggest problem with the Third Offset strategy, which is that it might be based on the wrong premises.
Lastly, the next Secretary of Defense will face numerous other, important defense challenges that will threaten to engross his or her attention, ranging from leading U.S. forces in Afghanistan, to countering Chinese, Russian, and Islamic State aggression, to reforming Goldwater-Nichols, military compensation, and base structure.
The ongoing conflicts in Afghanistan, Syria, and Iraq show no sign of abating anytime soon, yet they constitute “lesser includeds” in the Third Offset Strategy. Are we sure enough to bet that the A2/AD threat is the most important strategic challenge the U.S. will face in the near future?
Walton’s piece is worth reading and thinking about.
Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller
In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s strategic analysis activity (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”
According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, and due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.
CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).
CAPE took the criticism to heart.
CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.
However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”
As Davis related, many disagreed with CAPE’s decision at the time because of the service-independent perspective the joint campaign-modeling capability had provided.
Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)
Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.
In the current edition of Joint Forces Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that in support of “the Third Offset Strategy, the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “[e]fforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”
As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.
In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)
So, it appears that DOD can’t quit combat modeling. But that raises the question, if CAPE does resume such activities, will it pick up where it left off in 2011 or do it differently? I will explore that in a future post.
Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)
I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.
A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!
Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!
Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]
Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.
Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.