
Tank Loss Rates in Combat: Then and Now

As the U.S. Army and the national security community seek a sense of what potential conflicts in the near future might be like, they see the distinct potential for large tank battles. Will technological advances change the character of armored warfare? Perhaps, but it seems more likely that the next big tank battles – if they occur – will resemble those of the past.

One aspect of future battle likely to be of great interest to military planners is tank loss rates in combat. In a previous post, I looked at the analysis done by Trevor Dupuy on the relationship between tank and personnel losses in the U.S. experience during World War II. Today, I will take a look at his analysis of historical tank loss rates.

In general, Dupuy identified that a proportional relationship exists between personnel casualty rates in combat and losses in tanks, guns, trucks, and other equipment. (His combat attrition verities are discussed here.) Looking at World War II division and corps-level combat engagement data in 1943-1944 between U.S., British and German forces in the west, and German and Soviet forces in the east, Dupuy found similar patterns in tank loss rates.

[Figure: attrition-fig-58]

In combat between two division/corps-sized, armor-heavy forces, Dupuy found that tank loss rates were likely to be between five and seven times the personnel casualty rate for the winning side, and seven to 10 times for the losing side. Additionally, defending units suffered lower loss rates than attackers: if an attacking force suffered tank losses at seven times the personnel casualty rate, the defending force's tank losses would be around five times that rate.

Dupuy also discovered that the ratio of tank to personnel losses appeared to be a function of the proportion of tanks to infantry in a combat force. Units with fewer than six tanks per 1,000 troops could be considered armor-supporting, while those with a density of more than six tanks per 1,000 troops were armor-heavy. Armor-supporting units suffered lower tank casualty rates than armor-heavy units.
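These rules of thumb lend themselves to a back-of-envelope calculation. The sketch below encodes the loss-rate multipliers and the six-tanks-per-1,000-troops threshold described above; the specific mapping of multipliers to cases is my reading of Dupuy's ranges, not a formula he published.

```python
def classify_force(tanks, troops):
    """Classify a force by tank density: more than six tanks per
    1,000 troops counts as armor-heavy, per Dupuy's threshold."""
    tanks_per_thousand = tanks / (troops / 1000.0)
    return "armor-heavy" if tanks_per_thousand > 6 else "armor-supporting"

def estimate_tank_loss_rate(personnel_casualty_rate, attacker, winner):
    """Rough tank loss rate as a multiple of the personnel casualty
    rate, using Dupuy's division/corps ranges: about 5-7x for the
    winner, 7-10x for the loser, with defenders at the low end."""
    if winner:
        multiplier = 7.0 if attacker else 5.0
    else:
        multiplier = 10.0 if attacker else 7.0
    return personnel_casualty_rate * multiplier

# Example: a winning, attacking, armor-heavy force with a 1.5%
# personnel casualty rate would lose tanks at roughly 10.5%.
print(classify_force(tanks=250, troops=15000))                   # armor-heavy
print(estimate_tank_loss_rate(1.5, attacker=True, winner=True))  # 10.5
```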

[Figure: attrition-fig-59]

Dupuy looked at tank loss rates in the 1973 Arab-Israeli War and found that they were consistent with World War II experience.

What does this tell us about possible tank losses in future combat? That is a very good question. One guess that is reasonably certain is that future tank battles will probably not involve forces of World War II division or corps size. The opposing forces will be brigade combat teams, or more likely, battalion-sized elements.

Dupuy did not have as much data on tank combat at this level, and what he did have indicated a great deal more variability in loss rates. Examples of this can be found in the tables below.

[Figures: attrition-fig-53, attrition-fig-54]

These data points showed some consistency, with a mean of 6.96 and a standard deviation of 6.10; the mean is comparable to the division/corps loss rate multiples. Battalion-level personnel casualty rates are higher and much more variable than those at the division level, however. Dupuy stated that more research was necessary to establish a higher degree of confidence in, and the relevance of, the apparent battalion-level tank loss ratio. So one potentially fruitful area of research with regard to near-future combat could very well be a renewed focus on historical experience.
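To make the variability concrete: dividing the standard deviation by the mean gives a coefficient of variation near 0.9, meaning a typical engagement deviated from the average ratio by almost as much as the average itself. A quick check:

```python
# Battalion-level tank-loss-to-personnel-casualty ratio statistics cited above
mean, sd = 6.96, 6.10

cov = sd / mean  # coefficient of variation (relative spread)
print(f"coefficient of variation: {cov:.2f}")  # 0.88
```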

NOTES

Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), pp. 41-43, 81-90, 102-103.

Back To The Future: The Mobile Protected Firepower (MPF) Program

The MPF's historical antecedent: the German Army's 7.5 cm leichtes Infanteriegeschütz.

Historically, one of the challenges of modern combat has been in providing responsive, on-call, direct fire support for infantry. The U.S. armed forces have traditionally excelled in providing fire support for their ground combat maneuver elements, but recent changes have apparently caused concern that this will continue to be the case in the future.

Case in point is the U.S. Army’s Mobile Protected Firepower (MPF) program. The MPF seems to reflect concern by the U.S. Army that future combat environments will inhibit the capabilities of heavy artillery and air support systems tasked with providing fire support for infantry units. As Breaking Defense describes it,

“Our near-peers have sought to catch up with us,” said Fort Benning commander Maj. Gen. Eric Wesley, using Pentagon code for China and Russia. These sophisticated nation-states — and countries buying their hardware, like Iran — are developing so-called Anti-Access/Area Denial (A2/AD): layered defenses of long-range sensors and missiles to keep US airpower and ships at a distance (anti-access), plus anti-tank weapons, mines, and roadside bombs to decimate ground troops who get close (area denial).

The Army’s Maneuver Center of Excellence at Ft. Benning, Georgia, is the proponent for development of a new lightly armored, tracked vehicle mounting a 105mm or 120mm gun. According to the National Interest, the goal of the MPF program is

… to provide a company of vehicles—which the Army adamantly does not want to refer to as light tanks—to brigades from the 82nd Airborne Division or 10th Mountain Division that can provide heavy fire support to those infantry units. The new vehicle, which is scheduled to enter into full-scale engineering and manufacturing development in 2019—with fielding tentatively scheduled for around 2022—would be similar in concept to the M551 Sheridan light tank. The Sheridan was operated by the Army’s airborne units until 1996, but was retired without replacement. (Emphasis added)

As Chris recently pointed out, General Dynamics Land Systems has developed a prototype it calls the Griffin. BAE Systems has also pitched its XM8 Armored Gun System, developed in the 1990s.

The development of a dedicated, direct fire support weapon for line infantry can be seen as something of an anachronism. During World War I, German infantrymen sought alternatives to relying on heavy artillery support that was under the control of higher headquarters and often slow or unresponsive to tactical situations on the battlefield. They developed an expedient called the “infantry gun” (Infanteriegeschütz) by stripping down captured Russian 76.2mm field guns for direct use against enemy infantry, fortifications, and machine guns. Other armies imitated the Germans, but between the wars, the German Army was the only one to develop its own 75mm and 150mm wheeled guns dedicated specifically to infantry combat support.

The Germans were also the first to develop versions mounted on tracked, armored chassis, called “assault guns” (Sturmgeschütz). During World War II, the Germans often pressed their lightly armored assault guns into duty as ersatz tanks to compensate for insufficient numbers of actual tanks. (The apparently irresistible lure to use anything that looks like a tank as a tank also afflicted the World War II U.S. tank destroyer, yielding results that dissatisfied all concerned.)

Other armies again copied the Germans during the war, but the assault gun concept was largely abandoned afterward. Both the U.S. and the Soviet Union developed vehicles intended to provide gunfire support for airborne infantry, but these were more aptly described as light tanks. The U.S. Army’s last light tank, the M551 Sheridan, was retired in 1996 and not replaced.

It appears that the development of new technology is leading the U.S. Army back to old ideas. Just don’t call them light tanks.

Are Long-Range Fires Changing The Character of Land Warfare?

Raytheon’s new Long-Range Precision Fires missile is deployed from a mobile launcher in this artist’s rendering. The new missile will allow the Army to fire two munitions from a single weapons pod, making it cost-effective and doubling the existing capacity. (Raytheon)

Has U.S. land warfighting capability been compromised by advances by potential adversaries in long-range artillery capabilities? Michael Jacobson and Robert H. Scales argue that this is the case in an article on War on the Rocks.

While the U.S. Army has made major advances by incorporating precision into artillery, the ability and opportunity to employ precision are premised on a world of low-intensity conflict. In high-intensity conflict defined by combined-arms maneuver, the employment of artillery based on a precise point on the ground becomes a much more difficult proposition, especially when the enemy commands large formations of moving, armored vehicles, as Russia does. The U.S. joint force has recognized this dilemma and compensates for it by employing superior air forces and deep-strike fires. But Russia has undertaken a comprehensive upgrade of not just its military technology but its doctrine. We should not be surprised that Russia’s goal in this endeavor is to offset U.S. advantages in air superiority and double-down on its traditional advantages in artillery and rocket mass, range, and destructive power.

Jacobson and Scales provide a list of relatively quick fixes they assert would restore U.S. superiority in long-range fires: change policy on the use of cluster munitions; upgrade the U.S. self-propelled howitzer inventory from short-barreled 39 caliber guns to long-barreled 52 calibers and incorporate improved propellants and rocket assistance to double their existing range; reevaluate restrictions on the forthcoming Long Range Precision Fires rocket system in light of Russian attitudes toward the Intermediate Range Nuclear Forces treaty; and rebuild divisional and field artillery units atrophied by a decade of counterinsurgency warfare.

Their assessment echoes similar comments made earlier this year by Lieutenant General H. R. McMaster, director of the U.S. Army’s Capabilities Integration Center. Another option for countering enemy artillery capabilities, McMaster suggested, was the employment of “cross-domain fires.” As he explained, “When an Army fires unit arrives somewhere, it should be able to do surface-to-air, surface-to-surface, and shore-to-ship capabilities.”

The notion of land-based fire elements engaging more than just other land or counter-air targets has given rise to a concept called “multi-domain battle.” Its proponents, Dr. Albert Palazzo of the Australian Army’s War Research Centre and Lieutenant Colonel David P. McLain III, Chief, Integration and Operations Branch in the Joint and Army Concepts Division of the Army Capabilities Integration Center, argue (also at War on the Rocks) that

While Western forces have embraced jointness, traditional boundaries between land, sea, and air have still defined which service and which capability is tasked with a given mission. Multi-domain battle breaks down the traditional environmental boundaries between domains that have previously limited who does what where. The theater of operations, in this view, is a unitary whole. The most useful capability needs to get the mission no matter what domain it technically comes from. Newly emerging technologies will enable the land force to operate in ways that, in the past, have been limited by the boundaries of its domain. These technologies will give the land force the ability to dominate not just the land but also project power into and across the other domains.

Palazzo and McLain contend that future land warfare forces

…must be designed, equipped, and trained to gain and maintain advantage across all domains and to understand and respond to the requirements of the future operating environment… Multi-domain battle will create options and opportunities for the joint force, while imposing multiple dilemmas on the adversary. Through land-to-sea, land-to-air, land-to-land, land-to-space, and land-to-cyberspace fires and effects, land forces can deter, deny, and defeat the adversary. This will allow the joint commander to seize, retain, and exploit the initiative.

As an example of their concept, Palazzo and McLain cite a combined, joint operation from the Pacific Theater in World War II:

Just after dawn on September 4, 1943, Australian soldiers of the 9th Division came ashore near Lae, Papua in the Australian Army’s first major amphibious operation since Gallipoli. Supporting them were U.S. naval forces from VII Amphibious Force. The next day, the 503rd U.S. Parachute Regiment seized the airfield at Nadzab to the west of Lae, which allowed the follow-on landing of the 7th Australian Division. The Japanese defenders offered some resistance on the land, token resistance in the air, and no resistance at sea. Terrain was the main obstacle to Lae’s capture.

From the beginning, the allied plan for Lae was a joint one. The allies were able to get their forces across the approaches to the enemy’s position, establish secure points of entry, build up strength, and defeat the enemy because they dominated the three domains of war relevant at the time — land, sea, and air.

The concept of multi-domain warfare seems like a logical conceptualization for integrating land-based weapons of increased range and effect into the sorts of near-term future conflicts envisioned by U.S. policy-makers and defense analysts. It comports fairly seamlessly with the precepts of the Third Offset Strategy.

However, as has been observed with the Third Offset Strategy, this raises questions about the role of long-range fires in conflicts that do not involve near-peer adversaries, such as counterinsurgencies. Is an emphasis on technological determinism reducing the capabilities of land combat units to just what they shoot? Is the ability to take and hold ground an anachronism in anti-access/area-denial environments? Do long-range fires obviate the relationship between fire and maneuver in modern combat tactics? If even infantry squads are equipped with stand-off weapons, what is the future of close quarters combat?

Should Defense Department Campaign-Level Combat Modeling Be Reinstated?

Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller

In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s support for strategic analysis (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”

According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.

CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).

CAPE took the criticism to heart.

CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.

However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”

As Davis related, many disagreed with CAPE’s decision at the time because of the service-independent perspective the campaign-modeling capability had provided.

Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)

Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint, campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.

In the current edition of Joint Forces Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow in the Center for Strategic and Budgetary Assessments, recommended that in support of “the Third Offset Strategy, the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”

As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.

In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)

So, it appears that DOD can’t quit combat modeling. But that raises the question: if CAPE does resume such activities, will it pick up where it left off in 2011 or do it differently? I will explore that in a future post.

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called “Bellum” at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those “useless” sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as “infantry”. According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model’s LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what those theories are, a phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge underpinning these combat theories and concepts was identified long ago as the “base of sand” problem, originally pointed out by Trevor Dupuy, among others. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but it persists to this day.
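For context, the Lanchester equations mentioned above are a pair of coupled differential equations; in the "square law" form, each side's attrition rate is proportional to the other side's remaining strength. A minimal numerical sketch (the coefficients and force sizes are arbitrary, chosen only for illustration):

```python
def lanchester_square(x0, y0, a, b, dt=0.01, t_max=20.0):
    """Integrate Lanchester's square law, dx/dt = -a*y and dy/dt = -b*x,
    with simple Euler steps until one side reaches zero strength.

    a is Y's per-unit effectiveness against X; b is X's against Y."""
    x, y, t = float(x0), float(y0), 0.0
    while x > 0 and y > 0 and t < t_max:
        x, y = x - a * y * dt, y - b * x * dt
        t += dt
    return max(x, 0.0), max(y, 0.0), t

# With equal effectiveness, the square law predicts the larger side wins
# with sqrt(x0^2 - y0^2) remaining: here sqrt(2000^2 - 1000^2) ~ 1732.
x, y, t = lanchester_square(x0=2000, y0=1000, a=0.05, b=0.05)
print(round(x), round(y))  # roughly 1732 0
```

Note how the outcome is driven entirely by initial strengths and two attrition coefficients; none of the factors Scales describes (sensor employment, suicidal tasking, invulnerable radars) appear unless a modeler codes them in.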

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

The Uncongenial Lessons of Past Conflicts

Williamson Murray, professor emeritus of history at Ohio State University, on the notion that military failures can be traced to an overemphasis on the lessons of the last war:

It is a myth that military organizations tend to do badly in each new war because they have studied too closely the last one; nothing could be farther from the truth. The fact is that military organizations, for the most part, study what makes them feel comfortable about themselves, not the uncongenial lessons of past conflicts. The result is that more often than not, militaries have to relearn in combat—and usually at a heavy cost—lessons that were readily apparent at the end of the last conflict.

[Williamson Murray, “Thinking About Innovation,” Naval War College Review, Spring 2001, 122-123. This passage was cited in a recent essay by LTG H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review, March-April 2015. I recommend reading both.]

Will This Weapon Change Infantry Warfare Forever? Maybe, But Probably Not

XM25 Counter Defilade Target Engagement (CDTE) System

The weapon pictured above is the XM25 Counter Defilade Target Engagement (CDTE) precision-guided grenade launcher. According to its manufacturer, Orbital ATK,

The XM25 is a next-generation, semi-automatic weapon designed for effectiveness against enemies protected by walls, dug into foxholes or hidden in hard-to-reach places.

The XM25 provides the soldier with a 300 percent to 500 percent increase in hit probability to defeat point, area and defilade targets out to 500 meters. The weapon features revolutionary high-explosive, airburst ammunition programmed by the weapon’s target acquisition/fire control system.

Following field testing in Afghanistan that reportedly produced mixed results, the U.S. Army is seeking funding in the Fiscal Year 2017 defense budget to acquire 105 of the weapons for issue to specially trained personnel at the tactical unit level.

The purported capabilities of the weapon have certainly raised expectations for its utility. A program manager in the Army’s Program Executive Office declared “The introduction of the XM25 is akin to other revolutionary systems such as the machine gun, the airplane and the tank, all of which changed battlefield tactics.” An industry observer concurred, claiming that “The weapon’s potential revolutionary impact on infantry tactics is undeniable.”

Well…maybe. There is little doubt that the availability of precision-guided standoff weapons at the squad or platoon level will afford significant tactical advantages. Whatever technical problems currently exist will be addressed, and there will surely be improvements and upgrades.

It seems unlikely, however, that the XM25 will bring revolutionary change to the battlefield. In his 1980 study The Evolution of Weapons and Warfare, Trevor N. Dupuy explored the ongoing historical relationship between technological change and adaptation on the battlefield. The introduction of increasingly lethal weapons has led to corresponding changes in the ways armies fight.

Assimilation of a significant increase in [weapon] lethality has generally been marked (a) by dispersion, thus reducing the number of people exposed to the new weapon in the enemy’s hands; (b) by giving greater freedom of maneuver; and (c) by improving cooperation among the different arms and services. [p. 337]

As the chart below illustrates (click for a larger version), as weapons have become more lethal over time, combat forces have adjusted by dispersing in greater frontage and depth on the battlefield (as reflected by the red line).

[pp. 288-289]

Dupuy noted that there is a lag between the introduction of a new weapon and its full integration into an army’s tactics and force structure.

In modern times — and to some extent in earlier eras — there has been an interval of approximately twenty years between introduction and assimilation of new weapons…it is significant that, despite the rising tempo of invention, this time lag remained relatively constant. [p. 338]

Moreover, Dupuy observed that true military revolutions are historically rare, and require more than technological change to occur.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and more lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons. [p. 340]

Looking at the trends over time suggests that any resulting changes will be evolutionary rather than revolutionary. The ways armies historically have adapted to new weapons — dispersion, tactical flexibility, and combined arms — are hallmarks of the fire and movement concept at the heart of modern combat tactics, which evolved in the early years of the 20th century, particularly during the First World War. However effective the XM25 may prove to be, its impact is unlikely to alter the basic elements of fire and movement tactics. Enemy combatants will likely adapt through even greater dispersion (the modern “empty battlefield”), tactical innovation, and combinations of countering weapons. It is also likely that it will take time, trial and error, and effective organizational leadership to take full advantage of the XM25’s capabilities.


Are They Channeling Trevor Dupuy?

[Image: TrevorCoverShot]

Continuing the RAND description of their hex boardgame:

Ground unit combat strengths were based on a systematic scoring of individual weapons, from tanks and artillery down to light machine guns, which were then aggregated according to the tables of organization and equipment for the various classes of NATO and Russian units. Overall unit scores were adjusted to account for differences in training, sustainment, and other factors not otherwise captured. Air unit combat strengths were derived from the results of offline engagement, mission, and campaign-level modeling.

This looks like some kind of firepower or combat power score, or perhaps Trevor Dupuy’s OLIs (Operational Lethality Indexes). Since they say “systematic scoring,” one wonders what system they used. I know of only one scoring system that is systematic in the sense of being based upon formulae: the OLIs. The subject is probably best summarized in Dr. James Taylor’s article “Consistent Scoring of Weapons and Aggregation of Forces”: http://www.dupuyinstitute.org/pdf/v2n2.pdf. This is the same James Taylor who wrote the definitive two-volume work on Lanchester equations.
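Whatever scoring system RAND used, the aggregation step they describe (score each weapon, sum over the tables of organization and equipment, then adjust for training and sustainment) is straightforward to sketch. All numbers below are invented for the example; they are not OLI values or real TO&Es.

```python
# Hypothetical weapon scores, not actual OLI values
WEAPON_SCORES = {
    "main_battle_tank": 100.0,
    "self_propelled_howitzer": 60.0,
    "ifv": 35.0,
    "light_machine_gun": 1.0,
}

def unit_combat_strength(toe, training=1.0, sustainment=1.0):
    """Sum weapon scores over a unit's table of organization and
    equipment, then apply multiplicative adjustments for training,
    sustainment, and similar factors (one simple convention)."""
    base = sum(WEAPON_SCORES[weapon] * count for weapon, count in toe.items())
    return base * training * sustainment

blue = unit_combat_strength({"main_battle_tank": 44, "light_machine_gun": 60},
                            training=1.1)
red = unit_combat_strength({"main_battle_tank": 40, "ifv": 40,
                            "light_machine_gun": 80},
                           training=0.9, sustainment=0.95)
print(round(blue), round(red))  # 4906 4685
```

Whether the adjustments should be multiplicative, additive, or something else entirely is exactly the kind of methodological question a "systematic" scoring system has to answer.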

I do note with interest the adjustment for “differences in training, sustainment, and other factors.” That is always good to see.

Also noted:

Full documentation of the gaming platform will be forthcoming in a subsequent report.

I look forward to reading it.