
Assessing The Assessments Of The Military Balance In The China Seas

“If we maintain our faith in God, love of freedom, and superior global airpower, the future [of the US] looks good.” — U.S. Air Force General Curtis E. LeMay (Commander, Strategic Air Command, 1948-1957)

Curtis LeMay was involved in the formation of the RAND Corporation after World War II. RAND has created several models to measure the dynamics of the US-China military balance over time. Since 1996, this balance has been computed for two scenarios that differ in their distance from mainland China: one over Taiwan and the other over the Spratly Islands. The model results for selected years can be seen in the graphic below.

The capabilities listed in the RAND study are interesting, notably that in the air superiority category rough parity exists as of 2017. Also, the ability to attack air bases has given an advantage to the Chinese forces.

Investigating the methodology used does not yield any precise quantitative modeling examples, as would be expected of a rigorous academic effort, although there is some mention of statistics, simulation, and historical examples.

The analysis presented here necessarily simplifies a great number of conflict characteristics. The emphasis throughout is on developing and assessing metrics in each area that provide a sense of the level of difficulty faced by each side in achieving its objectives. Apart from practical limitations, selectivity is driven largely by the desire to make the work transparent and replicable. Moreover, given the complexities and uncertainties in modern warfare, one could make the case that it is better to capture a handful of important dynamics than to present the illusion of comprehensiveness and precision. All that said, the analysis is grounded in recognized conclusions from a variety of historical sources on modern warfare, from the air war over Korea and Vietnam to the naval conflict in the Falklands and SAM hunting in Kosovo and Iraq. [Emphasis added].

We coded most of the scorecards (nine out of ten) using a five-color stoplight scheme to denote major or minor U.S. advantage, a competitive situation, or major or minor Chinese advantage. Advantage, in this case, means that one side is able to achieve its primary objectives in an operationally relevant time frame while the other side would have trouble in doing so. [Footnote] For example, even if the U.S. military could clear the skies of Chinese escort fighters with minimal friendly losses, the air superiority scorecard could be coded as “Chinese advantage” if the United States cannot prevail while the invasion hangs in the balance. If U.S. forces cannot move on to focus on destroying attacking strike and bomber aircraft, they cannot contribute to the larger mission of protecting Taiwan.

All of the dynamic modeling methodology (which involved a mix of statistical analysis, Monte Carlo simulation, and modified Lanchester equations) is publicly available and widely used by specialists at U.S. and foreign civilian and military universities. [Emphasis added].

As TDI has contended before, the problem with using Lanchester’s equations is that, despite numerous efforts, no one has been able to demonstrate that they accurately represent real-world combat. So, even with statistics and simulation, how good are the results if they rely on factors or force ratios that bear no relation to actual combat?
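To make that critique concrete, below is a minimal sketch of the square law at the heart of Lanchester-type formulations. The attrition coefficients a and b are pure placeholders of my own choosing, which is exactly the point of the objection: no one has shown how to derive such coefficients from real combat data.

```python
# Minimal sketch of Lanchester's square law: dB/dt = -a*R, dR/dt = -b*B.
# The coefficients a and b are illustrative placeholders only; nothing
# ties these values to observed combat, which is the critique above.
def lanchester_square(blue, red, a=0.05, b=0.04, dt=0.1, t_max=100.0):
    """Euler-integrate the square law until one side reaches zero."""
    t = 0.0
    while blue > 0 and red > 0 and t < t_max:
        blue, red = blue - a * red * dt, red - b * blue * dt
        t += dt
    return max(blue, 0.0), max(red, 0.0), t

# A 3:2 force ratio: under the square law the larger force wins decisively,
# regardless of whether that reflects how real engagements actually unfold.
print(lanchester_square(blue=1500, red=1000))
```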

What about new capabilities?

As previously posted, the Kratos Mako Unmanned Combat Aerial Vehicle (UCAV), marketed as the “unmanned wingman,” has recently been cleared for export by the U.S. State Department. This vehicle is specifically oriented towards air-to-air combat and is stated to have unparalleled maneuverability, as it need not abide by the limits imposed by human physiology. The Mako “offers fighter-like performance and is designed to function as a wingman to manned aircraft, as a force multiplier in contested airspace, or to be deployed independently or in groups of UASs. It is capable of carrying both weapons and sensor systems.” In addition, the Mako can be launched independently of a runway, as illustrated below. The price for these vehicles is $3 million each, dropping to $2 million each for an order of at least 100 units. Assuming a cost of $95 million for an F-35A, we can imagine a hypothetical combat scenario pitting two F-35As against 100 of these Mako UCAVs in a drone swarm; a great example of the famous phrase, “quantity has a quality all its own.”
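As a rough check on that quantity-versus-quality arithmetic, the sketch below uses only the unit prices quoted above; the dollar figures are this post’s assumptions, not official program costs.

```python
# Back-of-the-envelope cost comparison using the figures quoted above.
F35A_COST = 95_000_000   # assumed F-35A unit cost, per the text
MAKO_COST = 2_000_000    # Mako unit price at quantity (orders of 100+)

swarm_size = 100
swarm_cost = swarm_size * MAKO_COST       # $200M for the swarm
f35_equivalent = swarm_cost / F35A_COST   # ~2.1 F-35As

print(f"A {swarm_size}-drone swarm costs ${swarm_cost / 1e6:.0f}M, "
      f"about {f35_equivalent:.1f} F-35As' worth of airframes")
```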

A battery of Kratos aerial target drones ready for takeoff. One of the advantages of the low-cost Kratos drones is their ability to get into the air quickly. [Kratos Defense]

How to evaluate the effects of these possible UCAV drone swarms?

In building up towards the analysis of all of these capabilities in a full theater, campaign-level conflict, some supplemental wargaming may be useful. One game that takes a good shot at modeling these dynamics is Asian Fleet. This is part of the venerable Fleet Series, published by Victory Games and designed by Joseph Balkoski to model modern (that is, Cold War) naval combat. This game system has been extended in recent years, originally by Command Magazine Japan, and then later by Technical Term Gaming Company.

Screenshot of Asian Fleet module by Bryan Taylor [vassalengine.org]

More to follow on how this game transpires!

‘Love’s Tables’: U.S. War Department Casualty Estimation in World War II

The same friend of TDI who asked about “Evett’s Rates,” the British casualty estimation methodology during World War II, also mentioned that the work of Albert G. Love III was now available on-line. Rick Atkinson also referenced “Love’s Tables” in The Guns at Last Light.

In 1931, Lieutenant Colonel (later Brigadier General) Love, then a Medical Corps physician in the U.S. Army Medical Field Services School, published a study of American casualty data in the recent Great War, titled “War Casualties.”[1] This study was likely the source for tables used for casualty estimation by the U.S. Army through 1944.[2]

Love, who had no advanced math or statistical training, undertook his study with the support of the Army Surgeon General, Merritte W. Ireland, and initial assistance from Dr. Lowell J. Reed, a professor of biostatistics at Johns Hopkins University. Love’s posting in the Surgeon General’s Office afforded him access to an array of casualty data collected from the records of the American Expeditionary Forces in France, as well as data from annual Surgeon General reports dating back to 1819, the official medical history of the U.S. Civil War, and U.S. general population statistics.

Love’s research was likely the basis for rate tables for calculating casualties that first appeared in the 1932 edition of the War Department’s Staff Officer’s Field Manual.[3]

Battle Casualties, including Killed, in Percent of Unit Strength, Staff Officer’s Field Manual (1932).

The 1932 Staff Officer’s Field Manual estimation methodology reflected Love’s sophisticated understanding of the factors influencing combat casualty rates. It showed that the resistance and combat strength (and all of the factors that comprised them) of the enemy, as well as the equipment and state of training and discipline of the friendly troops, had to be taken into consideration. The text accompanying the tables pointed out that loss rates in small units could be quite high and variable over time, that larger formations took fewer casualties as a fraction of overall strength, and that their rates tended to become more constant over time. Casualties were not distributed evenly, but concentrated most heavily among the combat arms, and in the front-line infantry in particular. Attackers usually suffered higher loss rates than defenders. Other factors to be accounted for included the character of the terrain, the relative amount of artillery on each side, and the employment of gas.

The 1941 iteration of the Staff Officer’s Field Manual, now designated Field Manual (FM) 101-10[4], provided two methods for estimating battle casualties. It included the original 1932 Battle Casualties table, but the associated text no longer included the section outlining factors to be considered in calculating loss rates. This passage was moved to a note appended to a new table showing the distribution of casualties among the combat arms.

Rather confusingly, FM 101-10 (1941) presented a second table, Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength. It included rates for front line regiments and divisions, corps and army units, reserves, and attached cavalry. The rates were broken down by posture and tactical mission.

Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength, FM 101-10 (1941)

Neither the source for this table nor the method by which it was derived is known. No explanatory text accompanied it, but a footnote stated that “this table is intended primarily for use in school work and in field exercises.” The rates in it were weighted toward the upper range of the figures provided in the 1932 Battle Casualties table.

The October 1943 edition of FM 101-10 contained no significant changes from the 1941 version, except for the caveat that the 1932 Battle Casualties table “may or may not prove correct when applied to the present conflict.”

The October 1944 version of FM 101-10 incorporated data obtained from World War II experience.[5] While it also noted that the 1932 Battle Casualties table might not be applicable, the experiences of the U.S. II Corps in North Africa and one division in Italy were found to be in agreement with the table’s division and corps loss rates.

FM 101-10 (1944) included another new table, Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), meaning that it now provided three distinct methods for estimating battle casualties.

Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), FM 101-10 (1944)

Like the 1941 Estimated Daily Losses in Campaign table, the sources for this new table were not provided, and the text contained no guidance as to how or when it should be used. The rates it contained fell roughly within the span for daily rates for severe (6-8%) to maximum (12%) combat listed in the 1932 Battle Casualty table, but would produce vastly higher overall rates if applied consistently, much higher than the 1932 table’s 1% daily average.
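To see why consistent application of these rates would produce vastly higher overall losses, consider a quick compounding check. This is only a sketch; it assumes each day’s rate is applied to the previous day’s remaining strength, with no replacements, which is one plausible reading of the tables.

```python
# Why a 6-12% daily rate dwarfs the 1932 table's 1% daily average when
# applied day after day. Assumes losses come out of remaining strength
# with no replacements -- an illustrative reading, not the manual's own.
def cumulative_losses(daily_rate, days, strength=15000):
    remaining = strength
    for _ in range(days):
        remaining -= remaining * daily_rate
    return strength - remaining

for rate in (0.01, 0.06, 0.12):
    lost = cumulative_losses(rate, days=30)
    print(f"{rate:.0%}/day over 30 days: {lost:,.0f} of 15,000 lost")
```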

FM 101-10 (1944) included a table showing the distribution of losses by branch for the theater based on experience to that date, except for combat in the Philippine Islands. The new chart was used in conjunction with the 1944 Estimate of Battle Losses for a Front-Line Division table to determine daily casualty distribution.

Distribution of Battle Losses–Theater of Operations, FM 101-10 (1944)

The final World War II version of FM 101-10 issued in August 1945[6] contained no new casualty rate tables, nor any revisions to the existing figures. It did finally effectively invalidate the 1932 Battle Casualties table by noting that “the following table has been developed from American experience in active operations and, of course, may not be applicable to a particular situation.” (original emphasis)

NOTES

[1] Albert G. Love, War Casualties, The Army Medical Bulletin, No. 24, (Carlisle Barracks, PA: 1931)

[2] This post is adapted from TDI, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 314-317).

[3] U.S. War Department, Staff Officer’s Field Manual, Part Two: Technical and Logistical Data (Government Printing Office, Washington, D.C., 1932)

[4] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., June 15, 1941)

[5] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., October 12, 1944)

[6] U.S. War Department, FM 101-10 Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., August 1, 1945)

‘Evett’s Rates’: British War Office Wastage Tables

Stretcher bearers of the East Surrey Regiment, with a Churchill tank of the North Irish Horse in the background, during the attack on Longstop Hill, Tunisia, 23 April 1943. [Imperial War Museum/Wikimedia]

A friend of TDI queried us recently about a reference in Rick Atkinson’s The Guns at Last Light: The War in Western Europe, 1944-1945 to a British casualty estimation methodology known as “Evett’s Rates.” There are few references to Evett’s Rates online, but as it happens, TDI did find out some details about them for a study on casualty estimation. [1]

British Army staff officers during World War II and the 1950s used a set of look-up tables which listed expected monthly losses in percentage of strength for various arms under various combat conditions. The origin of the tables is not known, but they were officially updated twice, in 1942 by a committee chaired by Major General Evett, and in 1951-1955 by the Army Operations Research Group (AORG).[2]

The methodology was based on staff predictions of one of three levels of operational activity, “Intense,” “Normal,” and “Quiet.” These could be applied to an entire theater, or to individual divisions. The three levels were defined the same way for both the Evett Committee and AORG rates:

The rates were broken down by arm and rank, and included battle and nonbattle casualties.

Rates of Personnel Wastage Including Both Battle and Non-battle Casualties According to the Evett Committee of 1942. (Percent per 30 days).

The Evett Committee rates were criticized during and after the war. After British forces suffered twice the anticipated casualties at Anzio, the British 21st Army Group applied a “double intense rate” which was twice the Evett Committee figure and intended to apply to assaults. When this led to overestimates of casualties in Normandy, the double intense rate was discarded.

From 1951 to 1955, AORG undertook a study of casualty rates in World War II. Its analysis was based on casualty data from the following campaigns:

  • Northwest Europe, 1944-45
    • 6-30 June – Beachhead offensive
    • 1 July-1 September – Containment and breakout
    • 1 October-30 December – Semi-static phase
    • 9 February to 6 May – Rhine crossing and final phase
  • Italy, 1944
    • January to December – Fighting a relatively equal enemy in difficult country. Warfare often static.
    • January to February (Anzio) – Beachhead held against severe and well-conducted enemy counter-attacks.
  • North Africa, 1943
    • 14 March-13 May – final assault
  • Northwest Europe, 1940
    • 10 May-2 June – Withdrawal of BEF
  • Burma, 1944-45

From the first four cases, the AORG study calculated two sets of battle casualty rates as percentage of strength per 30 days. “Overall” rates included KIA, WIA, C/MIA. “Apparent rates” included these categories but subtracted troops returning to duty. AORG recommended that “overall” rates be used for the first three months of a campaign.

The Burma campaign data was evaluated differently. The analysts defined a “force wastage” category which included KIA, C/MIA, evacuees from outside the force operating area and base hospitals, and DNBI deaths. “Dead wastage” included KIA, C/MIA, DNBI dead, and those discharged from the Army as a result of injuries.

The AORG study concluded that the Evett Committee underestimated intense loss rates for infantry and armor during periods of very hard fighting and overestimated casualty rates for other arms. It recommended that if only one brigade in a division was engaged, two-thirds of the intense rate should be applied, if two brigades were engaged the intense rate should be applied, and if all brigades were engaged then the intense rate should be doubled. It also recommended that 2% extra casualties per month should be added to all the rates for all activities should the forces encounter heavy enemy air activity.[1]
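Those recommendations amount to a simple adjustment algorithm; a sketch follows. The base intense rate is left as a parameter, since the actual AORG table values appear only in the chart below, and the assumption of a three-brigade division is mine.

```python
# Sketch of the AORG adjustment rules described above. base_rate is the
# tabulated intense rate (percent per 30 days), left as a parameter since
# the actual values are in the chart below. Assumes a three-brigade division.
def adjusted_intense_rate(base_rate, brigades_engaged, heavy_enemy_air=False):
    if brigades_engaged <= 1:
        rate = base_rate * 2.0 / 3.0   # one brigade engaged: two-thirds intense
    elif brigades_engaged == 2:
        rate = base_rate               # two brigades engaged: full intense rate
    else:
        rate = base_rate * 2.0         # all brigades engaged: double intense
    if heavy_enemy_air:
        rate += 2.0                    # +2% per month under heavy air activity
    return rate

print(adjusted_intense_rate(base_rate=10.0, brigades_engaged=3,
                            heavy_enemy_air=True))  # 22.0
```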

The AORG study rates were as follows:

Recommended AORG Rates of Personnel Wastage. (Percent per 30 days).

If anyone has further details on the origins and activities of the Evett Committee and AORG, we would be very interested in finding out more on this subject.

NOTES

[1] This post is adapted from The Dupuy Institute, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 51-53).

[2] Rowland Goodman and Hugh Richardson. “Casualty Estimation in Open and Guerrilla Warfare.” (London: Directorate of Science (Land), U.K. Ministry of Defence, June 1995.), Appendix A.

Perla On Dupuy

Dr. Peter Perla, noted defense researcher, wargame designer and expert, and author of the seminal The Art of Wargaming: A Guide for Professionals and Hobbyists, gave the keynote address at the 2017 Connections Wargaming Conference last August. His speech, which served as his valedictory address on the occasion of his retirement from government service, addressed the predictive power of wargaming. In it, Perla recalled a conversation he once had with Trevor Dupuy in the early 1990s:

Like most good stories, this one has a beginning, a middle, and an end. I have sort of jumped in at the middle. So let’s go back to the beginning.

As it happens, that beginning came during one of the very first Connections. It may even have been the first one. This thread is one of those vivid memories we all have of certain events in life. In my case, it is a short conversation I had with Trevor Dupuy.

I remember the setting well. We were in front of the entrance to the O Club at Maxwell. It was kind of dark, but I can’t recall if it was in the morning before the club opened for our next session, or the evening, before a dinner. Trevor and I were chatting and he said something about wargaming being predictive. I still recall what I said.

“Good grief, Trevor, we can’t even predict the outcome of a Super Bowl game much less that of a battle!” He seemed taken by surprise that I felt that way, and he replied, “Well, if that is true, what are we doing? What’s the point?”

I had my usual stock answers. We wargame to develop insights, to identify issues, and to raise questions. We certainly don’t wargame to predict what will happen in a battle or a war. I was pretty dogmatic in those days. Thank goodness I’m not that way any more!

The question of prediction did not go away, however.

For the rest of Perla’s speech, see here. For a wonderful summary of the entire 2017 Connections Wargaming conference, see here.

 

Spotted In The New Books Section Of The U.S. Naval Academy Library…

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017) 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

Attrition In Future Land Combat

Soldiers with Battery C, 1st Battalion, 82nd Field Artillery Regiment, 1st Brigade Combat Team, 1st Cavalry Division maneuver their Paladins through Hohenfels Training Area, Oct. 26. Photo Credit: Capt. John Farmer, 1st Brigade Combat Team, 1st Cav

[This post was originally published on June 9, 2017]

Last autumn, U.S. Army Chief of Staff General Mark Milley asserted that “we are on the cusp of a fundamental change in the character of warfare, and specifically ground warfare. It will be highly lethal, very highly lethal, unlike anything our Army has experienced, at least since World War II.” He made these comments while describing the Army’s evolving Multi-Domain Battle concept for waging future combat against peer or near-peer adversaries.

How lethal will combat on future battlefields be? Forecasting the future is, of course, an undertaking fraught with uncertainties. Milley’s comments undoubtedly reflect the Army’s best guesses about the likely impact of new weapons systems of greater lethality and accuracy, as well as improved capabilities for acquiring targets. Many observers have been closely watching the use of such weapons on the battlefield in the Ukraine. The spectacular success of the Zelenopillya rocket strike in 2014 was a convincing display of the lethality of long-range precision strike capabilities.

Ground combat attrition between peer or near-peer combatants in the future may be comparable to the U.S. experience in World War II (although there were considerable differences between the experiences of the various belligerents). Combat losses could be heavier. It certainly seems likely that they would be higher than those experienced by U.S. forces in recent counterinsurgency operations.

Unfortunately, the U.S. Defense Department has demonstrated a tenuous understanding of the phenomenon of combat attrition. Despite wildly inaccurate estimates for combat losses in the 1991 Gulf War, only modest effort has been made since then to improve understanding of the relationship between combat and casualties. The U.S. Army currently does not have either an approved tool or a formal methodology for casualty estimation.

Historical Trends in Combat Attrition

Trevor Dupuy did a great deal of historical research on attrition in combat. He found several trends that had strong enough empirical backing that he deemed them to be verities. He detailed his conclusions in Understanding War: History and Theory of Combat (1987) and Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (1995).

Dupuy documented a clear relationship over time between increasing weapon lethality, greater battlefield dispersion, and declining casualty rates in conventional combat. Even as weapons became more lethal, greater dispersal in frontage and depth among ground forces led daily personnel loss rates in battle to decrease.

The average daily battle casualty rate in combat has been declining since 1600 as a consequence. Since battlefield weapons continue to increase in lethality and troops continue to disperse in response, it seems logical to presume the trend in loss rates continues to decline, although this may not necessarily be the case. There were two instances in the 19th century where daily battle casualty rates increased—during the Napoleonic Wars and the American Civil War—before declining again. Dupuy noted that combat casualty rates in the 1973 Arab-Israeli War remained roughly the same as those in World War II (1939-45), almost thirty years earlier. Further research is needed to determine if average daily personnel loss rates have indeed continued to decrease into the 21st century.

Dupuy also discovered that, as with battle outcomes, casualty rates are influenced by the circumstantial variables of combat. Posture, weather, terrain, season, time of day, surprise, fatigue, level of fortification, and “all out” efforts affect loss rates. (The combat loss rates of armored vehicles, artillery, and other weapons systems are directly related to personnel loss rates, and are affected by many of the same factors.) Consequently, yet counterintuitively, he could find no direct relationship between numerical force ratios and combat casualty rates. Combat power ratios, which take into account the circumstances of combat, do affect casualty rates; forces with greater combat power inflict higher rates of casualties than less powerful forces do.

Winning forces suffer lower rates of combat losses than losing forces do, whether attacking or defending. (It should be noted that there is a difference between combat loss rates and numbers of losses. Depending on the circumstances, Dupuy found that the numerical losses of the winning and losing forces may often be similar, even if the winner’s casualty rate is lower.)

Dupuy’s research confirmed that the combat loss rates of smaller forces are higher than those of larger forces. This is in part due to the fact that smaller forces have a larger proportion of their troops exposed to enemy weapons; combat casualties tend to be concentrated in the forward-deployed combat and combat support elements. Dupuy also surmised that Prussian military theorist Carl von Clausewitz’s concept of friction plays a role in this. The complexity of interactions between increasing numbers of troops and weapons simply diminishes the lethal effects of weapons systems on real-world battlefields.

Somewhat unsurprisingly, higher quality forces (that better manage the ambient effects of friction in combat) inflict casualties at higher rates than those with less effectiveness. This can be seen clearly in the disparities in casualties between German and Soviet forces during World War II, Israeli and Arab combatants in 1973, and U.S. and coalition forces and the Iraqis in 1991 and 2003.

Combat Loss Rates on Future Battlefields

What do Dupuy’s combat attrition verities imply about casualties in future battles? As a baseline, he found that the average daily combat casualty rate in Western Europe during World War II for divisional-level engagements was 1-2% for winning forces and 2-3% for losing ones. For a divisional slice of 15,000 personnel, this meant daily combat losses of 150-450 troops, concentrated in the maneuver battalions (The ratio of wounded to killed in modern combat has been found to be consistently about 4:1. 20% are killed in action; the other 80% include mortally wounded/wounded in action, missing, and captured).
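The arithmetic behind those baseline figures is worth making explicit; this sketch simply applies the quoted rates and the 4:1 wounded-to-killed split to the 15,000-man division slice.

```python
# Dupuy's WWII division-level baseline, as quoted above: 1-2% daily
# casualties for winners, 2-3% for losers, ~20% of casualties killed.
DIVISION_SLICE = 15000

def daily_losses(rate):
    casualties = DIVISION_SLICE * rate
    return casualties, 0.20 * casualties, 0.80 * casualties

for label, rate in (("winner, low", 0.01), ("winner, high", 0.02),
                    ("loser, high", 0.03)):
    total, killed, other = daily_losses(rate)
    print(f"{label}: {total:.0f}/day ({killed:.0f} KIA, {other:.0f} WIA/MIA/POW)")
```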

It seems reasonable to conclude that future battlefields will be less densely occupied. Brigades, battalions, and companies will be fighting in spaces formerly filled with armies, corps, and divisions. Fewer troops mean fewer overall casualties, but the daily casualty rates of individual smaller units may well exceed those of WWII divisions. Smaller forces experience significant variation in daily casualties, but Dupuy established average daily rates for them as shown below.

For example, based on Dupuy’s methodology, the average daily loss rate unmodified by combat variables for brigade combat teams would be 1.8% per day, battalions would be 8% per day, and companies 21% per day. For a brigade of 4,500, that would result in 81 battle casualties per day, a battalion of 800 would suffer 64 casualties, and a company of 120 would lose 27 troops. These rates would then be modified by the circumstances of each particular engagement.

Several factors could push daily casualty rates down. Milley envisions that U.S. units engaged in an anti-access/area denial environment will be constantly moving. A low density, highly mobile battlefield with fluid lines would be expected to reduce casualty rates for all sides. High mobility might also limit opportunities for infantry assaults and close quarters combat. The high operational tempo will be exhausting, according to Milley. This could also lower loss rates, as the casualty inflicting capabilities of combat units decline with each successive day in battle.

It is not immediately clear how cyberwarfare and information operations might influence casualty rates. One combat variable they might directly impact would be surprise. Dupuy identified surprise as one of the most potent combat power multipliers. A surprised force suffers a higher casualty rate and surprisers enjoy lower loss rates. Russian combat doctrine emphasizes using cyber and information operations to achieve it and forces with degraded situational awareness are highly susceptible to it. As Zelenopillya demonstrated, surprise attacks with modern weapons can be devastating.

Some factors could push combat loss rates up. Long-range precision weapons could expose greater numbers of troops to enemy fires, which would drive casualties up among combat support and combat service support elements. Casualty rates historically drop during nighttime hours, but modern night-vision technology and persistent drone reconnaissance will likely enable continuous night and day battle, which could result in higher losses.

Drawing solid conclusions is difficult but the question of future battlefield attrition is far too important not to be studied with greater urgency. Current policy debates over whether or not the draft should be reinstated and the proper size and distribution of manpower in active and reserve components of the Army hinge on getting this right. The trend away from mass on the battlefield means that there may not be a large margin of error should future combat forces suffer higher combat casualties than expected.

TDI Friday Read: How Do We Know What We Know About War?

The late, great Carl Sagan.

Today’s edition of TDI Friday Read asks the question, how do we know if the theories and concepts we use to understand and explain war and warfare accurately depict reality? There is certainly no shortage of explanatory theories available, starting with Sun Tzu in the 6th century BCE and running to the present. As I have mentioned before, all combat models and simulations are theories about how combat works. Military doctrine is also a functional theory of warfare. But how do we know if any of these theories are actually true?

Well, one simple way to find out if a particular theory is valid is to use it to predict the outcome of the phenomenon it purports to explain. Testing theory through prediction is a fundamental aspect of the philosophy of science. If a theory is accurate, it should be able to produce a reasonably accurate prediction of future behavior.

In his 2016 article, “Can We Predict Politics? Toward What End?” Michael D. Ward, a Professor of Political Science at Duke University, made a case for a robust effort for using prediction as a way of evaluating the thicket of theory populating security and strategic studies. Dropping invalid theories and concepts is important, but there is probably more value in figuring out how and why they are wrong.

Screw Theory! We Need More Prediction in Security Studies!

Trevor Dupuy and TDI publicly put their theories to the test in the form of combat casualty estimates for the 1991 Gulf War, the U.S. intervention in Bosnia, and the Iraqi insurgency. How well did they do?

Predictions

Dupuy himself argued passionately for independent testing of combat models against real-world data, a process known as validation. This is actually seldom done in the U.S. military operations research community.

Military History and Validation of Combat Models

However, TDI has done validation testing of Dupuy’s Quantified Judgment Model (QJM) and Tactical Numerical Deterministic Model (TNDM). The results are available for all to judge.

Validating Trevor Dupuy’s Combat Models

I will conclude this post on a dissenting note. Trevor Dupuy spent decades arguing for more rigor in the development of combat models and analysis, with only modest success. In fact, he encountered significant skepticism and resistance to his ideas and proposals. To this day, the U.S. Defense Department seems relatively uninterested in evidence-based research on this subject. Why?

David Wilkinson, Editor-in-Chief of the Oxford Review, wrote a fascinating blog post looking at why practitioners seem to have little actual interest in evidence-based practice.

Why evidence-based practice probably isn’t worth it…

His argument:

The problem with evidence based practice is that outside of areas like health care and aviation/technology is that most people in organisations don’t care about having research evidence for almost anything they do. That doesn’t mean they are not interesting in research but they are just not that interested in using the research to change how they do things – period.

His explanation for why this is and what might be done to remedy the situation is quite interesting.

Happy Holidays to all!

Validating Trevor Dupuy’s Combat Models

[The article below is reprinted from the Winter 2010 edition of The International TNDM Newsletter.]

A Summation of QJM/TNDM Validation Efforts

By Christopher A. Lawrence

There have been six or seven different validation tests conducted of the QJM (Quantified Judgment Model) and the TNDM (Tactical Numerical Deterministic Model). As the changes to these two models are evolutionary in nature but do not fundamentally change the nature of the models, the whole series of validation tests across both models is worth noting. To date, this is the only model we are aware of that has been through multiple validations. We are not aware of any DOD [Department of Defense] combat model that has undergone more than one validation effort. Most of the DOD combat models in use have not undergone any validation.

The Two Original Validations of the QJM

After its initial development using a 60-engagement WWII database, the QJM was tested in 1973 by application of its relationships and factors to a validation database of 21 World War II engagements in Northwest Europe in 1944 and 1945. The original model proved to be 95% accurate in explaining the outcomes of these additional engagements. Overall accuracy in predicting the results of the 81 engagements in the developmental and validation databases was 93%.[1]

During the same period the QJM was converted from a static model that only predicted success or failure to one capable of also predicting attrition and movement. This was accomplished by adding variables and modifying factor values. The original QJM structure was not changed in this process. The addition of movement and attrition as outputs allowed the model to be used dynamically in successive “snapshot” iterations of the same engagement.

From 1973 to 1979 the QJM’s formulae, procedures, and variable factor values were tested against the results of all of the 52 significant engagements of the 1967 and 1973 Arab-Israeli Wars (19 from the former, 33 from the latter). The QJM was able to replicate all of those engagements with an accuracy of more than 90%.[2]

In 1979 the improved QJM was revalidated by application to 66 engagements. These included 35 from the original 81 engagements (the “development database”), and 31 new engagements. The new engagements included five from World War II and 26 from the 1973 Middle East War. This new validation test considered four outputs: success/failure, movement rates, personnel casualties, and tank losses. The QJM predicted success/failure correctly for about 85% of the engagements. It predicted movement rates with an error of 15% and personnel attrition with an error of 40% or less. While the error rate for tank losses was about 80%, it was discovered that the model consistently underestimated tank losses because input data included all kinds of armored vehicles, but output data losses included only numbers of tanks.[3]

This completed the original validation efforts of the QJM. The data used for the validations, and parts of the results of the validation, were published, but no formal validation report was issued. The validation was conducted in-house by Colonel Dupuy’s organization, HERO [Historical Evaluation Research Organization]. The data used were mostly from division-level engagements, although they included some corps- and brigade-level actions. We count these as two separate validation efforts.

The Development of the TNDM and Desert Storm

In 1990 Col. Dupuy, with the collaborative assistance of Dr. James G. Taylor (author of Lanchester Models of Warfare [vol. 1] [vol. 2], published by the Operations Research Society of America, Arlington, Virginia, in 1983) introduced a significant modification: the representation of the passage of time in the model. Instead of resorting to successive “snapshots,” the introduction of Taylor’s differential equation technique permitted the representation of time as a continuous flow. While this new approach required substantial changes to the software, the relationship of the model to historical experience was unchanged.[4] This revision of the model also included the substitution of formulae for some of its tables so that there was a continuous flow of values across the individual points in the tables. It also included some adjustment to the values and tables in the QJM. Finally, it incorporated a revised OLI [Operational Lethality Index] calculation methodology for modern armor (mobile fighting machines) to take into account all the factors that influence modern tank warfare.[5] The model was reprogrammed in Turbo PASCAL (the original had been written in BASIC). The new model was called the TNDM (Tactical Numerical Deterministic Model).
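To illustrate what replacing successive “snapshots” with a continuous flow of time means in practice, here is a generic sketch. These are emphatically not the TNDM’s actual equations; the sketch shows only the textbook difference between discrete iteration and continuous (exponential) attrition.

```python
# Generic illustration of discrete "snapshots" versus continuous time.
# Not the TNDM's actual formulation -- just the textbook contrast.
import math

def snapshot_attrition(strength, daily_rate, days):
    """Discrete daily snapshots: losses applied once per day."""
    for _ in range(days):
        strength *= 1 - daily_rate
    return strength

def continuous_attrition(strength, daily_rate, days):
    """The same rate treated as a differential equation: dS/dt = -r*S."""
    return strength * math.exp(-daily_rate * days)

print(snapshot_attrition(15000, 0.02, 10))    # ~12,256
print(continuous_attrition(15000, 0.02, 10))  # ~12,281
```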

Building on its foundation of historical validation and proven attrition methodology, in December 1990, HERO used the TNDM to predict the outcome of, and losses from, the impending Operation DESERT STORM.[6] It was the most accurate (lowest) public estimate of U.S. war casualties provided before the war. It differed from most other public estimates by an order of magnitude.

Also, in 1990, Trevor Dupuy published an abbreviated form of the TNDM in the book Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War. A brief validation exercise using 12 battles from 1805 to 1973 was published in this book.[7] This version was used for creation of M-COAT[8] and was also separately tested by a student (Lieutenant Gozel) at the Naval Postgraduate School in 2000.[9] This version did not have the firepower scoring system, and as such neither M-COAT, Lieutenant Gozel’s test, nor Colonel Dupuy’s 12-battle validation included the OLI methodology that is in the primary version of the TNDM.

For counting purposes, I consider the Gulf War the third validation of the model. In the end, for any model, the proof is in the pudding. Can the model be used as a predictive tool or not? If not, then there is probably a fundamental flaw or two in the model. Still the validation of the TNDM was somewhat second-hand, in the sense that the closely-related previous model, the QJM, was validated in the 1970s to 200 World War II and 1967 and 1973 Arab-Israeli War battles, but the TNDM had not been. Clearly, something further needed to be done.

The Battalion-Level Validation of the TNDM

Under the guidance of Christopher A. Lawrence, The Dupuy Institute undertook a battalion-level validation of the TNDM in late 1996. This effort tested the model against 76 engagements from World War I, World War II, and the post-1945 world including Vietnam, the Arab-Israeli Wars, the Falklands War, Angola, Nicaragua, etc. This effort was thoroughly documented in The International TNDM Newsletter.[10] This effort was probably one of the more independent and better-documented validations of a casualty estimation methodology that has ever been conducted to date, in that:

  • The data was independently assembled (assembled for other purposes before the validation) by a number of different historians.
  • There were no calibration runs or adjustments made to the model before the test.
  • The data included a wide range of material from different conflicts and times (from 1918 to 1983).
  • The validation runs were conducted independently (Susan Rich conducted the validation runs, while Christopher A. Lawrence evaluated them).
  • The results of the validation were fully published.
  • The people conducting the validation were independent, in the sense that:

a) there was no contract, management, or agency requesting the validation;
b) none of the validators had previously been involved in designing the model, and had only very limited experience in using it; and
c) the original model designer was not able to oversee or influence the validation.[11]

The validation was not truly independent, as the model tested was a commercial product of The Dupuy Institute, and the person conducting the test was an employee of the Institute. On the other hand, this was an independent effort in the sense that the effort was employee-initiated and not requested or reviewed by the management of the Institute. Furthermore, the results were published.

The TNDM was also given a limited validation test back to its original WWII data around 1997 by Niklas Zetterling of the Swedish War College, who retested the model against about 15 Italian campaign engagements. This effort included a complete review of the historical data used for the validation back to their primary sources, and details were published in The International TNDM Newsletter.[12]

There has been one other effort to correlate outputs from QJM/TNDM-inspired formulae to historical data using the Ardennes and Kursk campaign-level (i.e., division-level) databases.[13] This effort did not use the complete model, but only selective pieces of it, and achieved various degrees of “goodness of fit.” While the model is hypothetically designed for use from squad level to army group level, to date no validation has been attempted below battalion level, or above division level. At this time, the TNDM also needs to be revalidated back to its original WWII and Arab-Israeli War data, as it has evolved since the original validation effort.

The Corps- and Division-level Validations of the TNDM

Having now done one extensive battalion-level validation of the model and published the results in our newsletters, Volume 1, issues 5 and 6, we were then presented an opportunity in 2006 to conduct two more validations of the model. These are discussed in depth in two articles of this issue of the newsletter.

These validations were again conducted using historical data, 24 days of corps-level combat and 25 cases of division-level combat drawn from the Battle of Kursk during 4-15 July 1943. It was conducted using an independently-researched data collection (although the research was conducted by The Dupuy Institute), using a different person to conduct the model runs (although that person was an employee of the Institute) and using another person to compile the results (also an employee of the Institute). To summarize the results of this validation (the historical figure is listed first followed by the predicted result):

There was one other effort that was done as part of work we did for the Army Medical Department (AMEDD). This is fully explained in our report Casualty Estimation Methodologies Study: The Interim Report dated 25 July 2005. In this case, we tested six different casualty estimation methodologies against 22 cases. These consisted of 12 division-level cases from the Italian Campaign (4 where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated) and 10 cases from the Battle of Kursk (2 cases where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated). These 22 cases were randomly selected from our earlier 628-case version of the DLEDB (Division-Level Engagement Database; it now has 752 cases). Again, the TNDM performed as well as or better than any of the other casualty estimation methodologies tested. As this validation effort used the Italian engagements previously used for validation (although some had been revised due to additional research) and three of the Kursk engagements that were later used for our division-level validation, it is debatable whether one would want to call this a seventh validation effort. Still, it was done as above, with one person assembling the historical data and another person conducting the model runs. This effort was conducted a year before the corps- and division-level validations described above and influenced them to the extent that we chose a higher CEV (Combat Effectiveness Value) for the later validation. A CEV of 2.5 was used for the Soviets for this test, vice the CEV of 3.0 that was used for the later tests.

Summation

The QJM has been validated at least twice. The TNDM has been tested or validated at least four times: once against an upcoming, imminent war; once against battalion-level data from 1918 to 1989; once against division-level data from 1943; and once against corps-level data from 1943. These last four validation efforts have been published and described in depth. The model continues, regardless of which validation is examined, to accurately predict outcomes and make reasonable predictions of advance rates, loss rates, and armor loss rates. This is regardless of the level of combat (battalion, division, or corps), the historical period (WWI, WWII, or modern), the situation of the combats, or the nationalities involved (American, German, Soviet, Israeli, various Arab armies, etc.). As the QJM, the model was effectively validated against around 200 World War II and 1967 and 1973 Arab-Israeli War battles. As the TNDM, it was validated against 125 corps-, division-, and battalion-level engagements from 1918 to 1989 and used as a predictive model for the 1991 Gulf War. This is the most extensive and systematic validation effort yet done for any combat model. The model has been tested and re-tested. It has been tested across multiple levels of combat and in a wide range of environments. It has been tested where human factors are lopsided, and where human factors are roughly equal. It has been independently spot-checked several times by others outside of the Institute. It is hard to say what more can be done to establish its validity and accuracy.

NOTES

[1] It is unclear what these percentages, quoted from Dupuy in the TNDM General Theoretical Description, specify. We suspect it is a measurement of the model’s ability to predict winner and loser. No validation report based on this effort was ever published. Also, the validation figures seem to reflect the results after any corrections made to the model based upon these tests. It does appear that the division-level validation was “incremental.” We do not know if the earlier validation tests were tested back to the earlier data, but we have reason to suspect not.

[2] The original QJM validation data was first published in the Combat Data Subscription Service Supplement, vol. 1, no. 3 (Dunn Loring VA: HERO, Summer 1975). (HERO Report #50) That effort used data from 1943 through 1973.

[3] HERO published its QJM validation database in The QJM Data Base (3 volumes) Fairfax VA: HERO, 1985 (HERO Report #100).

[4] The Dupuy Institute, The Tactical Numerical Deterministic Model (TNDM): A General and Theoretical Description, McLean VA: The Dupuy Institute, October 1994.

[5] This had the unfortunate effect of undervaluing WWII-era armor by about 75% relative to other WWII weapons when modeling WWII engagements. This left The Dupuy Institute with the compromise methodology of using the old OLI method for calculating armor (Mobile Fighting Machines) when doing WWII engagements and using the new OLI method for calculating armor when doing modern engagements.

[6] Testimony of Col. T. N. Dupuy, USA, Ret, Before the House Armed Services Committee, 13 Dec 1990. The Dupuy Institute File I-30, “Iraqi Invasion of Kuwait.”

[7] Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (HERO Books, Fairfax, VA, 1990), 123-4.

[8] M-COAT is the Medical Course of Action Tool created by Major Bruce Shahbaz. It is a spreadsheet model based upon the elements of the TNDM provided in Dupuy’s Attrition (op. cit.). It used a scoring system derived from elsewhere in the U.S. Army. As such, it is a simplified form of the TNDM with a different weapon scoring system.

[9] See Gözel, Ramazan. “Fitting Firepower Score Models to the Battle of Kursk Data,” NPGS Thesis. Monterey CA: Naval Postgraduate School.

[10] Lawrence, Christopher A. “Validation of the TNDM at Battalion Level.” The International TNDM Newsletter, vol. 1, no. 2 (October 1996); Bongard, Dave “The 76 Battalion-Level Engagements.” The International TNDM Newsletter, vol. 1, no. 4 (February 1997); Lawrence, Christopher A. “The First Test of the TNDM Battalion-Level Validations: Predicting the Winner” and “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, vol. 1 no. 5 (April 1997); and Lawrence, Christopher A. “Use of Armor in the 76 Battalion-Level Engagements,” and “The Second Test of the Battalion-Level Validation: Predicting Casualties Final Scorecard.” The International TNDM Newsletter, vol. 1, no. 6 (June 1997).

[11] Trevor N. Dupuy passed away in July 1995, and the validation was conducted in 1996 and 1997.

[12] Zetterling, Niklas. “CEV Calculations in Italy, 1943,” The International TNDM Newsletter, vol. 1, no. 6. McLean VA: The Dupuy Institute, June 1997. See also Research Plan, The Dupuy Institute Report E-3, McLean VA: The Dupuy Institute, 7 Oct 1998.

[13] See Gözel, “Fitting Firepower Score Models to the Battle of Kursk Data.”

TDI Friday Read: U.S. Airpower

[Image by Geopol Intelligence]

This weekend’s edition of TDI’s Friday Read is a collection of posts on the current state of U.S. airpower by guest contributor Geoffery Clark. The same factors changing the character of land warfare are changing the way conflict will be waged in the air. Clark’s posts highlight some of the ways these changes are influencing current and future U.S. airpower plans and concepts.

F-22 vs. F-35: Thoughts On Fifth Generation Fighters

The F-35 Is Not A Fighter

U.S. Armed Forces Vision For Future Air Warfare

The U.S. Navy and U.S. Air Force Debate Future Air Superiority

U.S. Marine Corps Concepts of Operation with the F-35B

The State of U.S. Air Force Air Power

Fifth Generation Deterrence

 

The Effects Of Dispersion On Combat

[The article below is reprinted from the December 1996 edition of The International TNDM Newsletter. A revised version appears in Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Potomac Books, 2017), Chapter 13.]

The Effects of Dispersion on Combat
by Christopher A. Lawrence

The TNDM[1] does not play dispersion. But it is clear that dispersion has continued to increase over time, and this must have some effect on combat. This effect was identified by Trevor N. Dupuy in his various writings, starting with The Evolution of Weapons and Warfare. His graph in Understanding War of battle casualty trends over time is presented here as Figure 1. As dispersion changes over time (dramatically), one would expect the casualties to change over time. I therefore went back to the Land Warfare Database (the 605 engagement version[2]) and proceeded to look at casualties over time and dispersion from every angle that I could.

I eventually realized that I was going to need a better definition of the time periods I was measuring, as measuring by year scattered the data, measuring by century assembled the data in too gross a manner, and measuring by war left a confusing picture due to the number of small wars with only two or three battles in them in the Land Warfare Database. I eventually defined the wars into 14 categories, so I could fit them onto one readable graph:

To give some idea of how representative the battles listed in the LWDB were for covering the period, I have included a count of the number of battles listed in Michael Clodfelter’s two-volume book Warfare and Armed Conflicts, 1618-1991. In the case of WWI, WWII and later, battles tend to be defined as divisional-level engagements, and there were literally tens of thousands of those.

I then tested my data again looking at the 14 wars that I defined:

  • Average Strength by War (Figure 2)
  • Average Losses by War (Figure 3)
  • Percent Losses Per Day By War (Figure 4)
  • Average People Per Kilometer By War (Figure 5)
  • Losses per Kilometer of Front by War (Figure 6)
  • Strength and Losses Per Kilometer of Front By War (Figure 7)
  • Ratio of Strength and Losses per Kilometer of Front by War (Figure 8)
  • Ratio of Strength and Losses per Kilometer of Front by Century (Figure 9)

A review of average strengths over time by century and by war showed no surprises (see Figure 2). Up through around 1900, battles were easy to define: they were one- to three-day affairs between clearly defined forces at a locale. The forces had a clear left flank and right flank that was not bounded by other friendly forces. After 1900 (and in a few cases before), warfare was fought on continuous fronts, with a ‘battle’ often being a large multi-corps operation. It is no longer clearly understood what is meant by a battle, as the forces, area covered, and duration can vary widely. For the LWDB, each battle was defined as the analyst wished. In the case of WWI, there are a lot of very large battles which drive the average battle size up. In the case of WWII, there are a lot of division-level battles, which bring the average down. In the case of the Arab-Israeli Wars, there are nothing but division and brigade-level battles, which bring the average down.

The interesting point to notice is that the average attacker strength in the 16th and 17th centuries is lower than the average defender strength. Later it is higher. This may be due to anomalies in our data selection.

Average losses by war (see Figure 3) suffer from the same battle definition problem.

Percent losses per day (see Figure 4) is a useful comparison through the end of the 19th century. After that, the battles get longer, and the definition of the duration of a battle is up to the analyst. Note the very clear and definite downward pattern of percent losses per day from the Napoleonic Wars through the Arab-Israeli Wars. Here is a very clear indication of the effects of dispersion. It would appear that from the 1600s to the 1800s the pattern was effectively constant and level, then it declines in a very systematic pattern. This partially contradicts Trevor Dupuy’s writing and graphs (see Figure 1). It does appear that after this period of decline the percent losses per day are set at a new, much lower plateau. Percent losses per day by war is attached.

Looking at the actual subject of this article, the dispersion of people (measured in people per kilometer of front) remained relatively constant from 1600 through the American Civil War (see Figure 5). Trevor Dupuy defined dispersion as the number of people in a box-like area. Unfortunately, I do not know how to measure that. I can clearly identify the left and right of a unit, but it is more difficult to tell how deep it is. Furthermore, density of occupation of this box is far from uniform, with a very forward bias. By the same token, fire delivered into this box is also not uniform, with a very forward bias. Therefore, I am quite comfortable measuring dispersion based upon unit frontage, more so than front multiplied by depth.
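Stated as a formula, the frontage-based measure is simply strength divided by the width of the front. The engagement figures in the sketch below are invented for illustration; only the 3,000 men/km ballpark echoes the WWII-era numbers discussed later in the article.

```python
# The frontage-based dispersion metric used in this analysis: men per
# linear kilometer of front, and losses per kilometer. The engagement
# figures here are invented purely for illustration.
def linear_density(strength, frontage_km):
    return strength / frontage_km

def losses_per_km(casualties, frontage_km):
    return casualties / frontage_km

# Hypothetical WWII division-level engagement: 15,000 men on a 5 km front.
print(linear_density(15000, 5.0))  # 3,000 men/km, near the 2,992 cited below
print(losses_per_km(300, 5.0))     # 60 casualties/km of front
```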

Note, when comparing the Napoleonic Wars to the American Civil War, that the dispersion remains about the same. Yet, if you look at the average casualties (Figure 3) and the average percent casualties per day (Figure 4), it is clear that the rate of casualty accumulation is lower in the American Civil War (this again partially contradicts Dupuy’s writings). There is no question that with the advent of the Minié ball, allowing for rapid-fire rifled muskets, the ability to deliver accurate firepower increased.

As you will also note, the average people per linear kilometer between WWI and WWII differs by a factor of a little over 1.5 to 1. Yet the actual difference in casualties (see Figure 4) is much greater. While one can just postulate that the difference is the change in dispersion squared (basically Dupuy’s approach), this does not seem to explain the complete difference, especially the difference between the Napoleonic Wars and the Civil War.

Instead of discussing dispersion, we should be discussing “casualty reduction efforts.” This basically consists of three elements:

  • Dispersion (D)
  • Increased engagement ranges (R)
  • More individual use of cover and concealment (C&C).

These three factors together result in a reduced chance to hit. They are also partially interrelated, as one cannot make more individual use of cover and concealment unless one is allowed to disperse. So, therefore, the need for cover and concealment increases the desire to disperse, and the process of dispersing allows one to use more cover and concealment.

Command and control is integrated into this construct as something that allows dispersion, while dispersion creates the need for better command and control. Therefore, improved command and control in this construct does not operate as a force modifier; rather, it enables a force to disperse.

Intelligence becomes more necessary as the opposing forces use cover and concealment and the ranges of engagement increase. By the same token, improved intelligence allows you to increase the range of engagement and forces the enemy to use better concealment.

This whole construct could be represented by the diagram at the top of the next page.

Now, I may have said the obvious here, but this construct is probably provable in each individual element, and the overall outcome is measurable. Each individual connection between these boxes may also be measurable.

Therefore, to measure the effects of reduced chance to hit, one would need to measure the following formulae (assuming these formulae are close to being correct):

(K * ΔD) + (K * ΔC&C) + (K * ΔR) = H

(K * ΔC2) = ΔD

(K * ΔD) = ΔC&C

(K * ΔW) + (K * ΔI) = ΔR

K = a constant
Δ = the change in a quantity (“delta”)
D = Dispersion
C&C = Cover & Concealment
R = Engagement Range
W = Weapon Characteristics
H = The chance to hit
C2 = Command and Control
I = Intelligence, or the ability to observe
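
To make the construct concrete, below is a minimal sketch in Python of how these relationships might be exercised, treating each K as a separate constant and reading the first equation’s result as the change in the chance to hit. All coefficient values are placeholders of my choosing, not estimates fitted to any data:

    # Minimal sketch of the construct above. Each K_* stands for a separate
    # constant K; all values are placeholders, not fitted to any data.

    K_D, K_CC, K_R = 1.0, 1.0, 1.0              # constants for the chance-to-hit equation
    K_C2, K_DD, K_W, K_I = 1.0, 1.0, 1.0, 1.0   # constants for the enabling equations

    def delta_dispersion(d_c2):
        """(K * dC2) = dD: better command and control enables dispersion."""
        return K_C2 * d_c2

    def delta_cover_concealment(d_d):
        """(K * dD) = dC&C: dispersing allows more use of cover and concealment."""
        return K_DD * d_d

    def delta_range(d_w, d_i):
        """(K * dW) + (K * dI) = dR: weapons and intelligence drive engagement range."""
        return K_W * d_w + K_I * d_i

    def delta_hit(d_d, d_cc, d_r):
        """(K * dD) + (K * dC&C) + (K * dR) = the change in the chance to hit."""
        return K_D * d_d + K_CC * d_cc + K_R * d_r

    # Propagate a notional improvement in C2, weapons, and intelligence:
    d_d = delta_dispersion(0.2)
    d_cc = delta_cover_concealment(d_d)
    d_r = delta_range(0.1, 0.1)
    print(delta_hit(d_d, d_cc, d_r))  # 0.6 with all placeholder constants at 1.0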

Also, certain actions lead to a desire for certain technological and system improvements. This includes the effect of increased dispersion leading to a need for better C2 and increased range leading to a need for better intelligence. I am not sure these are measurable.

I have also shown in the diagram how the enemy impacts upon this. There is also an interrelated mirror image of this construct for the other side.

I am focusing on this because I really want to come up with some means of measuring the effects of a “revolution in warfare.” The last 400 years of human history have given us more revolutionary inventions impacting war than we can reasonably expect to see in the next 100 years. In particular, I would like to measure the impact of increased weapon accuracy, improved intelligence, and improved C2 on combat.

For the purposes of the TNDM, I would very specifically like to work out an attrition multiplier for battles before WWII (and theoretically after WWII) based upon reduced chance to be hit (“dispersion”). For example, Dave Bongard is currently using an attrition multiplier of 4 for the WWI engagements that he is running for the battalion-level validation database.[3] No one can point to a piece of paper saying this is the value that should be used; Dave picked this value based upon experience and familiarity with the period.

I have also attached Average Losses per Kilometer of Front by War (see Figure 6 above), and a summary chart showing the two on the same chart (see Figure 7 above).

The values from these charts are:

The TNDM sets the WWII dispersion factor at 3,000 (which I gather translates into 30,000 men per square kilometer). The above data shows a linear dispersion of 2,992 men per kilometer of front, so this number parallels Dupuy’s figures.
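
As a back-of-envelope check (assuming, purely for illustration, that area density equals linear density divided by the depth of Dupuy’s box), the two figures can be reconciled as follows:

    # Back-of-envelope check; assumes, for illustration only, that
    # area density = linear density / depth for Dupuy's box-like area.
    linear_density = 2_992   # men per km of front, from the chart data above
    area_density = 30_000    # men per square km, as I read the TNDM figure
    implied_depth_km = linear_density / area_density
    print(round(implied_depth_km, 2))  # ~0.1 km of depth reconciles the two figures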

The final chart I have included is the Ratio of Strength and Losses per Kilometer of Front by War (Figure 8). Each line on the bar graph measures the average ratio of strength over casualties for either the attacker or defender. Being a ratio, unusual outcomes resulted in some really unusually high ratios. I took the liberty of taking out six data points because they appeared unusually lop-sided. Three of these points are from the English Civil War and were way out of line with everything else; these were the three Scottish battles where a small group of mostly sword-armed troops defeated a “modern” army. Also, Walcourt (1689), Front Royal (1862), and Calbritto (1943) were removed. I have also included the same chart, except by century (Figure 9).

Again, one sees a consistency in results over 300+ years of war, in this case going all the way through WWI, and then an entirely different pattern with WWII and the Arab-Israeli Wars.

A very tentative set of conclusions from all this is:

  1. Dispersion has been relatively constant and driven by factors other than firepower from 1600-1815.
  2. Since the Napoleonic Wars, units have increasingly dispersed (found ways to reduce their chance to be hit) in response to increased lethality of weapons.
  3. As a result of this increased dispersion, casualties in a given space have declined.
  4. The ratio of this decline in casualties over area has been roughly proportional to the strength over an area from 1600 through WWI. Starting with WWII, it appears that people have dispersed faster than weapons lethality has increased, and this trend has continued.
  5. In effect, people dispersed in direct relation to increased firepower from 1815 through 1920, and then after that time dispersed faster than the increase in lethality.
  6. It appears that since WWII, people have gone back to dispersing (reducing their chance to be hit) at the same rate that firepower is increasing.
  7. Effectively, there are four patterns of casualties in modern war:

Period 1 (1600 – 1815): Period of Stability

  • Short battles
  • Short frontages
  • High attrition per day
  • Constant dispersion
  • Dispersion decreasing slightly after late 1700s
  • Attrition decreasing slightly after mid-1700s.

Period 2 (1816 – 1905): Period of Adjustment

  • Longer battles
  • Longer frontages
  • Lower attrition per day
  • Increasing dispersion
  • Dispersion increasing slightly faster than lethality

Period 3 (1912 – 1920): Period of Transition

  • Long Battles
  • Continuous Frontages
  • Lower attrition per day
  • Increasing dispersion
  • Relative lethality per kilometer similar to the past, but lower
  • Dispersion increasing slightly faster than lethality

Period 4 (1937 – present): Modern Warfare

  • Long Battles
  • Continuous Frontages
  • Low Attrition per day
  • High dispersion (perhaps constant?)
  • Relative lethality per kilometer much lower than in the past
  • Dispersion increased much faster than lethality going into the period.
  • Dispersion increased at the same rate as lethality within the period.

So the question is whether warfare of the next 50 years will see a new “period of adjustment,” where the rate of dispersion (and other factors) adjusts in direct proportion to increased lethality, or whether there will be a significant change in the nature of war.

Note that when I use the word “dispersion” above, I often mean “reduced chance to be hit,” which consists of dispersion, increased engagement ranges, and use of cover & concealment.

One of the reasons I wandered into this subject was to see if the TNDM can be used for predicting combat before WWII. I then spent the next few days attempting to find some correlation between dispersion and casualties. Using the data on historical dispersion provided above, I created a mathematical formulation and tested it against the actual historical data points, and could not get any type of fit.

I then looked at the length of battles over time, at one-day battles, and attempted to find a pattern. I could find none. I also looked at other permutations, but did not keep a record of my attempts. I then looked through the work done by Dean Hartley (Oak Ridge) with the LWDB and called Paul Davis (RAND) to see if anyone had found any correlation between dispersion and casualties; they had not noted any.

It became clear to me that if there is any such correlation, it is buried so deep in the data that it cannot be found by any casual search. I suspect that I can find a mathematical correlation between weapon lethality, reduced chance to hit (including dispersion), and casualties. This would require some improvement to the data, some systematic measure of weapons lethality, and some serious regression analysis. I unfortunately cannot pursue this at this time.
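
For anyone who wants to repeat the attempt, the sketch below shows the shape of one such test: an ordinary least-squares fit of log percent losses per day against log linear dispersion. The data points are placeholders of my own, not LWDB values; the point is the procedure rather than the answer.

    # Minimal sketch: log-log regression of daily percent losses against
    # linear dispersion. Data points are placeholders, not LWDB values.
    import numpy as np

    dispersion = np.array([12_000, 9_000, 6_000, 3_000, 1_500])  # men per km of front
    pct_losses_per_day = np.array([15.0, 10.0, 4.0, 1.0, 0.5])   # percent per day

    slope, intercept = np.polyfit(np.log(dispersion), np.log(pct_losses_per_day), 1)
    residuals = np.log(pct_losses_per_day) - (slope * np.log(dispersion) + intercept)
    print(f"slope={slope:.2f}, rmse={np.sqrt(np.mean(residuals**2)):.2f}")
    # On the real data, no formulation I tried produced a usable fit.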

Finally, for reference, I have attached two charts showing the duration of the battles in the LWDB in days (Figure 10, Duration of Battles Over Time, and Figure 11, A Count of the Duration of Battles by War).

NOTES

[1] The Tactical Numerical Deterministic Model, a combat model developed by Trevor Dupuy in 1990-1991 as the follow-up to his Quantified Judgement Model. Dr. James G. Taylor and Jose Perez also contributed to the TNDM’s development.

[2] TDI’s Land Warfare Database (LWDB) was a revised version of a database created by the Historical Evaluation Research Organization (HERO) for the then-U.S. Army Concepts and Analysis Agency (now known as the U.S. Army Center for Army Analysis (CAA)) in 1984. Since the original publication of this article, TDI expanded and revised the data into a suite of databases.

[3] This matter is discussed in Christopher A. Lawrence, “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, April 1997, pp. 40-50.