Mystics & Statistics

A blog on quantitative historical analysis hosted by The Dupuy Institute

Disappearing Statistics

There was a time during the Iraq insurgency when statistics on the war were readily available. As a small independent contractor, we were getting the daily feed of incidents, casualties and other such material during the Iraq War. It was one of the daily intelligence reports for Iraq. We had simply emailed someone in the field and were put on their distribution list, even though we had no presence in Iraq and no official position. This was public information, so it was not a problem…until the second half of 2005…when suddenly the war was not going very well…then someone decided to restrict distribution. We received daily intelligence reports from 4 September 2004. They ended on 25 August 2005. There is more to this story, but maybe later.

This article was brought to my attention today: https://www.militarytimes.com/flashpoints/2017/10/30/report-us-officials-classify-crucial-metrics-on-afghan-casualties-readiness/

A few highlights:

  1. From January 1 to May 8, 2017, Afghan forces sustained 2,531 killed in action and 4,238 wounded (a 1.67-to-1 wounded-to-killed ratio, which seems very low).

  2. The Afghan armed forces control 56.8% of the 407 districts, a one percentage point drop over the last six months.

  3. The Afghan government controls 63.7% of the population.

  4. Some of these statistics will now be classified.
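
The cited ratio is simple arithmetic; a minimal sketch follows. The function name is our own, and the figures come from the report quoted above.

```python
# A minimal sketch of the arithmetic behind the ratio cited above.
# The function name is our own; the figures come from the report
# quoted in the post (4,238 WIA, 2,531 KIA).
def wounded_to_killed_ratio(wia: int, kia: int) -> float:
    """Return the wounded-to-killed (WIA:KIA) ratio."""
    return wia / kia

ratio = wounded_to_killed_ratio(wia=4238, kia=2531)
print(f"{ratio:.2f}-to-1")  # prints "1.67-to-1"
```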


One of our older posts on wounded-to-killed ratios. I have an entire chapter on the subject in War by Numbers.

Wounded-To-Killed Ratios

The Historical Combat Effectiveness of Lighter-Weight Armored Forces

A Stryker Infantry Carrier Vehicle-Dragoon fires 30 mm rounds during a live-fire demonstration at Aberdeen Proving Ground, Md., Aug. 16, 2017. Soldiers with 2nd Cavalry Regiment spent six weeks at Aberdeen testing and training on the new Stryker vehicle and a remote Javelin system, which are expected to head to Germany early next year for additional user testing. (Photo Credit: Sean Kimmons)

In 2001, The Dupuy Institute conducted a study for the U.S. Army Center for Army Analysis (CAA) on the historical effectiveness of lighter-weight armored forces. At the time, the Army had developed a requirement for an Interim Armored Vehicle (IAV), lighter and more deployable than the existing M1 Abrams Main Battle Tank and the M2 Bradley Infantry Fighting Vehicle, to form the backbone of the future “Objective Force.” This program would result in the development of the Stryker Infantry Carrier Vehicle.

CAA initiated the TDI study at the request of Walter W. “Don” Hollis, then the Deputy Undersecretary of the Army for Operations Research (a position that was eliminated in 2006). TDI completed and submitted “The Historical Combat Effectiveness of Lighter-Weight Armored Forces” to CAA in August 2001. It examined the effectiveness of light and medium-weight armored forces in six scenarios:

  • Conventional conflicts against an armor supported or armor heavy force.
  • Emergency insertions against an armor supported or armor heavy force.
  • Conventional conflict against a primarily infantry force (as one might encounter in sub-Saharan Africa).
  • Emergency insertion against a primarily infantry force.
  • A small to medium insurgency (includes an insurgency that develops during a peacekeeping operation).
  • A peacekeeping operation or similar Operation Other Than War (OOTW) that has some potential for violence.

The historical data the study drew upon came from 146 cases of small-scale contingency operations; U.S. involvement in Vietnam; German counterinsurgency operations in the Balkans, 1941-1945; the Philippines Campaign, 1941-42; the Normandy Campaign, 1944; the Korean War 1950-51; the Persian Gulf War, 1990-91; and U.S. and European experiences with light and medium-weight armor in World War II.

The major conclusions of the study were:

Small Scale Contingency Operations (SSCOs)

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It would appear that existing systems (M-2 and M-3 Bradley and M-113) can fulfill most requirements. Current plans to develop an advanced LAV-type vehicle may cover almost all other shortfalls. Mine protection is a design feature that should be emphasized.
  2. Implications for the Interim Brigade Combat Team (IBCT). The need for armor in SSCOs that are not conventional or closely conventional in nature is limited and rarely approaches the requirements of a brigade-size armored force.

Insurgencies

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It would appear that existing systems (M-2 and M-3 Bradley and M-113) can fulfill most requirements. The armor threat in insurgencies is very limited until the later stages if the conflict transitions to conventional war. In either case, mine protection is a design feature that may be critical.
  2. Implications for the Interim Brigade Combat Team (IBCT). It is the nature of insurgencies that rapid deployment of armor is not essential. The armor threat in insurgencies is very limited until the later stages if the conflict transitions to a conventional war and rarely approaches the requirements of a brigade-size armored force.

Conventional Warfare

Conventional Conflict Against An Armor Supported Or Armor Heavy Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It may be expected that opposing heavy armor in a conventional armor versus armor engagement could significantly overmatch the IAV. In this case the primary requirement would be for a weapon system that would allow the IAV to defeat the enemy armor before it could engage the IAV.
  2. Implications for the Interim Brigade Combat Team (IBCT). The IBCT could substitute as an armored cavalry force in such a scenario.

Conventional Conflict Against A Primarily Infantry Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. The conclusions here appear little different from those found for the use of armor in SSCOs and insurgencies.
  2. Implications for the Interim Brigade Combat Team (IBCT). The lack of a major armor threat will make the presence of armor useful.

Emergency Insertion Against An Armor Supported Or Armor Heavy Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. It appears that the IAV may be of great use in an emergency insertion. However, the caveat regarding the threat of being overmatched by conventional heavy armor mentioned above should not be ignored. In this case the primary requirement would be for a weapon system that would allow the IAV to defeat the enemy armor before it could engage the IAV.
  2. Implications for the Interim Brigade Combat Team (IBCT). Although the theoretical utility of the IBCT in this scenario may be great, it should be noted that The Dupuy Institute was only able to find one comparable case of such a deployment that resulted in actual conflict in US military history in the last 60 years (Korea, 1950). In this case the effect of pushing forward light tanks into the face of heavier enemy tanks was marginal.

Emergency Insertion Against A Primarily Infantry Force

  1. Implications for the Interim Armored Vehicle (IAV) Family of Vehicles. The lack of a major armor threat in this scenario will make the presence of any armor useful. However, The Dupuy Institute was unable to identify the existence of any such cases in the historical record.
  2. Implications for the Interim Brigade Combat Team (IBCT). The lack of a major armor threat will make the presence of any armor useful. However, The Dupuy Institute was unable to identify the existence of any such cases in the historical record.

Other Conclusions

Wheeled Vehicles

  1. There is little historical evidence one way or the other establishing whether wheels or tracks are the preferable feature of AFVs.

Vehicle Design

  1. In SSCOs access to a large-caliber main gun was useful for demolishing obstacles and buildings. This capability is not unique and could be replaced by AT-missile-armed CFVs, IFVs, and APCs.
  2. Any new lighter tank-like vehicle should make its gun system the highest priority, armor secondary and mobility and maneuverability tertiary.
  3. Mine protection should be emphasized. Mines were a major threat to all types of armor in many scenarios. In many SSCOs it was the major cause of armored vehicle losses.
  4. The robust carrying capacity offered by an APC over a tank is an advantage during many SSCOs.

Terrain Issues

  1. The use of armor in urban fighting, even in SSCOs, is still limited. The threat to armor from other armor in urban terrain during SSCOs is almost nonexistent. Most urban warfare armor needs, where armor basically serves as a support weapon, can be met with light armor (CFVs, IFVs, and APCs).
  2. Vehicle weight is sometimes a limiting factor in less developed areas. In all cases where this was a problem, there was not a corresponding armor threat. As such, in almost all cases, the missions and tasks of a tank can be fulfilled with other light armor (CFVs, IFVs, or APCs).
  3. The primary terrain problem is rivers and flooded areas. It would appear that in difficult terrain, especially heavily forested terrain (areas with lots of rainfall, like jungles), a robust river crossing capability is required.

Operational Factors

  1. Emergency insertions and delaying actions sometimes appear to be a good way to lose lots of armor for limited gain. This tends to come about due to terrain problems, enemy infiltration and bypassing, and the general confusion prevalent in such operations. The Army should be careful not to piecemeal assets when inserting valuable armor resources into a ‘hot’ situation. In many cases holding back and massing the armor for defense or counter-attack may be the better option.
  2. Transportability limitations have not been a major factor in the past in determining whether lighter or heavier armor was sent into a SSCO or a combat environment.

Casualty Sensitivity

  1. In a SSCO or insurgency, in most cases the weight and armor of the AFVs is not critical. As such, one would not expect any significant changes in losses regardless of the type of AFV used (MBT, medium-weight armor, or light armor). However, the perception that US forces are not equipped with the best-protected vehicle may cause some domestic political problems. The US government is very casualty sensitive during SSCOs. Furthermore, the current US main battle tank is particularly impressive, and may help provide some additional intimidation in SSCOs.
  2. In any emergency insertion scenario or conventional war scenario, the use of lighter armor could result in higher US casualties and lesser combat effectiveness. This will certainly cause some domestic political problems and may impact army morale. However, by the same token, light infantry forces unsupported by easily deployable armor could present a worse situation.

U.S. Army Solicits Proposals For Mobile Protected Firepower (MPF) Light Tank

The U.S. Army’s late and apparently lamented M551 Sheridan light tank. [U.S. Department of the Army/Wikipedia]

The U.S. Army recently announced that it will begin soliciting Requests for Proposal (RFP) in November to produce a new lightweight armored vehicle for its Mobile Protected Firepower (MPF) program. MPF is intended to field a company of vehicles for each Army Infantry Brigade Combat Team to provide them with “a long-range direct-fire capability for forcible entry and breaching operations.”

The Army also plans to field the new vehicle quickly. It is dispensing with the usual two-to-three year technology development phase, and will ask for delivery of the first sample vehicles by April 2018, one month after the RFP phase is scheduled to end. This will invariably favor proposals using existing off-the-shelf vehicle designs and “mature technology.”

The Army apparently will also accept RFPs with turret-mounted 105mm main guns, at least initially. According to previous MPF parameters, acceptable designs will eventually need to be able to accommodate 120mm guns.

I have observed in the past that the MPF is the result of the Army’s concerns that its light infantry may be deprived of direct fire support on anti-access/area denial (A2/AD) battlefields. Track-mounted, large caliber direct fire guns dedicated to infantry support are something of a doctrinal throwback to the assault guns of World War II, however.

There was a noted tendency during World War II to use anything on the battlefield that resembled a tank as a main battle tank, with unhappy results for the not-main battle tanks. As a consequence, assault guns, tank destroyers, and light tanks became evolutionary dead-ends in the development of post-World War II armored doctrine (the late M551 Sheridan, retired without replacement in 1996, notwithstanding). [For more on the historical background, see The Dupuy Institute, “The Historical Effectiveness of Lighter-Weight Armored Forces,” August 2001.]

The Army has been reluctant to refer to MPF as a light tank, but as David Dopp, the MPF Program Manager admitted, “I don’t want to say it’s a light tank, but it’s kind of like a light tank.” He went on to say that “It’s not going toe to toe with a tank…It’s for the infantry. It goes where the infantry goes — it breaks through bunkers, it works through targets that the infantry can’t get through.”

Major General David Bassett, program executive officer for the Army’s Ground Combat Systems concurred. It will be a tracked vehicle with substantial armor protection, Bassett said, “but certainly not what you’d see on a main battle tank.”

It will be interesting to see what the RFPs have to offer.

Previous TDI commentaries on the MPF Program:

https://dupuyinstitute.dreamhosters.com/2016/10/19/back-to-the-future-the-mobile-protected-firepower-mpf-program/

https://dupuyinstitute.dreamhosters.com/2017/03/21/u-s-army-moving-forward-with-mobile-protected-firepower-mpf-program/

Validating Trevor Dupuy’s Combat Models

[The article below is reprinted from the Winter 2010 edition of The International TNDM Newsletter.]

A Summation of QJM/TNDM Validation Efforts

By Christopher A. Lawrence

There have been six or seven different validation tests conducted of the QJM (Quantified Judgment Model) and the TNDM (Tactical Numerical Deterministic Model). As the changes between these two models are evolutionary and do not fundamentally alter their structure, the whole series of validation tests across both models is worth noting. To date, this is the only model we are aware of that has been through multiple validations. We are not aware of any DOD [Department of Defense] combat model that has undergone more than one validation effort. Most of the DOD combat models in use have not undergone any validation.

The Two Original Validations of the QJM

After its initial development using a 60-engagement WWII database, the QJM was tested in 1973 by application of its relationships and factors to a validation database of 21 World War II engagements in Northwest Europe in 1944 and 1945. The original model proved to be 95% accurate in explaining the outcomes of these additional engagements. Overall accuracy in predicting the results of the 81 engagements in the developmental and validation databases was 93%.[1]

During the same period the QJM was converted from a static model that only predicted success or failure to one capable of also predicting attrition and movement. This was accomplished by adding variables and modifying factor values. The original QJM structure was not changed in this process. The addition of movement and attrition as outputs allowed the model to be used dynamically in successive “snapshot” iterations of the same engagement.

From 1973 to 1979 the QJM’s formulae, procedures, and variable factor values were tested against the results of all of the 52 significant engagements of the 1967 and 1973 Arab-Israeli Wars (19 from the former, 33 from the latter). The QJM was able to replicate all of those engagements with an accuracy of more than 90%.[2]

In 1979 the improved QJM was revalidated by application to 66 engagements. These included 35 from the original 81 engagements (the “development database”), and 31 new engagements. The new engagements included five from World War II and 26 from the 1973 Middle East War. This new validation test considered four outputs: success/failure, movement rates, personnel casualties, and tank losses. The QJM predicted success/failure correctly for about 85% of the engagements. It predicted movement rates with an error of 15% and personnel attrition with an error of 40% or less. While the error rate for tank losses was about 80%, it was discovered that the model consistently underestimated tank losses because input data included all kinds of armored vehicles, but output data losses included only numbers of tanks.[3]
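
The kind of scoring described above can be illustrated with a minimal sketch: win/loss accuracy as the fraction of engagements whose outcome was predicted correctly, and casualty error as a mean absolute percentage error. This is our own illustration of the measures involved, not HERO's actual scoring procedure, and the data below are hypothetical.

```python
# Illustrative scoring measures for a model validation run. This is our
# own sketch, not HERO's actual procedure; the data are hypothetical.
def winner_accuracy(predicted, historical):
    """Fraction of engagements where the predicted winner matched history."""
    hits = sum(p == h for p, h in zip(predicted, historical))
    return hits / len(historical)

def mean_abs_pct_error(predicted, historical):
    """Mean absolute prediction error, as a fraction of the historical value."""
    return sum(abs(p - h) / h for p, h in zip(predicted, historical)) / len(historical)

# Four hypothetical engagements: three winners predicted correctly.
print(winner_accuracy(["attacker", "attacker", "defender", "attacker"],
                      ["attacker", "defender", "defender", "attacker"]))  # 0.75

# Three hypothetical casualty predictions vs. historical losses.
print(mean_abs_pct_error([1200, 450, 800], [1000, 500, 1000]))
```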

This completed the original validation efforts of the QJM. The data used for the validations, and parts of the results of the validation, were published, but no formal validation report was issued. The validation was conducted in-house by Colonel Dupuy’s organization, HERO [Historical Evaluation Research Organization]. The data used were mostly from division-level engagements, although they included some corps- and brigade-level actions. We count these as two separate validation efforts.

The Development of the TNDM and Desert Storm

In 1990 Col. Dupuy, with the collaborative assistance of Dr. James G. Taylor (author of the two-volume Lanchester Models of Warfare, published by the Operations Research Society of America, Arlington, Virginia, in 1983) introduced a significant modification: the representation of the passage of time in the model. Instead of resorting to successive “snapshots,” the introduction of Taylor’s differential equation technique permitted the representation of time as a continuous flow. While this new approach required substantial changes to the software, the relationship of the model to historical experience was unchanged.[4] This revision of the model also included the substitution of formulae for some of its tables so that there was a continuous flow of values across the individual points in the tables. It also included some adjustment to the values and tables in the QJM. Finally, it incorporated a revised OLI [Operational Lethality Index] calculation methodology for modern armor (mobile fighting machines) to take into account all the factors that influence modern tank warfare.[5] The model was reprogrammed in Turbo PASCAL (the original had been written in BASIC). The new model was called the TNDM (Tactical Numerical Deterministic Model).

Building on its foundation of historical validation and proven attrition methodology, in December 1990, HERO used the TNDM to predict the outcome of, and losses from, the impending Operation DESERT STORM.[6] It was the most accurate (lowest) public estimate of U.S. war casualties provided before the war. It differed from most other public estimates by an order of magnitude.

Also, in 1990, Trevor Dupuy published an abbreviated form of the TNDM in the book Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War. A brief validation exercise using 12 battles from 1805 to 1973 was published in this book.[7] This version was used for creation of M-COAT[8] and was also separately tested by a student (Lieutenant Gozel) at the Naval Postgraduate School in 2000.[9] This version did not have the firepower scoring system, and as such neither M-COAT, Lieutenant Gozel’s test, nor Colonel Dupuy’s 12-battle validation included the OLI methodology that is in the primary version of the TNDM.

For counting purposes, I consider the Gulf War the third validation of the model. In the end, for any model, the proof is in the pudding. Can the model be used as a predictive tool or not? If not, then there is probably a fundamental flaw or two in the model. Still, the validation of the TNDM was somewhat second-hand, in the sense that the closely-related previous model, the QJM, had been validated in the 1970s against some 200 World War II and 1967 and 1973 Arab-Israeli War battles, but the TNDM itself had not been. Clearly, something further needed to be done.

The Battalion-Level Validation of the TNDM

Under the guidance of Christopher A. Lawrence, The Dupuy Institute undertook a battalion-level validation of the TNDM in late 1996. This effort tested the model against 76 engagements from World War I, World War II, and the post-1945 world including Vietnam, the Arab-Israeli Wars, the Falklands War, Angola, Nicaragua, etc. This effort was thoroughly documented in The International TNDM Newsletter.[10] This effort was probably one of the more independent and better-documented validations of a casualty estimation methodology conducted to date, in that:

  • The data was independently assembled (assembled for other purposes before the validation) by a number of different historians.
  • There were no calibration runs or adjustments made to the model before the test.
  • The data included a wide range of material from different conflicts and times (from 1918 to 1983).
  • The validation runs were conducted independently (Susan Rich conducted the validation runs, while Christopher A. Lawrence evaluated them).
  • The results of the validation were fully published.
  • The people conducting the validation were independent, in the sense that:

a) there was no contract, management, or agency requesting the validation;
b) none of the validators had previously been involved in designing the model, and had only very limited experience in using it; and
c) the original model designer was not able to oversee or influence the validation.[11]

The validation was not truly independent, as the model tested was a commercial product of The Dupuy Institute, and the person conducting the test was an employee of the Institute. On the other hand, this was an independent effort in the sense that the effort was employee-initiated and not requested or reviewed by the management of the Institute. Furthermore, the results were published.

The TNDM was also given a limited validation test back to its original WWII data around 1997 by Niklas Zetterling of the Swedish War College, who retested the model against about 15 Italian campaign engagements. This effort included a complete review of the historical data used for the validation back to their primary sources, and details were published in The International TNDM Newsletter.[12]

There has been one other effort to correlate outputs from QJM/TNDM-inspired formulae to historical data using the Ardennes and Kursk campaign-level (i.e., division-level) databases.[13] This effort did not use the complete model, but only selective pieces of it, and achieved various degrees of “goodness of fit.” While the model is hypothetically designed for use from squad level to army group level, to date no validation has been attempted below battalion level, or above division level. At this time, the TNDM also needs to be revalidated back to its original WWII and Arab-Israeli War data, as it has evolved since the original validation effort.

The Corps- and Division-level Validations of the TNDM

Having now done one extensive battalion-level validation of the model and published the results in our newsletters (Volume 1, issues 5 and 6), we were then presented an opportunity in 2006 to conduct two more validations of the model. These are discussed in depth in two articles of this issue of the newsletter.

These validations were again conducted using historical data: 24 days of corps-level combat and 25 cases of division-level combat drawn from the Battle of Kursk during 4-15 July 1943. They were conducted using an independently-researched data collection (although the research was conducted by The Dupuy Institute), using a different person to conduct the model runs (although that person was an employee of the Institute), and using another person to compile the results (also an employee of the Institute). To summarize the results of this validation (the historical figure is listed first, followed by the predicted result):

There was one other effort that was done as part of work we did for the Army Medical Department (AMEDD). This is fully explained in our report Casualty Estimation Methodologies Study: The Interim Report dated 25 July 2005. In this case, we tested six different casualty estimation methodologies against 22 cases. These consisted of 12 division-level cases from the Italian Campaign (4 where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated) and 10 cases from the Battle of Kursk (2 cases where the attack failed, 4 where the attacker advanced, and 4 where the defender was penetrated). These 22 cases were randomly selected from our earlier 628-case version of the DLEDB (Division-Level Engagement Database; it now has 752 cases). Again, the TNDM performed as well as or better than any of the other casualty estimation methodologies tested. As this validation effort used the Italian engagements previously used for validation (although some had been revised due to additional research) and three of the Kursk engagements that were later used for our division-level validation, it is debatable whether one would want to call this a seventh validation effort. Still, it was done as above, with one person assembling the historical data and another person conducting the model runs. This effort was conducted a year before the corps- and division-level validation described above and influenced it to the extent that we chose a higher CEV (Combat Effectiveness Value) for the later validation. A CEV of 2.5 was used for the Soviets for this test, vice the CEV of 3.0 that was used for the later tests.
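
The stratified random draw described above can be sketched as follows. The category names and per-category counts follow the text; the case IDs are invented stand-ins for DLEDB entries, and the sampling code is our illustration, not the procedure actually used.

```python
# Sketch of a stratified random case selection: 22 cases drawn from a
# division-level engagement database, balanced across outcome categories.
# Category names and counts follow the text; case IDs are invented.
import random

def draw_cases(db, plan, seed=None):
    """db: mapping category -> list of case IDs; plan: category -> count."""
    rng = random.Random(seed)
    return {cat: rng.sample(db[cat], k) for cat, k in plan.items()}

# Hypothetical case IDs standing in for the 628-case DLEDB:
db = {
    ("Italy", "attack failed"):       [f"I-F{i}" for i in range(40)],
    ("Italy", "attacker advanced"):   [f"I-A{i}" for i in range(40)],
    ("Italy", "defender penetrated"): [f"I-P{i}" for i in range(40)],
    ("Kursk", "attack failed"):       [f"K-F{i}" for i in range(20)],
    ("Kursk", "attacker advanced"):   [f"K-A{i}" for i in range(20)],
    ("Kursk", "defender penetrated"): [f"K-P{i}" for i in range(20)],
}
plan = dict(zip(db, [4, 4, 4, 2, 4, 4]))  # 12 Italian + 10 Kursk = 22

sample = draw_cases(db, plan, seed=1)
print(sum(len(v) for v in sample.values()))  # 22
```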

Summation

The QJM has been validated at least twice. The TNDM has been tested or validated at least four times, once to an upcoming, imminent war, once to battalion-level data from 1918 to 1989, once to division-level data from 1943 and once to corps-level data from 1943. These last four validation efforts have been published and described in depth. The model continues, regardless of which validation is examined, to accurately predict outcomes and make reasonable predictions of advance rates, loss rates and armor loss rates. This is regardless of level of combat (battalion, division or corps), historic period (WWI, WWII or modern), the situation of the combats, or the nationalities involved (American, German, Soviet, Israeli, various Arab armies, etc.). As the QJM, the model was effectively validated to around 200 World War II and 1967 and 1973 Arab-Israeli War battles. As the TNDM, the model was validated to 125 corps-, division-, and battalion-level engagements from 1918 to 1989 and used as a predictive model for the 1991 Gulf War. This is the most extensive and systematic validation effort yet done for any combat model. The model has been tested and re-tested. It has been tested across multiple levels of combat and in a wide range of environments. It has been tested where human factors are lopsided, and where human factors are roughly equal. It has been independently spot-checked several times by others outside of the Institute. It is hard to say what more can be done to establish its validity and accuracy.

NOTES

[1] It is unclear what these percentages, quoted from Dupuy in the TNDM General Theoretical Description, specify. We suspect it is a measurement of the model’s ability to predict winner and loser. No validation report based on this effort was ever published. Also, the validation figures seem to reflect the results after any corrections made to the model based upon these tests. It does appear that the division-level validation was “incremental.” We do not know if the earlier validation tests were tested back to the earlier data, but we have reason to suspect not.

[2] The original QJM validation data was first published in the Combat Data Subscription Service Supplement, vol. 1, no. 3 (Dunn Loring VA: HERO, Summer 1975). (HERO Report #50) That effort used data from 1943 through 1973.

[3] HERO published its QJM validation database in The QJM Data Base (3 volumes) Fairfax VA: HERO, 1985 (HERO Report #100).

[4] The Dupuy Institute, The Tactical Numerical Deterministic Model (TNDM): A General and Theoretical Description, McLean VA: The Dupuy Institute, October 1994.

[5] This had the unfortunate effect of undervaluing WWII-era armor by about 75% relative to other WWII weapons when modeling WWII engagements. This left The Dupuy Institute with the compromise methodology of using the old OLI method for calculating armor (Mobile Fighting Machines) when doing WWII engagements and using the new OLI method for calculating armor when doing modern engagements.

[6] Testimony of Col. T. N. Dupuy, USA, Ret, Before the House Armed Services Committee, 13 Dec 1990. The Dupuy Institute File I-30, “Iraqi Invasion of Kuwait.”

[7] Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (HERO Books, Fairfax, VA, 1990), 123-4.

[8] M-COAT is the Medical Course of Action Tool created by Major Bruce Shahbaz. It is a spreadsheet model based upon the elements of the TNDM provided in Dupuy’s Attrition (op. cit.). It used a scoring system derived from elsewhere in the U.S. Army. As such, it is a simplified form of the TNDM with a different weapon scoring system.

[9] See Gözel, Ramazan. “Fitting Firepower Score Models to the Battle of Kursk Data,” NPGS Thesis. Monterey CA: Naval Postgraduate School, 2000.

[10] Lawrence, Christopher A. “Validation of the TNDM at Battalion Level.” The International TNDM Newsletter, vol. 1, no. 2 (October 1996); Bongard, Dave “The 76 Battalion-Level Engagements.” The International TNDM Newsletter, vol. 1, no. 4 (February 1997); Lawrence, Christopher A. “The First Test of the TNDM Battalion-Level Validations: Predicting the Winner” and “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, vol. 1 no. 5 (April 1997); and Lawrence, Christopher A. “Use of Armor in the 76 Battalion-Level Engagements,” and “The Second Test of the Battalion-Level Validation: Predicting Casualties Final Scorecard.” The International TNDM Newsletter, vol. 1, no. 6 (June 1997).

[11] Trevor N. Dupuy passed away in July 1995, and the validation was conducted in 1996 and 1997.

[12] Zetterling, Niklas. “CEV Calculations in Italy, 1943,” The International TNDM Newsletter, vol. 1, no. 6. McLean VA: The Dupuy Institute, June 1997. See also Research Plan, The Dupuy Institute Report E-3, McLean VA: The Dupuy Institute, 7 Oct 1998.

[13] See Gözel, “Fitting Firepower Score Models to the Battle of Kursk Data.”

Survey of German WWI Records

At one point, we did a survey of German records from World War I. This was for an exploratory effort to look at measuring the impact of chemical weapons in an urban environment. As World War I was one of the few wars with extensive use of chemical weapons, then it was natural to look there for operational use of chemical weapons. Specifically we were looking at the use of chemical weapons in villages and such, as there was little urban combat in World War I.

As discussed in my last post on this subject, there is a two-sided collection of records in the U.S. Archives for those German units that fought the Americans in 1918. As our customer was British, they wanted to work with British units. They conducted the British research, but they needed records from the German side. Ironically, the German World War I records were destroyed by the British bombing of Potsdam in April 1945. So where to find good opposing force data for units facing the British during World War I?

Germany did not form into a nation until 1871. During World War I, there were still several independent states and duchies inside Germany, and some of these maintained their own armies. The kingdoms of Bavaria, Wurttemberg and Saxony, along with the Grand Duchy of Baden, fielded their own armies. They raised their own units and maintained their own records. So, if they had maintained their records from World War I, then we could develop a two-sided database of combat between the British and Germans in those cases where the British units opposed German units from those states.

So….for practical purposes, we ended up making a “research trip” to Freiburg (German archives), Stuttgart (Wurttemberg) and then Munich (Bavaria). Sure enough, Wurttemberg had a nice collection of records for its units (a total of seven divisions during the war) and Bavaria still had a complete collection of records for its many divisions. The Bavarian Army fielded over a dozen divisions during the war.

So we ended up in Munich for several days going through their records. Their archives were located near Munich’s Olympic Park, the site of the tragic 1972 Olympics. It was in the old Bavarian Army headquarters that had been converted to an archive. After World War II, it was occupied by the Americans, and on the doors of many of the offices were still the name tags of the American NCOs and officers who last occupied them. The records were in great shape. The German Army just before WWII had done a complete inventory of the Bavarian records and made sure that they were complete. It was clear when we looked into them that many of these files had not been opened since then. Many of the files had sixty years of dust on them. The exception was the Sixth Bavarian Reserve Division, whose records clearly had been accessed several times recently. Adolf Hitler had served in that division in WWI.

The staff was extremely helpful. I did bring them gifts of candy for their efforts. They were neatly wrapped in a box with plastic mice attached to the packaging. Later, they sent me this:

So we were able to establish that good German data could be assembled for those Wurttemberg and Bavarian units that faced the British. The British company that hired us determined that the British records were good for their research efforts. So the exploratory research effort was a success, but the main effort was never funded because of changing priorities among their sponsors. This research occurred while the Iraq War (2003-2011) was going on, so sometimes budget priorities would change rather suddenly.

The Iran-Iraq War (1980-1988) also made extensive use of chemical weapons. This is discussed in depth in our newsletters. See: http://www.dupuyinstitute.org/tdipub4.htm  (issues Vol 2, No. 3; Vol 2, No. 4; Vol. 3, No. 1; and Vol 3, No. 2). Specifically see: http://www.dupuyinstitute.org/pdf/Issue11.pdf, page 21. To date, I am not aware of any significant work done on chemical warfare based upon the records of that war.

This post is the follow-up to these two posts:

Captured Records: World War I

The Sad Story Of The Captured Iraqi DESERT STORM Documents

New U.S. Army Security Force Assistance Brigades Face Challenges

The shoulder sleeve insignia of the U.S. Army 1st Security Forces Assistance Brigade (SFAB). [U.S. Army]

The recent deaths of four U.S. Army Special Operations Forces (ARSOF) soldiers in an apparent ambush during the Train and Assist mission in Niger appear to have reminded Congress of the enormous scope of ongoing Security Force Assistance (SFA) activities being conducted worldwide by the Defense Department. U.S. military forces deployed to 138 countries in 2016, the majority of these deployments being U.S. Special Operations Forces (SOF) conducting SFA activities. (While SFA deployments continue at a high tempo, the number of U.S. active-duty troops stationed overseas has, interestingly enough, fallen below 200,000 for the first time in 60 years.)

SFA is the umbrella term for U.S. whole-of-government support provided to develop the capability and capacity of foreign security forces and institutions. SFA is intended to help defend host nations from external and internal threats, and encompasses foreign internal defense (FID), counterterrorism (CT), counterinsurgency (COIN), and stability operations.

Last year, the U.S. Army announced that it would revamp its contribution to SFA by creating a new type of unit, the Security Force Assistance Brigade (SFAB), and by establishing a Military Advisor Training Academy. The first of six projected SFABs is scheduled to stand up this month at Ft. Benning, Georgia.

Rick Montcalm has a nice piece up at the Modern War Institute describing the doctrinal and organizational challenges the Army faces in implementing the SFABs. The Army’s existing SFA structure features regionally-aligned Brigade Combat Teams (BCTs) providing combined training and partnered mission assistance for foreign conventional forces from the team to company level, while ARSOF focuses on partner-nation counterterrorism missions and advising and assisting commando and special operations-type forces.

Ideally, the SFABs would supplement and gradually replace most, but not all, of the regionally-aligned BCTs to allow them to focus on warfighting tasks. Concerns have arisen within the ARSOF community, however, that a dedicated SFAB force would encroach functionally on its mission and compete within the Army for trained personnel. The SFABs currently lack the intelligence capabilities necessary to successfully conduct the advisory mission in hostile environments. Although U.S. Army Chief of Staff General Mark Milley asserts that the SFABs are not Special Forces, properly preparing them for advise-and-assist roles would make them very similar to existing ARSOF.

Montcalm also points out that Army personnel policies complicate maintaining the SFABs in the long term. The Army has not created a specific military advisor career field, and volunteering to serve in an SFAB could complicate the career progression of active-duty personnel. Although the Army has taken steps to address this, the prospect of long repeat overseas tours and uncertain career prospects has forced the service to offer cash incentives and automatic promotions to bolster SFAB recruiting. As of August, the 1st SFAB needed 350 more soldiers to fully man the unit, which was scheduled to be operational in November.

SFA and the Army’s role in it will not decline anytime soon, so there is considerable pressure to make the SFAB concept successful. Yet, in light of the largely unsuccessful efforts to build effective security forces in Iraq and Afghanistan, it remains an open question whether the SFABs themselves will be enough to remedy the Army’s problematic approach to building partner capacity.

The 3-to-1 Rule in Histories

I was reading a book this last week, The Blitzkrieg Legend: The 1940 Campaign in the West by Karl-Heinz Frieser (originally published in German in 1996). On page 54 it states:

According to a military rule of thumb, the attack should be numerically superior to the defender at a ratio of 3:1. That ratio goes up if the defender can fight from well developed fortification, such as the Maginot Line.

This “rule” never seems to go away. Trevor Dupuy had a chapter on it in Understanding War, published in 1987. It was Chapter 4: The Three-to-One Theory of Combat. I didn’t really bother discussing the 3-to-1 rule in my book, War by Numbers: Understanding Conventional Combat. I do have a chapter on force ratios: Chapter 2: Force Ratios. In that chapter I show a number of force ratios based on history. Here is my chart from the European Theater of Operations, 1944 (page 10):

Force Ratio                  Result                    Percentage of Failure    Number of Cases
0.55 to 1.01-to-1.00         Attack Fails              100                      5
1.15 to 1.88-to-1.00         Attack usually succeeds   21                       48
1.95 to 2.56-to-1.00         Attack usually succeeds   10                       21
2.71-to-1.00 and higher      Attack advances           0                        42
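To make the table concrete, the bands can be treated as a simple historical lookup. A minimal sketch in Python, assuming ratios falling in the gaps between the listed bands (where the data reports no cases) return no result:

```python
# ETO 1944 force ratio bands from the table above:
# (low ratio, high ratio, outcome, percent of attacks failing, number of cases)
ETO_1944_BANDS = [
    (0.55, 1.01, "Attack Fails", 100, 5),
    (1.15, 1.88, "Attack usually succeeds", 21, 48),
    (1.95, 2.56, "Attack usually succeeds", 10, 21),
    (2.71, float("inf"), "Attack advances", 0, 42),
]

def failure_rate(force_ratio):
    """Return (outcome, percent failure) for the band containing the ratio,
    or None if the ratio falls in a gap with no historical cases."""
    for low, high, outcome, pct_fail, _cases in ETO_1944_BANDS:
        if low <= force_ratio <= high:
            return outcome, pct_fail
    return None

print(failure_rate(2.0))  # ('Attack usually succeeds', 10)
print(failure_rate(0.8))  # ('Attack Fails', 100)
```

Note that this is descriptive of one data set (the European Theater of Operations, 1944), not a predictive rule.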


We have also done a number of blog posts on the subject (click on our category “Force Ratios”), primarily:

Trevor Dupuy and the 3-1 Rule

You will also see in that blog post another similar chart showing the odds of success at various force ratios.

Anyhow, I kind of think that people should probably quit referencing the 3-to-1 rule. It gives it far more weight and attention than it deserves.


TDI Friday Read: U.S. Airpower

[Image by Geopol Intelligence]

This weekend’s edition of TDI’s Friday Read is a collection of posts on the current state of U.S. airpower by guest contributor Geoffery Clark. The same factors changing the character of land warfare are changing the way conflict will be waged in the air. Clark’s posts highlight some of the ways these changes are influencing current and future U.S. airpower plans and concepts.

F-22 vs. F-35: Thoughts On Fifth Generation Fighters

The F-35 Is Not A Fighter

U.S. Armed Forces Vision For Future Air Warfare

The U.S. Navy and U.S. Air Force Debate Future Air Superiority

U.S. Marine Corps Concepts of Operation with the F-35B

The State of U.S. Air Force Air Power

Fifth Generation Deterrence


The Effects Of Dispersion On Combat

[The article below is reprinted from the December 1996 edition of The International TNDM Newsletter. A revised version appears in Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Potomac Books, 2017), Chapter 13.]

The Effects of Dispersion on Combat
by Christopher A. Lawrence

The TNDM[1] does not play dispersion. But it is clear that dispersion has continued to increase over time, and this must have some effect on combat. This effect was identified by Trevor N. Dupuy in his various writings, starting with The Evolution of Weapons and Warfare. His graph in Understanding War of battle casualty trends over time is presented here as Figure 1. As dispersion changes (dramatically) over time, one would expect casualties to change over time as well. I therefore went back to the Land Warfare Database (the 605-engagement version[2]) and proceeded to look at casualties over time and dispersion from every angle that I could.

I eventually realized that I was going to need a better definition of the time periods I was measuring, as measuring by year scattered the data, measuring by century assembled the data in too gross a manner, and measuring by war left a confusing picture due to the number of small wars with only two or three battles in them in the Land Warfare Database. I finally grouped the wars into 14 categories, so I could fit them onto one readable graph:

To give some idea of how representative the battles listed in the LWDB were for covering the period, I have included a count of the number of battles listed in Michael Clodfelter’s two-volume book Warfare and Armed Conflict, 1618-1991. In the case of WWI, WWII and later, battles tend to be defined as divisional-level engagements, and there were literally tens of thousands of those.

I then tested my data again looking at the 14 wars that I defined:

  • Average Strength by War (Figure 2)
  • Average Losses by War (Figure 3)
  • Percent Losses Per Day By War (Figure 4)
  • Average People Per Kilometer By War (Figure 5)
  • Losses per Kilometer of Front by War (Figure 6)
  • Strength and Losses Per Kilometer of Front By War (Figure 7)
  • Ratio of Strength and Losses per Kilometer of Front by War (Figure 8)
  • Ratio of Strength and Losses per Kilometer of Front by Century (Figure 9)

A review of average strengths over time by century and by war showed no surprises (see Figure 2). Up through around 1900, battles were easy to define: they were one- to three-day affairs between clearly defined forces at a locale. The forces had a clear left flank and right flank that was not bounded by other friendly forces. After 1900 (and in a few cases before), warfare was fought on continuous fronts, with a “battle” often being a large multi-corps operation. It is no longer clearly understood what is meant by a battle, as the forces, area covered, and duration can vary widely. For the LWDB, each battle was defined as the analyst wished. In the case of WWI, there are a lot of very large battles which drive the average battle size up. In the case of WWII, there are a lot of division-level battles, which bring the average down. In the case of the Arab-Israeli Wars, there are nothing but division- and brigade-level battles, which bring the average down.

The interesting point to notice is that the average attacker strength in the 16th and 17th centuries is lower than the average defender strength. Later it is higher. This may be due to anomalies in our data selection.

Average losses by war (see Figure 3) suffer from the same battle definition problem.

Percent losses per day (see Figure 4) is a useful comparison through the end of the 19th century. After that, the battles get longer and the definition of the duration of a battle is up to the analyst. Note the very clear and definite downward pattern of percent losses per day from the Napoleonic Wars through the Arab-Israeli Wars. Here is a very clear indication of the effects of dispersion. It would appear that from the 1600s to the 1800s the pattern was effectively constant and level, then declines in a very systematic pattern. This partially contradicts Trevor Dupuy’s writing and graphs (see Figure 1). It does appear that after this period of decline the percent losses per day settle at a new, much lower plateau. Percent losses per day by war is attached.
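For clarity, percent losses per day as used here is simply total casualties over starting strength, spread over the battle’s duration. A minimal sketch, with invented figures (the engagement below is hypothetical):

```python
def percent_losses_per_day(strength, casualties, duration_days):
    # Total casualties as a percentage of starting strength, per day of battle
    return 100.0 * casualties / strength / duration_days

# Hypothetical 3-day engagement: 40,000 men engaged, 2,400 casualties
print(percent_losses_per_day(40000, 2400, 3))  # 2.0
```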

Turning to the actual subject of this article, the dispersion of people (measured in people per kilometer of front) remained relatively constant from 1600 through the American Civil War (see Figure 5). Trevor Dupuy defined dispersion as the number of people in a box-like area. Unfortunately, I do not know how to measure that. I can clearly identify the left and right of a unit, but it is more difficult to tell how deep it is. Furthermore, density of occupation of this box is far from uniform, with a very forward bias. By the same token, fire delivered into this box is also not uniform, with a very forward bias. Therefore, I am quite comfortable measuring dispersion based upon unit frontage, more so than front multiplied by depth.

Note, when comparing the Napoleonic Wars to the American Civil War, that the dispersion remains about the same. Yet, if you look at the average casualties (Figure 3) and the average percent casualties per day (Figure 4), it is clear that the rate of casualty accumulation is lower in the American Civil War (this again partially contradicts Dupuy‘s writings). There is no question that with the advent of the Minié ball, allowing for rapid-fire rifled muskets, the ability to deliver accurate firepower increased.

As you will also note, the average people per linear kilometer between WWI and WWII differs by a factor of a little over 1.5 to 1. Yet the actual difference in casualties (see Figure 4) is much greater. While one can just postulate that the difference is the change in dispersion squared (basically Dupuy‘s approach), this does not seem to explain the complete difference, especially the difference between the Napoleonic Wars and the Civil War.

Instead of discussing dispersion, we should be discussing “casualty reduction efforts.” This basically consists of three elements:

  • Dispersion (D)
  • Increased engagement ranges (R)
  • More individual use of cover and concealment (C&C).

These three factors together result in the reduced chance to hit. They are also partially interrelated, as one cannot make more individual use of cover and concealment unless one is allowed to disperse. The need for cover and concealment thus increases the desire to disperse, and the process of dispersing allows one to use more cover and concealment.

Command and control is integrated into this construct as something that allows dispersion, while dispersion creates the need for better command and control. Therefore, improved command and control in this construct does not operate as a force modifier, but enables a force to disperse.

Intelligence becomes more necessary as the opposing forces use cover and concealment and the ranges of engagement increase. By the same token, improved intelligence allows you to increase the range of engagement and forces the enemy to use better concealment.

This whole construct could be represented by the diagram at the top of the next page.

Now, I may have said the obvious here, but this construct is probably provable in each individual element, and the overall outcome is measurable. Each individual connection between these boxes may also be measurable.

Therefore, to measure the effects of reduced chance to hit, one would need to measure the following formula (assuming these formulae are close to being correct):

(K * ΔD) + (K * ΔC&C) + (K * ΔR) = H

(K * ΔC2) = ΔD

(K * ΔD) = ΔC&C

(K * ΔW) + (K * ΔI) = ΔR

K = a constant
Δ = the change in….. (alias “Delta”)
D = Dispersion
C&C = Cover & Concealment
R = Engagement Range
W = Weapon’s Characteristics
H = the chance to hit
C2 = Command and control
I = Intelligence or ability to observe
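The formulas above can be transcribed into code more or less as-is. This is only a transcription of the article’s tentative construct; the constants K are undetermined placeholders, and the linear form is itself an assumption of the construct, not a validated model:

```python
# Tentative "reduced chance to hit" construct, transcribed from the formulas
# above. All k_* constants are undetermined placeholders.

def delta_dispersion(k_c2, delta_c2):
    # (K * dC2) = dD: better command and control enables dispersion
    return k_c2 * delta_c2

def delta_cover_concealment(k_d, delta_d):
    # (K * dD) = dC&C: dispersing allows more use of cover and concealment
    return k_d * delta_d

def delta_range(k_w, delta_w, k_i, delta_i):
    # (K * dW) + (K * dI) = dR: weapons and intelligence drive engagement range
    return k_w * delta_w + k_i * delta_i

def chance_to_hit(k_d, delta_d, k_cc, delta_cc, k_r, delta_r):
    # (K * dD) + (K * dC&C) + (K * dR) = H
    return k_d * delta_d + k_cc * delta_cc + k_r * delta_r
```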

Also, certain actions lead to a desire for certain technological and system improvements. This includes the effect of increased dispersion leading to a need for better C2 and increased range leading to a need for better intelligence. I am not sure these are measurable.

I have also shown in the diagram how the enemy impacts upon this. There is also an interrelated mirror image of this construct for the other side.

I am focusing on this because I really want to come up with some means of measuring the effects of a “revolution in warfare.” The last 400 years of human history have given us more revolutionary inventions impacting war than we can reasonably expect to see in the next 100 years. In particular, I would like to measure the impact of increased weapon accuracy, improved intelligence, and improved C2 on combat.

For the purposes of the TNDM, I would very specifically like to work out an attrition multiplier for battles before WWII (and theoretically after WWII) based upon reduced chance to be hit (“dispersion”). For example, Dave Bongard is currently using an attrition multiplier of 4 for the WWI engagements he is running for the battalion-level validation database.[3] No one can point to a piece of paper saying this is the value that should be used. Dave picked this value based upon experience and familiarity with the period.

I have also attached Average Losses per Kilometer of Front by War (see Figure 6 above), and a summary chart showing the two on the same chart (see Figure 7 above).

The values from these charts are:

The TNDM sets the WWII dispersion factor at 3,000 (which I gather translates into 30,000 men per square kilometer). The above data shows a linear dispersion of 2,992 men per kilometer of front, so this number parallels Dupuy‘s figures.

The final chart I have included is the Ratio of Strength and Losses per Kilometer of Front by War (Figure 8). Each line on the bar graph measures the average ratio of strength over casualties for either the attacker or defender. Being a ratio, unusual outcomes resulted in some really unusually high ratios. I took the liberty of taking out six data points because they appeared unusually lop-sided. Three of these points are from the English Civil War and were way out of line with everything else. These were the three Scottish battles where you had a small group of mostly sword-armed troops defeating a “modern” army. Also, Walcourt (1689), Front Royal (1862), and Calbritto (1943) were removed. I also have included the same chart, except by century (Figure 9).
Again, one sees a consistency in results over 300+ years of war, in this case going all the way through WWI, then an entirely different pattern with WWII and the Arab-Israeli Wars.

A very tentative set of conclusions from all this is:

  1. Dispersion has been relatively constant and driven by factors other than firepower from 1600-1815.
  2. Since the Napoleonic Wars, units have increasingly dispersed (found ways to reduce their chance to be hit) in response to increased lethality of weapons.
  3. As a result of this increased dispersion, casualties in a given space have declined.
  4. The ratio of this decline in casualties over area has been roughly proportional to the strength over an area from 1600 through WWI. Starting with WWII, it appears that people have dispersed faster than weapons lethality, and this trend has continued.
  5. In effect, people dispersed in direct relation to increased firepower from 1815 through 1920, and then after that time dispersed faster than the increase in lethality.
  6. It appears that since WWII, people have gone back to dispersing (reducing their chance to be hit) at the same rate that firepower is increasing.
  7. Effectively, there are four patterns of casualties in modern war:

Period 1 (1600 – 1815): Period of Stability

  • Short battles
  • Short frontages
  • High attrition per day
  • Constant dispersion
  • Dispersion decreasing slightly after late 1700s
  • Attrition decreasing slightly after mid-1700s.

Period 2 (1816 – 1905): Period of Adjustment

  • Longer battles
  • Longer frontages
  • Lower attrition per day
  • Increasing dispersion
  • Dispersion increasing slightly faster than lethality

Period 3 (1912 – 1920): Period of Transition

  • Long Battles
  • Continuous Frontages
  • Lower attrition per day
  • Increasing dispersion
  • Relative lethality per kilometer similar to past, but lower
  • Dispersion increasing slightly faster than lethality

Period 4 (1937 – present): Modern Warfare

  • Long Battles
  • Continuous Frontages
  • Low Attrition per day
  • High dispersion (perhaps constant?)
  • Relative lethality per kilometer much lower than the past
  • Dispersion increased much faster than lethality going into the period.
  • Dispersion increased at the same rate as lethality within the period.

So the question is whether warfare of the next 50 years will see a new “period of adjustment,” where the rate of dispersion (and other factors) adjusts in direct proportion to increased lethality, or will there be a significant change in the nature of war?

Note that when I use the word “dispersion” above, I often mean “reduced chance to be hit,” which consists of dispersion, increased engagement ranges, and use of cover & concealment.

One of the reasons I wandered into this subject was to see if the TNDM can be used for predicting combat before WWII. I then spent the next few days attempting to find some correlation between dispersion and casualties. Using the data on historical dispersion provided above, I created a mathematical formulation and tested it against the actual historical data points, and could not get any type of fit.

I then looked at the length of battles over time, at one-day battles, and attempted to find a pattern. I could find none. I also looked at other permutations, but did not keep a record of my attempts. I then looked through the work done by Dean Hartley (Oak Ridge) with the LWDB and called Paul Davis (RAND) to see if anyone had found any correlation between dispersion and casualties, and they had not noted any.

It became clear to me that if there is any such correlation, it is buried so deep in the data that it cannot be found by any casual search. I suspect that I can find a mathematical correlation between weapon lethality, reduced chance to hit (including dispersion), and casualties. This would require some improvement to the data, some systematic measure of weapons lethality, and some serious regression analysis. I unfortunately cannot pursue this at this time.
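As a sketch of what such a regression analysis might look like, here is a log-log least-squares fit of casualty rate against linear troop density. The data points are invented purely for illustration; actual work would use LWDB engagements and a systematic measure of weapons lethality, as noted above.

```python
import math

# (men per km of front, percent losses per day) -- invented engagements
data = [
    (20000, 15.0), (18000, 12.0), (9000, 6.0),
    (3000, 2.0), (1000, 0.8), (500, 0.3),
]

# Least-squares fit of log(losses per day) against log(linear density)
xs = [math.log(density) for density, _ in data]
ys = [math.log(losses) for _, losses in data]
n = len(data)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(f"fitted exponent: {slope:.2f}")
```

A fitted exponent near 1 would mean casualty rates scale in direct proportion to density; an exponent systematically above or below 1 would hint at the kind of dispersion effect discussed here.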

Finally, for reference, I have attached two charts showing the duration of the battles in the LWDB in days (Figure 10, Duration of Battles Over Time, and Figure 11, A Count of the Duration of Battles by War).

NOTES

[1] The Tactical Numerical Deterministic Model, a combat model developed by Trevor Dupuy in 1990-1991 as the follow-up to his Quantified Judgement Model. Dr. James G. Taylor and Jose Perez also contributed to the TNDM’s development.

[2] TDI’s Land Warfare Database (LWDB) was a revised version of a database created by the Historical Evaluation Research Organization (HERO) for the then-U.S. Army Concepts and Analysis Agency (now known as the U.S. Army Center for Army Analysis (CAA)) in 1984. Since the original publication of this article, TDI expanded and revised the data into a suite of databases.

[3] This matter is discussed in Christopher A. Lawrence, “The Second Test of the TNDM Battalion-Level Validations: Predicting Casualties,” The International TNDM Newsletter, April 1997, pp. 40-50.

Raqqa Has Fallen

It would appear that Raqqa has fallen: https://www.yahoo.com/news/islamic-state-raqqa-mounts-last-stand-around-city-083330251.html

  1. This announcement comes from U.S.-backed militias.
  2. It was only a four-month battle (in contrast to Mosul).
  3. “A formal declaration of victory in Raqqa will be made soon”

This does appear to end the current phase of the Islamic State, which exploded out of the desert to take Raqqa and Mosul in the first half of 2014. It lasted less than four years. It was an interesting experiment for a guerilla movement to suddenly seize power in several cities and try to establish a functioning state in the middle of a war. It sort of gave conventional forces something to attack. One wonders whether this worked to the advantage of ISIL in the long run or not.

I gather the now-stateless Islamic State will go back to being a guerilla movement. I am not sure what its long-term prognosis is. There is still a far-from-resolved civil war going on in Syria.