Mystics & Statistics

A blog on quantitative historical analysis hosted by The Dupuy Institute

Response 3 (Breakpoints)

This is in response to a long comment by Clinton Reilly about Breakpoints (Forced Changes in Posture) on this thread:

Breakpoints in U.S. Army Doctrine

Reilly starts with a very nice statement of the issue:

Clearly breakpoints are crucial when modelling battlefield combat. I have read extensively about it using mostly first hand accounts of battles rather than high level summaries. Some of the major factors causing it appear to be loss of leadership (e.g. Harald’s death at Hastings), loss of belief in the unit’s capacity to achieve its objectives (e.g. the retreat of the Old Guard at Waterloo), surprise (which often figured in Mongol successes), over confidence resulting in impetuous attacks which fail dramatically (e.g. French attacks at Agincourt and Crecy), and loss of control over the troops (again Crecy and Agincourt) are some of the main ones I can think of off hand.

The break-point crisis seems to occur against a background of confusion, disorder, mounting casualties, increasing fatigue and loss of morale. Casualties are part of the background but not usually the actual break point itself.

He then states:

Perhaps a way forward in the short term is to review a number of first hand battle accounts (I am sure you can think of many) and calculate the percentage of times these factors and others appear as breakpoints in the literature.

This has been done. In effect this is what Robert McQuie did in his article and what was the basis for the DMSI breakpoints study.

Battle Outcomes: Casualty Rates As a Measure of Defeat

Mr. Reilly then concludes:

Why wait for the military to do something? You will die of old age before that happens!

That is distinctly possible. If this really was a simple issue that one person working for a year could produce a nice definitive answer for… it would have already been done!

Let us look at the 1988 Breakpoints study. There was some effort leading up to that point. Trevor Dupuy and DMSI had already looked into the issue. This included developing a database of engagements (the Land Warfare Data Base, or LWDB) and using that to examine the nature of breakpoints. The McQuie article was developed from this database, and his article was closely coordinated with Trevor Dupuy. This was part of the effort that led the U.S. Army’s Concepts Analysis Agency (CAA) to issue an RFP (Request for Proposal). It was competitive. I wrote the proposal that won the contract award, but the contract was given to Dr. Janice Fain to lead. My proposal was more quantitative in approach than what she actually did. Her effort was more of an intellectual exploration of the issue. I gather this was done with the assumption that there would be a follow-on contract (there never was). Now, up until that point at least a man-year of effort had been expended, and if you count the time to develop the databases used, it was several man-years.

Now the Breakpoints study was headed up by Dr. Janice B. Fain, who worked on it for the better part of a year. Trevor N. Dupuy worked on it part-time. Gay M. Hammerman conducted the interviews with the veterans. Richard C. Anderson researched and created an additional 24 engagements that had clear breakpoints in them for the study (that is DMSI report 117B). Charles F. Hawkins was involved in analyzing the engagements from the LWDB. There were several other people also involved to some extent. Also, 39 veterans were interviewed for this effort. Many were brought into the office to talk about their experiences (that was truly entertaining). There were also a half-dozen other staff members and consultants involved in the effort, including Lt. Col. James T. Price (USA, ret.), Dr. David Segal (sociologist), Dr. Abraham Wolf (a research psychologist), Dr. Peter Shapiro (social psychology) and Col. John R. Brinkerhoff (USA, ret.). There were consultant fees, travel costs and other expenses related to that. So, the entire effort took at least three “man-years” of effort. This was what was needed just to get to the point where we were able to take the next step.

This is not something that a single scholar can do. That is why funding is needed.

As to dying of old age before that happens… that may very well be the case. Right now, I am working on two books, one of them under contract. I sort of need to finish those up before I look at breakpoints again. After that, I will decide whether to work on a follow-on to America’s Modern Wars (called Future American Wars) or work on a follow-on to War by Numbers (called War by Numbers II… being the creative guy that I am). Of course, neither of these books is selling well… so perhaps my time would be better spent writing another Kursk book, or any number of other interesting projects on my plate. Anyhow, if I do War by Numbers II, then I do plan on investing several chapters into addressing breakpoints. This would include using the 1,000+ cases that now populate our combat databases to do some analysis. This is going to take some time. So… I may get to it next year or the year after that, but I may not. If someone really needs the issue addressed, they really need to contract for it.

Breakpoints in U.S. Army Doctrine

U.S. Army prisoners of war captured by German forces during the Battle of the Bulge in 1944. [Wikipedia]

One of the least studied aspects of combat is battle termination. Why do units in combat stop attacking or defending? Shifts in combat posture (attack, defend, delay, withdrawal) are usually voluntary, directed by a commander, but they can also be involuntary, as a result of direct or indirect enemy action. Why do involuntary changes in combat posture, known as breakpoints, occur?

As Chris pointed out in a previous post, the topic of breakpoints has only been addressed by two known studies since 1954. Most existing military combat models and wargames address breakpoints in at least a cursory way, usually through some calculation based on personnel casualties. Both of the breakpoints studies suggest that involuntary changes in posture are seldom related to casualties alone, however.

Current U.S. Army doctrine addresses changes in combat posture through discussions of culmination points in the attack, and transitions from attack to defense, defense to counterattack, and defense to retrograde. But these all pertain to voluntary changes, not breakpoints.

Army doctrinal literature has little to say about breakpoints, either in the context of friendly forces or potential enemy combatants. The little it does say relates to the effects of fire on enemy forces and is based on personnel and material attrition.

According to ADRP 1-02 Terms and Military Symbols, an enemy combat unit is considered suppressed after suffering 3% personnel casualties or material losses, neutralized by 10% losses, and destroyed upon sustaining 30% losses. The sources and methodology for deriving these figures are unknown, although these specific terms and numbers have been a part of Army doctrine for decades.
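As a minimal sketch, the quoted thresholds translate directly into a status classification. The function name and the “effective” default label are my own, and treating the thresholds as inclusive is an assumption; only the 3%, 10%, and 30% figures come from the ADRP 1-02 definitions above:

```python
def doctrinal_status(percent_losses):
    """Classify an enemy unit by the ADRP 1-02 loss thresholds
    quoted above (3% suppressed, 10% neutralized, 30% destroyed).
    The name and the "effective" default label are illustrative,
    as is treating each threshold as inclusive."""
    if percent_losses >= 30:
        return "destroyed"
    if percent_losses >= 10:
        return "neutralized"
    if percent_losses >= 3:
        return "suppressed"
    return "effective"
```

Note that, as both breakpoints studies discussed here suggest, such fixed casualty thresholds are a modeling convenience rather than an empirically supported rule.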

The joint U.S. Army and U.S. Marine Corps vision of future land combat foresees battlefields that are highly lethal and demanding on human endurance. How will such a future operational environment affect combat performance? Past experience undoubtedly offers useful insights but there seems to be little interest in seeking out such knowledge.

Trevor Dupuy criticized the U.S. military in the 1980s for its lack of understanding of the phenomenon of suppression and other effects of fire on the battlefield, and its seeming disinterest in studying it. Not much appears to have changed since then.

C-WAM 4 (Breakpoints)

A breakpoint, or involuntary change in posture, is an essential part of modeling. There is a breakpoint methodology in C-WAM. According to slide 18 and rule book section 5.7.2, a ground unit below 50% strength can only defend. It is removed at below 30% strength. I gather this is the breakpoint for a brigade.
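A sketch of that rule as I read it, assuming the percentages are measured against full strength (the function name and state labels are mine, not the C-WAM rule book’s):

```python
def cwam_unit_state(current_strength, full_strength):
    """Apply the C-WAM breakpoint rule as described above: a ground
    unit below 50% strength can only defend, and below 30% strength
    it is removed from play. State labels are illustrative."""
    fraction = current_strength / full_strength
    if fraction < 0.30:
        return "removed"
    if fraction < 0.50:
        return "defend-only"
    return "all-postures"
```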

C-WAM 2

Let me just quote from Chapter 18 (Modeling Warfare) of my book War by Numbers: Understanding Conventional Combat (pages 288-289):

The original breakpoints study was done in 1954 by Dorothy Clark of ORO [which can be found here].[1] It examined forty-three battalion-level engagements where the units “broke,” including measuring the percentage of losses at the time of the break. Clark correctly determined that casualties were probably not the primary cause of the breakpoint and also declared the need to look at more data. Obviously, forty-three cases of highly variable social science-type data with a large number of variables influencing them are not enough for any form of definitive study. Furthermore, she divided the breakpoints into three categories, resulting in one category based upon only nine observations. Also, as should have been obvious, this data would apply only to battalion-level combat. Clark concluded “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.”[2]

Yet, even with her clear rejection of a percent loss formulation for breakpoints, the 20 to 40 percent casualty breakpoint figures remained in use by the training and combat modeling community. Charts in the 1964 Maneuver Control field manual showed a curve with the probability of unit break based on percentage of combat casualties.[3] Once a defending unit reached around 40 percent casualties, the chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of it halting (type I break) approached 100 percent and the chance of it breaking (type II break) reached 40 percent. These data were for battalion-level combat. Because they were also applied to combat models, many models established a breakpoint of around 30 or 40 percent casualties for units of any size (and often applied to division-sized units).

To date, we have absolutely no idea where these rule-of-thumb formulations came from and despair of ever discovering their source. These formulations persist despite the fact that in fifteen (35%) of the cases in Clark’s study, the battalions had suffered more than 40 percent casualties before they broke. Furthermore, at the division-level in World War II, only two U.S. Army divisions (and there were ninety-one committed to combat) ever suffered more than 30% casualties in a week![4] Yet, there were many forced changes in combat posture by these divisions well below that casualty threshold.

The next breakpoints study occurred in 1988.[5] There was absolutely nothing of any significance (meaning providing any form of quantitative measurement) in the intervening thirty-five years, yet there were dozens of models in use that offered a breakpoint methodology. The 1988 study was inconclusive, and since then nothing further has been done.[6]

This seemingly extreme case is a fairly typical example. A specific combat phenomenon was studied only twice in the last fifty years, both times with inconclusive results, yet this phenomenon is incorporated in most combat models. Sadly, similar examples can be pulled for virtually each and every phenomena of combat being modeled. This failure to adequately examine basic combat phenomena is a problem independent of actual combat modeling methodology.

Footnotes:

[1] Dorothy K. Clark, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, Johns Hopkins University, 1954).

[2] Ibid., page 34.

[3] Headquarters, Department of the Army, FM 105-5 Maneuver Control (Washington, D.C., December, 1967), pages 128-133.

[4] The two exceptions were the U.S. 106th Infantry Division in December 1944, which incidentally continued fighting in the days after suffering more than 40 percent losses, and the Philippine Division, which upon its surrender at Bataan on 9 April 1942 suffered 100% losses in one day, in addition to very heavy losses in the days leading up to its surrender.

[5] This was HERO Report number 117, Forced Changes of Combat Posture (Breakpoints) (Historical Evaluation and Research Organization, Fairfax, VA., 1988). The intervening years between 1954 and 1988 were not entirely quiet. See HERO Report number 112, Defeat Criteria Seminar, Seminar Papers on the Evaluation of the Criteria for Defeat in Battle (Historical Evaluation and Research Organization, Fairfax, VA., 12 June 1987) and the significant article by Robert McQuie, “Battle Outcomes: Casualty Rates as a Measure of Defeat” in Army, issue 37 (November 1987). Some of the results of the 1988 study were summarized in the book by Trevor N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

[6] The 1988 study was the basis for Trevor Dupuy’s book: Col. T. N. Dupuy, Understanding Defeat: How to Recover From Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

Also see:

Battle Outcomes: Casualty Rates As a Measure of Defeat

[NOTE: Post updated to include link to Dorothy Clark’s original breakpoints study.]

Pompeo: A couple hundred Russians were killed

We usually stay away from the news of the day, but it is hard to ignore this one, as we were recently blogging about it:

Story: https://www.yahoo.com/news/u-military-killed-apos-couple-181324480.html

Video: https://www.usatoday.com/videos/news/nation/2018/04/12/pompeo-%e2%80%9c-couple-hundred-russians-were-killed%e2%80%9d-syria-shootout/33770113/

In testimony to the Senate Foreign Relations Committee, Mike Pompeo, currently the CIA Director and nominee to serve as Secretary of State, stated that “a couple hundred Russians were killed” by U.S. forces in Syria.

Our discussion of this:

Russian Body Count: Update

More on Russian Body Counts

More Russian Body Counts

Russian Body Counts

Abstraction and Aggregation in Wargame Modeling

[IPMS/USA Reviews]

“All models are wrong, some models are useful.” – George Box

Models, no matter what their subjects, must always be an imperfect copy of the original. The term “model” inherently has this connotation. If the subject is exact and precise, then it is a duplicate, a replica, a clone, or a copy, but not a “model.” The most common dimension to be compromised is generally size, or more literally the three spatial dimensions of length, width and height. A good example of this would be a scale model airplane, generally available in several ratios from the original, such as 1/144, 1/72 or 1/48 (which are, interestingly, all multiples of 12 … there is also 1/100 for the more decimal-minded). These mean that the model airplane at 1/72 scale would be 72 times smaller … take the length, width and height measurements of the real item, and divide by 72 to get the model’s value.

If we take the real item’s weight and divide by 72, we should not expect to get our model’s weight. Even if the same or similar materials were used, the model would weigh roughly 72³ (about 373,000) times less, because weight scales with volume rather than with any single linear dimension. Generally, the model has a different purpose than replicating the subject’s functionality. It is helping to model the subject’s qualities, or to mimic them in some useful way. In the case of the 1/72 plastic model airplane of the F-15J fighter, this might be replicating the sight of a real F-15J, to satisfy the desire of the youth to look at the F-15J and to imagine themselves taking flight. Or it might be for pilots at a flight school to mimic air combat with models instead of real aircraft.
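To make the scaling arithmetic concrete, here is a minimal sketch; the F-15 length and empty-weight figures are rough public approximations of my own, not values from this article:

```python
SCALE = 72  # a 1/72 scale model

def scaled_length_m(real_length_m):
    # Linear dimensions shrink by the scale factor.
    return real_length_m / SCALE

def scaled_weight_kg(real_weight_kg):
    # Weight tracks volume, so with identical materials it
    # shrinks by the cube of the scale factor.
    return real_weight_kg / SCALE**3

# Roughly 19.4 m long and 12,700 kg empty for a real F-15:
model_length = scaled_length_m(19.4)     # about 0.27 m
model_weight = scaled_weight_kg(12_700)  # about 0.034 kg (34 g)
```

The cube relationship is why a plastic kit feels absurdly light relative to its subject even before accounting for the different material.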

The model aircraft is a simple physical object; once built, it does not change over time (unless you want to count dropping it and breaking it…). A real F-15J, however, is a dynamic physical object, which changes considerably over the course of its normal operation. It is loaded with fuel and ordnance, both of which have a huge effect on its weight, and thus its performance characteristics. Also, it may be occupied by different crew members, whose experience and skills may vary considerably. These qualities of the unit need to be taken into account, if the purpose of the model is to represent the aircraft. The classic example of this is a flight envelope model of an F-15A/C:

[Quora]

This flight envelope is itself a model: it represents the flight characteristics of the F-15 using two primary quantitative axes, altitude and speed (in Mach number), along with throttle setting. Perhaps the most interesting thing about this is the realization that an F-15 slows down as it descends. Are these particular qualities of an F-15 required to model air combat involving such an aircraft?

How to Apply This Modeling Process to a Wargame?

The purpose of the war game is to model or represent the possible outcome of a real combat situation, played forward in the model at whatever pace and scale the designer has intended.

As mentioned previously, my colleague and I are playing Asian Fleet, a war game that covers several types of naval combat, including those involving air units, surface units and submarine units. This was published in 2007, and updated in 2010. We’ve selected a scenario that has only air units on either side. The premise of this scenario is quite simple:

The Chinese air force, in trying to prevent the United States from intervening in a Taiwan invasion, will carry out an attack on the SDF as well as the US military base on Okinawa. Forces around Shanghai consisting of state-of-the-art fighter bombers and long-range attack aircraft have been placed for the invasion of Taiwan, and an attack on Okinawa would be carried out with a portion of these forces. [Asian Fleet Scenario Book]

Of course, this game is a model of reality. The infinite geospatial and temporal possibilities of space-time so familiar to us have been replaced by highly aggregated discrete buckets, such as turns that may last for a day, or eight hours. Latitude, longitude and altitude are replaced with a two-dimensional hexagonal “honey comb” surface. Hence, distance is no longer computed in miles or meters, but rather in “hexes”, each of which is about 50 nautical miles. Aircraft are effectively aloft, or on the ground, although a “high mission profile” will provide endurance benefits. Submarines are considered underwater, or may use “deep mode” attempting to hide from sonar searches.

Maneuver units are represented by “counters” or virtual chits to be moved about the map as play progresses. Their level of aggregation varies from large and powerful ships and subs represented individually, to smaller surface units and weaker subs grouped and represented by a single counter (a “flotilla”), to squadrons or regiments of aircraft represented by a single counter. Depending upon the nation and the military branch, this may be as few as 3-5 aircraft in a maritime patrol aircraft (MPA) detachment (“recon” in this game), to roughly 10-12 aircraft in a bomber unit, to 24 or even 72 aircraft in a fighter unit (“interceptor” in this game).

Enough Theory, What Happened?!

The Chinese Air Force mobilized their H6H bomber unit, escorted by large numbers of Flankers (J11 and Su-30MK2 fighters) from the Shanghai area, and headed east towards Okinawa. The US Air Force F-15Cs supported by airborne warning and control system (AWACS) aircraft detected this inbound force and delayed engagement until their Japanese F-15J unit on combat air patrol (CAP) could support them, and then engaged the Chinese force about 50 miles from the AWACS orbits. In this game, air combat is broken down into two phases: long-range air-to-air (LRAA) combat (aka beyond visual range, BVR), and “regular” air combat, or within visual range (WVR) combat.

In BVR combat, only units marked as equipped with BVR capability may attack:

  • 2 x F-15C units have a factor of 32; scoring a hit in 5 out of 10 cases, or roughly 50%.
  • Su-30MK2 unit has a factor of 16; scoring a hit in 4 out of 10 cases, ~40%.

To these numbers a modifier of +2 applies when the attacker is supported by AWACS, so the odds of scoring a hit increase to roughly 70% for the F-15Cs … but in our example they miss, and the Chinese shot misses as well. Thus, the combat proceeds to WVR.

In WVR combat, each opposing side sums their aerial combat factors:

  • 2 x F-15C (32) + F-15J (13) = 45
  • Su-30MK2 (15) + J11 (13) + H6H (1) = 29

These two numbers are then expressed as a ratio, attacker-to-defender (45:29), and rounded down in favor of the defender (1:1), and then a ten-sided-die (d10) is rolled to consult the Air-to-Air Combat Results Table, on the “CAP/AWACS Interception” line. The die was rolled, and a result of “0/0r” was achieved, which basically says that neither side takes losses, but the defender is turned back from the mission (“r” being code for “return to base”). Given the +2 modifier for the AWACS, the worst outcome for the Allies would be a mutual return to base result (“0r/0r”). The best outcome would be inflicting two “steps” of damage, and sending the rest home (“0/2r”). A step of loss is about one half of an air unit, represented by flipping over the counter or chit, and operating with the combat factors at about half strength.
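The resolution sequence just described can be sketched as follows. It covers the d10 BVR hit check with the +2 AWACS modifier and the WVR odds-ratio calculation; the function names are mine, and the rounding for defender-heavy ratios is an assumption, since this engagement only exercises the 45:29 case:

```python
import math

def bvr_hit(d10_roll, hit_number, awacs_support=False):
    """BVR attack: a hit occurs when a d10 roll (1-10) is at or
    below the unit's hit number; AWACS support adds +2 to that
    number (e.g. the F-15Cs' 5-in-10 becomes 7-in-10, ~70%)."""
    return d10_roll <= hit_number + (2 if awacs_support else 0)

def wvr_odds_column(attacker_factors, defender_factors):
    """Summed WVR combat factors expressed as a ratio rounded
    down in the defender's favor, e.g. 45:29 -> '1:1'."""
    if attacker_factors >= defender_factors:
        return f"{attacker_factors // defender_factors}:1"
    # Assumed symmetric handling when the defender is stronger.
    return f"1:{math.ceil(defender_factors / attacker_factors)}"

# The engagement above: 2 x F-15C (32) + F-15J (13)
# versus Su-30MK2 (15) + J11 (13) + H6H (1)
print(wvr_odds_column(32 + 13, 15 + 13 + 1))  # 1:1
```

The resulting odds column (“1:1” here) is then cross-referenced with a d10 roll on the Air-to-Air Combat Results Table to produce results such as “0/0r”.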

To sum this up, as the Allied commander, my conclusion was that the Americans were hung-over or asleep for this engagement.

I am encouraged by some similarities between this game and the fantastic detail that TDI has just posted about the DACM model, here and here. Thus, I plan to not only dissect this Asian Fleet game (VGAF), but also do a gap analysis between VGAF and DACM.

The Dupuy Air Campaign Model (DACM)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter. A description of the TDI Air Model Historical Data Study can be found here.]

The Dupuy Air Campaign Model
by Col. Joseph A. Bulger, Jr., USAF, Ret.

The Dupuy Institute, as part of the DACM [Dupuy Air Campaign Model] effort, created a draft model in a spreadsheet format to show how such a model would calculate attrition. Below are the actual printouts of the “interim methodology demonstration,” which show the types of inputs, outputs, and equations used for the DACM. The spreadsheet was created by Col. Bulger, while many of the formulae were the work of Robert Shaw.

Response 2 (Performance of Armies)

In an exchange with one of our readers, he raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and we are the people who have done the most extensive published work on the subject. Swedish researcher Niklas Zetterling also addresses it in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in an article in The International TNDM Newsletter, volume I, no. 6, pages 21-23, called “CEV Calculations in Italy, 1943.” It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy has done is compare the performances of both overall forces and individual divisions based upon his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), and in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters in my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose to simply use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone has a disagreement with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I would recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But… what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to each other. It is not an absolute measurement that can be compared to other armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but it could only be done by assuming that the combat effectiveness of the U.S. Army in WWII was the same as that of the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank an army from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would be possible to then determine what the different factors are that make up this difference. I do believe it would be possible to assign values or weights to those factors. I believe this would be very useful to know, in light of the potential training and organizational value of this knowledge.

Why is WWI so forgotten?

A view on the U.S. remembrance, or lack thereof, of World War One from the British paper The Guardian:  https://www.theguardian.com/world/2017/apr/06/world-war-1-centennial-us-history-modern-america

We do have World War I engagements in our databases and have included them in some of our analyses. We have done some other research related to World War I (funded by the UK Ministry of Defence, of course):

Captured Records: World War I

We also have a few other blog posts about the war:

Learning From Defeat in World War I

First World War Digital Resources

It was my grandfather’s war, but he was British at the time.

Murmansk


The Dupuy Institute Air Model Historical Data Study

British Air Ministry aerial combat diagram that sought to explain how the RAF had fought off the Luftwaffe. [World War II Today]

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

Air Model Historical Data Study
by Col. Joseph A. Bulger, Jr., USAF, Ret.

The Air Model Historical Study (AMHS) was designed to lead to the development of an air campaign model for use by the Air Command and Staff College (ACSC). This model, never completed, became known as the Dupuy Air Campaign Model (DACM). It was a team effort led by Trevor N. Dupuy and included the active participation of Lt. Col. Joseph Bulger, Gen. Nicholas Krawciw, Chris Lawrence, Dave Bongard, Robert Schmaltz, Robert Shaw, Dr. James Taylor, John Kettelle, Dr. George Daoust and Louis Zocchi, among others. After Dupuy’s death, I took over as the project manager.

At the first meeting of the team Dupuy assembled for the study, it became clear that this effort would be a serious challenge. In his own style, Dupuy was careful to provide essential guidance while, at the same time, cultivating a broad investigative approach to the unique demands of modeling for air combat. It would have been no surprise if the initial guidance established a focus on the analytical approach, level of aggregation, and overall philosophy of the QJM [Quantified Judgement Model] and TNDM [Tactical Numerical Deterministic Model]. It was clear that Trevor had no intention of steering the study into an air combat modeling methodology based directly on QJM/TNDM. To the contrary, he insisted on a rigorous derivation of the factors that would permit the final choice of model methodology.

At the time of Dupuy’s death in June 1995, the Air Model Historical Data Study had reached a point where a major decision was needed. The early months of the study had been devoted to developing a consensus among the TDI team members with respect to the factors that needed to be included in the model. The discussions tended to highlight three areas of particular interest—factors that had been included in models currently in use, the limitations of these models, and the need for new factors (and relationships) peculiar to the properties and dynamics of the air campaign. Team members formulated a family of relationships and factors, but the model architecture itself was not investigated beyond the surface considerations.

Despite substantial contributions from team members, including analytical demonstrations of selected factors and air combat relationships, no consensus had been achieved. On the contrary, there was a growing sense of need to abandon traditional modeling approaches in favor of a new application of the “Dupuy Method” based on a solid body of air combat data from WWII.

The Dupuy approach to modeling land combat relied heavily on the ratio of force strengths (largely determined by firepower as modified by other factors). After almost a year of investigations by the AMHS team, it was beginning to appear that air combat differed in a fundamental way from ground combat. The essence of the difference is that in air combat, the outcome of the maneuver battle for platform position must be determined before the firepower relationships may be brought to bear on the battle outcome.

At the time of Dupuy’s death, it was apparent that if the study contract was to yield a meaningful product, an immediate choice of analysis thrust was required. Shortly prior to Dupuy’s death, I and other members of the TDI team recommended that we adopt the overall approach, level of aggregation, and analytical complexity that had characterized Dupuy’s models of land combat. We also agreed on the time-sequenced predominance of the maneuver phase of air combat. When I was asked to take the analytical lead for the contract in Dupuy’s absence, I was reasonably confident that there was overall agreement.

In view of the time available to prepare a deliverable product, it was decided to prepare a model using the air combat data we had been evaluating up to that point—June 1995. Fortunately, Robert Shaw had developed a set of preliminary analysis relationships that could be used in an initial assessment of the maneuver/firepower relationship. In view of the analytical, logistic, contractual, and time factors discussed, we decided to complete the contract effort based on the following analytical thrust:

  1. The contract deliverable would be based on the maneuver/firepower analysis approach as currently formulated in Robert Shaw’s performance equations;
  2. A spreadsheet formulation of outcomes for selected (Battle of Britain) engagements would be presented to the customer in August 1995;
  3. To the extent practical, a working model would be provided to the customer with suggestions for further development.

During the following six weeks, the demonstration model was constructed. The model (programmed for a Lotus 1-2-3 style spreadsheet formulation) was developed, mechanized, and demonstrated to ACSC in August 1995. The final report was delivered in September of 1995.

The working model demonstrated to ACSC in August 1995 suggests the following observations:

  • A substantial contribution to the understanding of air combat modeling has been achieved.
  • While relationships developed in the Dupuy Air Combat Model (DACM) are not fully mature, they are analytically significant.
  • The approach embodied in DACM derives its authenticity from the well-known “Dupuy Method,” thus ensuring its strong correlation with actual combat data.
  • Although demonstrated only for air combat in the Battle of Britain, the methodology is fully capable of incorporating modern technology contributions to sensor, command and control, and firepower performance.
  • The knowledge base, fundamental performance relationships, and methodology contributions embodied in DACM are worthy of further exploration. They await only the expression of interest and a relatively modest investment to extend the analysis methodology into modern air combat and the engagements anticipated for the 21st Century.

One final observation seems appropriate. The DACM demonstration provided to ACSC in August 1995 should not be dismissed as a perhaps interesting but largely simplistic approach to air combat modeling. It is a significant contribution to the understanding of the air combat relationships that will prevail in the 21st Century. The Dupuy Institute is convinced that further development of DACM makes eminently good sense. An exploitation of the maneuver and firepower relationships already demonstrated in DACM will provide a valid basis for modeling air combat with modern technology sensors, control mechanisms, and weapons. It is appropriate to include the Dupuy name in the title of this latest in a series of distinguished combat models. Trevor would be pleased.

Why it is difficult to withdraw from (Syria, Iraq, Afghanistan….)

Leaving an unstable country alone in some regions is an invitation to further international problems. This was the case with Afghanistan in the 1990s, which resulted in Al-Qaeda being hosted there. This was the case with Somalia, which not only hosted elements of Al-Qaeda but also conducted rampant piracy. This was the case with Iraq/Syria, which gave the Islamic State a huge opening and resulted in its seizing the second largest city in Iraq. It seems a bad idea to ignore these areas, even though there is a cost to not ignoring them.

The cost of not ignoring them is that one must maintain a presence of something like 2,000 to 20,000 or more support troops, Air Force personnel, trainers, advisors, special operations forces, etc. And they must be maintained for a while. It will certainly result in the loss of a few American lives, perhaps even dozens. It will certainly cost hundreds of millions to pay for deployment and security operations, to develop local forces, and to re-build and re-vitalize these areas. In fact, the bill usually ends up costing billions. Furthermore, these operations go on for a decade or two or more. The annual cost times 20 years gets considerable. We have never done any studies of “security operations” or “advisory missions.” The focus of our work was on insurgencies, but we have no doubt that these things tend to drag on a while before completion.
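The back-of-the-envelope arithmetic above ("the annual cost times 20 years gets considerable") can be made concrete. A minimal sketch, using purely hypothetical figures (no TDI study supplies these numbers):

```python
def cumulative_cost(annual_cost_billions: float, years: int) -> float:
    """Rough total cost of a sustained presence, in billions of dollars.

    Ignores inflation, surge years, and draw-downs; the point is only
    that a modest annual figure compounds into a large total.
    """
    return annual_cost_billions * years

# Hypothetical: a $0.5 billion/year advisory mission sustained for 20 years.
total = cumulative_cost(0.5, 20)
print(f"${total:.1f} billion")  # $10.0 billion
```

Even the deliberately low assumption here ends up in the billions, which is the post's point: "hundreds of millions" per year quietly becomes a multi-billion-dollar commitment over two decades.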

The cost of ignoring these countries may be nothing. If there is no international terror threat and no direct threat to our interests, then there may not be a major cost to withdrawing. On the other hand, the cost of ignoring Somalia was a piracy campaign that started around 2005, in which pirates attacked at least 232 ships and captured over 3,500 seafarers. At least 62 of them died. The cost of ignoring Afghanistan in the 1990s? Well, was it 9-11? Would 9-11 have occurred anyway if Al-Qaeda had not been free to reside, organize, recruit and train in Afghanistan? I don’t know for sure…..but I think it was certainly an enabling factor.

I have never seen a study that analyzes or estimates the cost of these interventions (although some such studies may exist). Conversely, I have never seen a study that analyzes or estimates the cost of not doing these interventions (and I rather doubt that such a study exists).

It is hard to analyze the cost of the trade-off if we really don’t know the costs.