Category War by Numbers

U.S. Army Force Ratios

People do send me some damn interesting stuff. Someone just sent me a page clipped from U.S. Army FM 3-0 Operations, dated 6 October 2017. There is a discussion in Chapter 7 on “penetration.” The brief discussion in paragraph 7-115 states in part:

7-115. A penetration is a form of maneuver in which an attacking force seeks to rupture enemy defenses on a narrow front to disrupt the defensive system (FM 3-90-1)… The First U.S. Army’s Operation Cobra (the breakout from the Normandy lodgment in July 1944) is a classic example of a penetration. Figure 7-10 illustrates potential correlation of forces or combat power for a penetration…”

This is figure 7-10:

So:

  1. Corps shaping operations: 3:1
  2. Corps decisive operations: 9:1
    1. Lead battalion: 18:1

Now, in contrast, let me pull some material from War by Numbers:

From page 10:

European Theater of Operations (ETO) Data, 1944

 

Force Ratio                      Result                     Percent Failure    Number of Cases

0.55 to 1.01-to-1.00             Attack fails                    100%                 5

1.15 to 1.88-to-1.00             Attack usually succeeds          21%                48

1.95 to 2.56-to-1.00             Attack usually succeeds          10%                21

2.71-to-1.00 and higher          Attacker advances                 0%                42

 

Note that these are division-level engagements. I guess I could assemble the same data for corps-level engagements, but I don’t think it would look much different.
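The ETO bands above can be sketched as a simple lookup. This is an illustrative sketch of mine, not anything from the book; the band edges, outcomes and case counts come from the table, and the gaps between bands simply reflect that no engagements in the data fell there.

```python
# Division-level ETO 1944 results by attacker:defender force ratio.
# Each band: (low edge, high edge, result, percent failure, cases).
BANDS = [
    (0.55, 1.01, "Attack fails", 100, 5),
    (1.15, 1.88, "Attack usually succeeds", 21, 48),
    (1.95, 2.56, "Attack usually succeeds", 10, 21),
    (2.71, float("inf"), "Attacker advances", 0, 42),
]

def outcome_band(force_ratio):
    """Return (result, percent failure) for a given force ratio,
    or None if the ratio falls in a gap with no historical cases."""
    for low, high, result, pct_fail, _cases in BANDS:
        if low <= force_ratio <= high:
            return result, pct_fail
    return None

print(outcome_band(1.5))   # -> ('Attack usually succeeds', 21)
print(outcome_band(3.0))   # -> ('Attacker advances', 0)
```

Note that nothing in the data supports a 9:1 requirement: everything from 2.71-to-1 upward advanced.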

From page 210:

Force Ratio              Cases    Terrain       Result

1.18 to 1.29-to-1          4      Nonurban      Defender penetrated

1.51 to 1.64-to-1          3      Nonurban      Defender penetrated

2.01 to 2.64-to-1          2      Nonurban      Defender penetrated

3.03 to 4.28-to-1          2      Nonurban      Defender penetrated

4.16 to 4.78-to-1          2      Urban         Defender penetrated

6.98 to 8.20-to-1          2      Nonurban      Defender penetrated

6.46 to 11.96-to-1         2      Urban         Defender penetrated

 

These are also division-level engagements from the ETO. One will note that out of the 17 cases where the defender was penetrated, only once was the force ratio as high as 9-to-1. The mean force ratio for these 17 cases is 3.77 and the median force ratio is 2.64.
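The stated median can be cross-checked from the grouped bands above. This is my own illustrative check, not from the book: the median of 17 sorted values is the 9th, and since the first three bands (all below 3.03) contain exactly 9 cases, the 9th value must sit at the top of the 2.01 to 2.64 band, matching the stated median of 2.64.

```python
# (low ratio, high ratio, number of cases), ordered by low edge.
# The last four bands overlap each other, but all lie above the
# first three, so the location of the 9th sorted value is unambiguous.
bands = [
    (1.18, 1.29, 4),
    (1.51, 1.64, 3),
    (2.01, 2.64, 2),
    (3.03, 4.28, 2),
    (4.16, 4.78, 2),
    (6.46, 11.96, 2),
    (6.98, 8.20, 2),
]

def band_of_nth(groups, n):
    """Return the (low, high) band containing the n-th (1-based) sorted case."""
    seen = 0
    for low, high, cases in groups:
        seen += cases
        if seen >= n:
            return low, high
    raise ValueError("n exceeds total cases")

total = sum(cases for _, _, cases in bands)
median_band = band_of_nth(bands, (total + 1) // 2)  # the 9th of 17
print(total, median_band)  # -> 17 (2.01, 2.64)
```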

Now, the other relevant tables in this book are in Chapter 8: Outcome of Battles (pages 60-71). There I have a set of tables looking at the loss rates based upon one of six outcomes. Outcome V is defender penetrated. Unfortunately, as the purpose of the project was to determine prisoner of war capture rates, we did not bother to calculate the average force ratio for each outcome. But, knowing the database well, the average force ratio for defender penetrated results may be less than 3-to-1 and is certainly less than 9-to-1. Maybe I will take a few days at some point and put together a force ratio by outcome table.

Now, the source of the FM 3-0 data is not known to us and is not referenced in the manual. Why they don’t provide such a reference is a mystery to me, and I can point out several examples of this being an issue: on more than one occasion data has appeared in Army manuals that we could neither confirm nor check, and for which we could never find the source. I have not looked at the operation in depth, but I don’t doubt that at some point during Cobra they had a 9:1 force ratio and achieved a penetration. But this is different from leaving the impression that a 9:1 force ratio is needed to achieve a penetration. I do not know if that was the author’s intent, but it is something that the casual reader might infer. This probably needs to be clarified.

Response 3 (Breakpoints)

This is in response to a long comment by Clinton Reilly about Breakpoints (Forced Changes in Posture) on this thread:

Breakpoints in U.S. Army Doctrine

Reilly starts with a very nice statement of the issue:

Clearly breakpoints are crucial when modelling battlefield combat. I have read extensively about it using mostly first hand accounts of battles rather than high level summaries. Some of the major factors causing it appear to be loss of leadership (e.g. Harald’s death at Hastings), loss of belief in the units capacity to achieve its objectives (e.g. the retreat of the Old Guard at Waterloo, surprise often figured in Mongol successes, over confidence resulting in impetuous attacks which fail dramatically (e.g. French attacks at Agincourt and Crecy), loss of control over the troops (again Crecy and Agincourt) are some of the main ones I can think of off hand.

The break-point crisis seems to occur against a background of confusion, disorder, mounting casualties, increasing fatigue and loss of morale. Casualties are part of the background but not usually the actual break point itself.

He then states:

Perhaps a way forward in the short term is to review a number of first hand battle accounts (I am sure you can think of many) and calculate the percentage of times these factors and others appear as breakpoints in the literature.

This has been done. In effect this is what Robert McQuie did in his article and what was the basis for the DMSI breakpoints study.

Battle Outcomes: Casualty Rates As a Measure of Defeat

Mr. Reilly then concludes:

Why wait for the military to do something? You will die of old age before that happens!

That is distinctly possible. If this really were a simple issue, one that a single person working for a year could produce a nice definitive answer for, it would have already been done!

Let us look at the 1988 Breakpoints study. There was some effort leading up to that point. Trevor Dupuy and DMSI had already looked into the issue. This included developing a database of engagements (the Land Warfare Data Base, or LWDB) and using that to examine the nature of breakpoints. The McQuie article was developed from this database, and his article was closely coordinated with Trevor Dupuy. This was part of the effort that led the U.S. Army’s Concepts Analysis Agency (CAA) to issue an RFP (Request for Proposal). It was competitive. I wrote the proposal that won the contract award, but the contract was given to Dr. Janice Fain to lead. My proposal was more quantitative in approach than what she actually did. Her effort was more of an intellectual exploration of the issue. I gather this was done with the assumption that there would be a follow-on contract (there never was). Now, up until that point at least a man-year of effort had been expended, and if you count the time to develop the databases used, it was several man-years.

Now, the Breakpoints study was headed up by Dr. Janice B. Fain, who worked on it for the better part of a year. Trevor N. Dupuy worked on it part-time. Gay M. Hammerman conducted the interviews with the veterans. Richard C. Anderson researched and created an additional 24 engagements that had clear breakpoints in them for the study (that is DMSI report 117B). Charles F. Hawkins was involved in analyzing the engagements from the LWDB. Several other people were also involved to some extent. In all, 39 veterans were interviewed for this effort; many were brought into the office to talk about their experiences (that was truly entertaining). There were also a half-dozen other staff members and consultants involved, including Lt. Col. James T. Price (USA, ret.), Dr. David Segal (sociologist), Dr. Abraham Wolf (a research psychologist), Dr. Peter Shapiro (social psychology) and Col. John R. Brinkerhoff (USA, ret.). There were consultant fees, travel costs and other expenses related to that. So, the entire effort took at least three “man-years” of effort. This was what was needed just to get to the point where we were able to take the next step.

This is not something that a single scholar can do. That is why funding is needed.

As to dying of old age before that happens…that may very well be the case. Right now, I am working on two books, one of them under contract. I sort of need to finish those up before I look at breakpoints again. After that, I will decide whether to work on a follow-on to America’s Modern Wars (called Future American Wars) or a follow-on to War by Numbers (called War by Numbers II…being the creative guy that I am). Of course, neither of these books is selling well…so perhaps my time would be better spent writing another Kursk book, or on any number of other interesting projects on my plate. Anyhow, if I do War by Numbers II, then I do plan on investing several chapters in addressing breakpoints. This would include using the 1,000+ cases that now populate our combat databases to do some analysis. This is going to take some time. So…I may get to it next year or the year after that, but I may not. If someone really needs the issue addressed, they really need to contract for it.

C-WAM 4 (Breakpoints)

A breakpoint, or involuntary change in posture, is an essential part of modeling. There is a breakpoint methodology in C-WAM. According to slide 18 and rule book section 5.7.2, a ground unit below 50% strength can only defend, and it is removed at below 30% strength. I gather this is a breakpoint for a brigade.
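The C-WAM rule as I read it reduces to two fixed strength thresholds. A minimal sketch follows; the function name is mine, and how C-WAM treats a unit sitting exactly on a boundary is my guess, not something stated in the rule book.

```python
# Sketch of the C-WAM posture rule (slide 18, rule book 5.7.2):
# below 50% strength a ground unit can only defend; below 30% it is
# removed from play. Exact-boundary handling here is an assumption.
def posture(strength_fraction):
    """Map remaining unit strength (0.0 to 1.0) to an allowed posture."""
    if strength_fraction < 0.30:
        return "removed"
    if strength_fraction < 0.50:
        return "defend only"
    return "attack or defend"

print(posture(0.85))  # -> 'attack or defend'
print(posture(0.45))  # -> 'defend only'
print(posture(0.20))  # -> 'removed'
```

Note how blunt this is compared to the historical record discussed below, where units broke at widely varying loss levels: a fixed percent-strength threshold is exactly the formulation Dorothy Clark rejected in 1954.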

C-WAM 2

Let me just quote from Chapter 18 (Modeling Warfare) of my book War by Numbers: Understanding Conventional Combat (pages 288-289):

The original breakpoints study was done in 1954 by Dorothy Clark of ORO [which can be found here].[1] It examined forty-three battalion-level engagements where the units “broke,” including measuring the percentage of losses at the time of the break. Clark correctly determined that casualties were probably not the primary cause of the breakpoint and also declared the need to look at more data. Obviously, forty-three cases of highly variable social science-type data with a large number of variables influencing them are not enough for any form of definitive study. Furthermore, she divided the breakpoints into three categories, resulting in one category based upon only nine observations. Also, as should have been obvious, this data would apply only to battalion-level combat. Clark concluded “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.”[2]

Yet, even with her clear rejection of a percent loss formulation for breakpoints, the 20 to 40 percent casualty breakpoint figures remained in use by the training and combat modeling community. Charts in the 1964 Maneuver Control field manual showed a curve with the probability of unit break based on percentage of combat casualties.[3] Once a defending unit reached around 40 percent casualties, the chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of it halting (type I break) approached 100% and the chance of it breaking (type II break) reached 40 percent. These data were for battalion-level combat. Because they were also applied to combat models, many models established a breakpoint of around 30 or 40 percent casualties for units of any size (and often applied to division-sized units).

To date, we have absolutely no idea where these rule-of-thumb formulations came from and despair of ever discovering their source. These formulations persist despite the fact that in fifteen (35%) of the cases in Clark’s study, the battalions had suffered more than 40 percent casualties before they broke. Furthermore, at the division-level in World War II, only two U.S. Army divisions (and there were ninety-one committed to combat) ever suffered more than 30% casualties in a week![4] Yet, there were many forced changes in combat posture by these divisions well below that casualty threshold.

The next breakpoints study occurred in 1988.[5] There was absolutely nothing of any significance (meaning providing any form of quantitative measurement) in the intervening thirty-five years, yet there were dozens of models in use that offered a breakpoint methodology. The 1988 study was inconclusive, and since then nothing further has been done.[6]

This seemingly extreme case is a fairly typical example. A specific combat phenomenon was studied only twice in the last fifty years, both times with inconclusive results, yet this phenomenon is incorporated in most combat models. Sadly, similar examples can be pulled for virtually each and every phenomena of combat being modeled. This failure to adequately examine basic combat phenomena is a problem independent of actual combat modeling methodology.

Footnotes:

[1] Dorothy K. Clark, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, Johns Hopkins University, 1954).

[2] Ibid., page 34.

[3] Headquarters, Department of the Army, FM 105-5 Maneuver Control (Washington, D.C., December, 1967), pages 128-133.

[4] The two exceptions included the U.S. 106th Infantry Division in December 1944, which incidentally continued fighting in the days after suffering more than 40 percent losses, and the Philippine Division upon its surrender in Bataan on 9 April 1942 suffered 100% losses in one day in addition to very heavy losses in the days leading up to its surrender.

[5] This was HERO Report number 117, Forced Changes of Combat Posture (Breakpoints) (Historical Evaluation and Research Organization, Fairfax, VA., 1988). The intervening years between 1954 and 1988 were not entirely quiet. See HERO Report number 112, Defeat Criteria Seminar, Seminar Papers on the Evaluation of the Criteria for Defeat in Battle (Historical Evaluation and Research Organization, Fairfax, VA., 12 June 1987) and the significant article by Robert McQuie, “Battle Outcomes: Casualty Rates as a Measure of Defeat” in Army, issue 37 (November 1987). Some of the results of the 1988 study were summarized in the book by Trevor N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

 [6] The 1988 study was the basis for Trevor Dupuy’s book: Col. T. N. Dupuy, Understanding Defeat: How to Recover From Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

Also see:

Battle Outcomes: Casualty Rates As a Measure of Defeat

[NOTE: Post updated to include link to Dorothy Clark’s original breakpoints study.]

Response 2 (Performance of Armies)

In an exchange with one of our readers, he raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and we are the people who have done the most extensive published work on the subject. Swedish researcher Niklas Zetterling also addresses it in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in an article in The International TNDM Newsletter, volume I, No. 6, pages 21-23, called “CEV Calculations in Italy, 1943.” It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy has done is compare the performances of both overall forces and individual divisions based upon his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), and in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters in my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose to simply use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone has a disagreement with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I would recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But…what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to each other. It is not an absolute measurement that can be compared to other armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but this could only be done by assuming that the combat effectiveness of the U.S. Army in WWII was the same as that of the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank armies from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would be possible to then determine what the different factors are that make up this difference, and to assign values or weights to those factors. I believe this would be very useful to know, in light of the potential training and organizational value of this knowledge.

Why is WWI so forgotten?

A view on the U.S. remembrance, or lack thereof, of World War One from the British paper The Guardian:  https://www.theguardian.com/world/2017/apr/06/world-war-1-centennial-us-history-modern-america

We do have World War I engagements in our databases and have included them in some of our analyses. We have also done some other research related to World War I (funded by the UK Ministry of Defence, of course):

Captured Records: World War I

We also have a few other blog posts about the war:

Learning From Defeat in World War I

First World War Digital Resources

It was my grandfather’s war, but he was British at the time.

Murmansk

 

Response

A fellow analyst posted an extended comment to two of our threads:

C-WAM 3

and

Military History and Validation of Combat Models

Instead of responding in the comments section, I have decided to respond with another blog post.

As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”

Yes, this is true, but I argue that it does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where having a less accurate model is more desirable than having a more accurate one.

Now what is missing from many of these models that I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, addressing human factors, and many other matters. Many of these things are being done in these simulations already, but are being done incorrectly. Quite simply, they do not realistically portray a range of historical or real combat examples.

He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:

“The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail…”

Let’s take their list, made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same goes for morale or level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen a quantitative study done in the last twenty years to address these issues. And what of terrain and weather? Those have been around for a long time.

Army simulations have been around since the late 1950s. So by the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in the previous 40 years. I gather they have not been adequately addressed in the last 20 years either. So, the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start in addressing them.

Anyhow, I have little interest in arguing these issues. My interest is in correcting them.

More on Russian Body Counts

Don’t have any resolution on the casualty counts for the fighting on 7 February, but do have a few additional newspaper reports of interest:

  1. The Guardian reports that the Russian foreign ministry says that dozens were killed or wounded.
    1. So, if 9 were killed (a figure that is probably the lowest possible count), then you would certainly get to dozens killed or wounded. As this is a conventional fight, I would be tempted to guess a figure of 3 or 4 wounded per killed, vice the 9 or 10 wounded per killed we have been getting from our operations in Iraq and Afghanistan (see War by Numbers, Chapter 15: Casualties).
    2. The Guardian article is here: https://www.theguardian.com/world/2018/feb/20/russia-admits-several-dozen-its-citizens-killed-syria-fighting
  2. The BBC repeats these claims along with noting that “…at least 131 Russians died in Syria in the first nine months of 2017…”: http://www.bbc.com/news/world-europe-43125506
  3. Wikipedia does have an article on the subject that is worth looking at, even though its count halts on 3 February:
    1. https://en.wikipedia.org/wiki/Russian_Armed_Forces_casualties_in_Syria
  4. The original report was that about 100 Syrian soldiers had been killed. I still don’t know if this count of 100+ killed on 7 February is supposed to be all Russians, or a mix of Russians and Syrians. It is possible there were 9 Russians killed and over 100 people killed in total. On the other hand, it could also be an inflated casualty count. See: https://www.nytimes.com/2018/02/13/world/europe/russia-syria-dead.html
  5. Some counts have gone as high as 215 Russians killed: https://thedefensepost.com/2018/02/10/russians-killed-coalition-strikes-deir-ezzor-syria/
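The "dozens killed or wounded" claim can be squared with 9 killed by simple arithmetic. This is my own back-of-the-envelope sketch, not a sourced figure: applying the 3-to-4 wounded-per-killed range typical of conventional fighting to the minimum count of 9 killed lands total casualties squarely in the "several dozen" range.

```python
# If 9 were killed, and a conventional fight yields roughly 3 to 4
# wounded per killed, total casualties (killed + wounded) would be:
killed = 9
for wounded_per_killed in (3, 4):
    total = killed + killed * wounded_per_killed
    print(wounded_per_killed, total)  # prints "3 36" then "4 45"
```

So 36 to 45 total casualties, or three to four dozen, is consistent with the foreign ministry's wording without requiring 100+ Russian dead.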

Conclusions: A significant fight happened on 7 February; at least 9 Russians were killed and clearly several dozen were wounded. There might have been over 100 killed in the fight, but we cannot find any clear confirmation of that. I am always suspicious of casualty claims, as anyone who has read my book on Kursk may note (and I think I provide plenty of examples in that book of claims that can be proven to be significantly in error).

What Makes Up Combat Power?

Trevor Dupuy used the concept of the Combat Effectiveness Value (CEV) in his models and theoretical work. This combat multiplier consisted of:

  1. Morale,
  2. training,
  3. experience,
  4. leadership,
  5. motivation,
  6. cohesion,
  7. intelligence (including interpretation),
  8. momentum,
  9. initiative,
  10. doctrine,
  11. the effects of surprise,
  12. logistical systems,
  13. organizational habits,
  14. and even cultural differences.
  15. (generalship)

See War by Numbers, page 17 and Numbers, Predictions and War, page 33. To this list, I have added a fifteenth item: “generalship,” which I consider something different than leadership. As I stated in my footnote on pages 17 & 348 of War by Numbers:

“Leadership” in this sense represents the training and capabilities of the non-commissioned and commissioned officers throughout the unit, which is going to be fairly consistent in an army from unit to unit. This can be a fairly consistent positive or negative influence on a unit. On the other hand, “generalship” represents the guy at the top of the unit, making the decisions. This is widely variable, with the history of warfare populated by brilliant generals, a large number of perfectly competent ones, and a surprisingly large number of less than competent ones. Within an army, no matter the degree and competence of the officer corps, or the rigor of their training, poor generals show up, and sometimes brilliant generals show up with no military training (like the politician turned general Julius Caesar).

 

Anyhow, looking at the previous blog post by Shawn, the U.S. Army states that “combat power” consists of eight elements:

  1. Leadership,
  2. information,
  3. mission command,
  4. movement and maneuver,
  5. intelligence,
  6. fires,
  7. sustainment,
  8. and protection.

I am not going to debate the strengths and weaknesses of these two lists, but I do note that only two items appear on both (leadership and intelligence). I prefer the 15-point list.

Disappearing Statistics

There was a time during the Iraq insurgency when statistics on the war were readily available. As a small independent contractor, we were getting the daily feed of incidents, casualties and other such material during the Iraq War. It was one of the daily intelligence reports for Iraq. We had simply emailed someone in the field and were put on their distribution list, even though we had no presence in Iraq and no official position. This was public information, so it was not a problem…until the second half of 2005, when suddenly the war was not going very well and someone decided to restrict distribution. We received daily intelligence reports from 4 September 2004. They ended on 25 August 2005. There is more to this story, but maybe later.

This article was brought to my attention today: https://www.militarytimes.com/flashpoints/2017/10/30/report-us-officials-classify-crucial-metrics-on-afghan-casualties-readiness/

A few highlights:

  1. From January 1 to May 8 Afghan forces sustained 2,531 killed in action and 4,238 wounded (a 1.67-to-1 wounded-to-killed ratio, which seems very low).

  2. The Afghan armed forces control 56.8% of the 407 districts, a one percentage point drop over the last six months.

  3. The Afghan government controls 63.7% of the population.

  4. Some of these statistics will now be classified.
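The parenthetical ratio in the first item checks out. This is just my own arithmetic confirming the figure quoted in the article:

```python
# 4,238 wounded against 2,531 killed in action gives the reported
# wounded-to-killed ratio of roughly 1.67-to-1.
killed, wounded = 2531, 4238
ratio = wounded / killed
print(round(ratio, 2))  # -> 1.67
```

As noted, 1.67-to-1 is very low by modern standards; for comparison, U.S. operations in Iraq and Afghanistan have produced ratios on the order of 9 or 10 wounded per killed.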

 

One of our older posts on wounded-to-killed ratios. I have an entire chapter on the subject in War by Numbers.

Wounded-To-Killed Ratios

The 3-to-1 Rule in Histories

I was reading a book this last week, The Blitzkrieg Legend: The 1940 Campaign in the West by Karl-Heinz Frieser (originally published in German in 1996). On page 54 it states:

According to a military rule of thumb, the attack should be numerically superior to the defender at a ratio of 3:1. That ratio goes up if the defender can fight from well developed fortification, such as the Maginot Line.

This “rule” never seems to go away. Trevor Dupuy had a chapter on it in Understanding War, published in 1987. It was Chapter 4: The Three-to-One Theory of Combat. I didn’t really bother discussing the 3-to-1 rule in my book, War by Numbers: Understanding Conventional Combat. I do have a chapter on force ratios: Chapter 2: Force Ratios. In that chapter I show a number of force ratios based on history. Here is my chart from the European Theater of Operations, 1944 (page 10):

Force Ratio…………………..Result……………..Percentage of Failure………Number of Cases

0.55 to 1.01-to-1.00…………Attack fails………………………100……………………………5

1.15 to 1.88-to-1.00…………Attack usually succeeds………21……………………………48

1.95 to 2.56-to-1.00…………Attack usually succeeds………10……………………………21

2.71-to-1.00 and higher……Attacker advances…………………0……………………………42

 

We have also done a number of blog posts on the subject (click on our category “Force Ratios”), primarily:

Trevor Dupuy and the 3-1 Rule

You will also see in that blog post another similar chart showing the odds of success at various force ratios.

Anyhow, I kind of think that people should probably quit referencing the 3-to-1 rule. It gives it far more weight and attention than it deserves.