
Response 3 (Breakpoints)

This is in response to a long comment by Clinton Reilly about breakpoints (forced changes in posture) on this thread:

Breakpoints in U.S. Army Doctrine

Reilly starts with a very nice statement of the issue:

Clearly breakpoints are crucial when modelling battlefield combat. I have read extensively about it, using mostly first-hand accounts of battles rather than high-level summaries. Some of the major factors causing it appear to be loss of leadership (e.g. Harald's death at Hastings), loss of belief in the unit's capacity to achieve its objectives (e.g. the retreat of the Old Guard at Waterloo), surprise (which often figured in Mongol successes), overconfidence resulting in impetuous attacks which fail dramatically (e.g. the French attacks at Agincourt and Crecy), and loss of control over the troops (again Crecy and Agincourt). These are some of the main ones I can think of offhand.

The break-point crisis seems to occur against a background of confusion, disorder, mounting casualties, increasing fatigue and loss of morale. Casualties are part of the background but not usually the actual break point itself.

He then states:

Perhaps a way forward in the short term is to review a number of first hand battle accounts (I am sure you can think of many) and calculate the percentage of times these factors and others appear as breakpoints in the literature.

This has been done. In effect, this is what Robert McQuie did in his article, and it was the basis for the DMSI breakpoints study.

Battle Outcomes: Casualty Rates As a Measure of Defeat

Mr. Reilly then concludes:

Why wait for the military to do something? You will die of old age before that happens!

That is distinctly possible. If this really were a simple issue for which one person working for a year could produce a nice definitive answer… it would already have been done!

Let us look at the 1988 Breakpoints study. There was some effort leading up to that point. Trevor Dupuy and DMSI had already looked into the issue. This included developing a database of engagements (the Land Warfare Data Base, or LWDB) and using it to examine the nature of breakpoints. The McQuie article was developed from this database, and it was closely coordinated with Trevor Dupuy. This was part of the effort that led the U.S. Army's Concepts Analysis Agency (CAA) to issue an RFP (Request for Proposal). It was competitive. I wrote the proposal that won the contract award, but the contract was given to Dr. Janice Fain to lead. My proposal was more quantitative in approach than what she actually did. Her effort was more of an intellectual exploration of the issue. I gather this was done with the assumption that there would be a follow-on contract (there never was). So, up until that point, at least a man-year of effort had been expended, and if you count the time to develop the databases used, it was several man-years.

Now, the Breakpoints study itself was headed up by Dr. Janice B. Fain, who worked on it for the better part of a year. Trevor N. Dupuy worked on it part-time. Gay M. Hammerman conducted the interviews with the veterans. Richard C. Anderson researched and created an additional 24 engagements with clear breakpoints for the study (that is DMSI report 117B). Charles F. Hawkins was involved in analyzing the engagements from the LWDB. In all, 39 veterans were interviewed for this effort; many were brought into the office to talk about their experiences (that was truly entertaining). There were also a half-dozen other staff members and consultants involved, including Lt. Col. James T. Price (USA, ret.), Dr. David Segal (sociologist), Dr. Abraham Wolf (research psychologist), Dr. Peter Shapiro (social psychology), and Col. John R. Brinkerhoff (USA, ret.), along with the consultant fees, travel costs, and other expenses that went with them. So the entire effort took at least three man-years. This was what was needed just to get to the point where we were able to take the next step.

This is not something that a single scholar can do. That is why funding is needed.

As to dying of old age before that happens… that may very well be the case. Right now, I am working on two books, one of them under contract. I need to finish those before I look at breakpoints again. After that, I will decide whether to work on a follow-on to America's Modern Wars (called Future American Wars) or a follow-on to War by Numbers (called War by Numbers II… being the creative guy that I am). Of course, neither of these books is selling well… so perhaps my time would be better spent writing another Kursk book, or on any number of other interesting projects on my plate. Anyhow, if I do War by Numbers II, I plan on investing several chapters in addressing breakpoints. This would include using the 1,000+ cases that now populate our combat databases to do some analysis. That is going to take some time. So I may get to it next year or the year after, or I may not. If someone really needs the issue addressed, they really need to contract for it.

Breakpoints in U.S. Army Doctrine

U.S. Army prisoners of war captured by German forces during the Battle of the Bulge in 1944. [Wikipedia]

One of the least studied aspects of combat is battle termination. Why do units in combat stop attacking or defending? Shifts in combat posture (attack, defend, delay, withdrawal) are usually voluntary, directed by a commander, but they can also be involuntary, as a result of direct or indirect enemy action. Why do involuntary changes in combat posture, known as breakpoints, occur?

As Chris pointed out in a previous post, the topic of breakpoints has been addressed by only two known studies since 1954. Most existing military combat models and wargames address breakpoints in at least a cursory way, usually through some calculation based on personnel casualties. Both of the breakpoints studies suggest that involuntary changes in posture are seldom related to casualties alone, however.

Current U.S. Army doctrine addresses changes in combat posture through discussions of culmination points in the attack, and transitions from attack to defense, defense to counterattack, and defense to retrograde. But these all pertain to voluntary changes, not breakpoints.

Army doctrinal literature has little to say about breakpoints, either in the context of friendly forces or potential enemy combatants. The little it does say relates to the effects of fire on enemy forces and is based on personnel and material attrition.

According to ADRP 1-02 Terms and Military Symbols, an enemy combat unit is considered suppressed after suffering 3% personnel casualties or materiel losses, neutralized by 10% losses, and destroyed upon sustaining 30% losses. The sources and methodology for deriving these figures are unknown, although these specific terms and numbers have been a part of Army doctrine for decades.
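For illustration, here is a minimal sketch (in Python) of how a model might apply those doctrinal thresholds. The cutoffs are from ADRP 1-02 as described above; the function name and structure are my own, not anything found in doctrine.

```python
def adrp_fire_effect_status(percent_losses: float) -> str:
    """Classify an enemy unit by cumulative losses, using the ADRP 1-02
    thresholds cited above: 3% suppressed, 10% neutralized, 30% destroyed.
    The function name and structure are illustrative, not doctrinal."""
    if percent_losses >= 30.0:
        return "destroyed"
    if percent_losses >= 10.0:
        return "neutralized"
    if percent_losses >= 3.0:
        return "suppressed"
    return "effective"

# Example: a unit that has suffered 12% losses would be rated "neutralized".
print(adrp_fire_effect_status(12.0))
```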

The joint U.S. Army and U.S. Marine Corps vision of future land combat foresees battlefields that are highly lethal and demanding on human endurance. How will such a future operational environment affect combat performance? Past experience undoubtedly offers useful insights, but there seems to be little interest in seeking out such knowledge.

Trevor Dupuy criticized the U.S. military in the 1980s for its lack of understanding of the phenomenon of suppression and other effects of fire on the battlefield, and its seeming disinterest in studying it. Not much appears to have changed since then.

C-WAM 4 (Breakpoints)

A breakpoint, or involuntary change in posture, is an essential part of modeling. C-WAM does include a breakpoint methodology. According to slide 18 and rule book section 5.7.2, a ground unit below 50% strength can only defend, and it is removed at below 30% strength. I gather this is a breakpoint for a brigade.
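For illustration, that rule can be written out as a minimal sketch (in Python). The list of available postures is borrowed from the combat postures named above (attack, defend, delay, withdrawal); the function itself is my reconstruction, not actual C-WAM code.

```python
def cwam_posture_options(strength_fraction: float) -> list[str]:
    """Postures available to a ground unit under the C-WAM rule described
    above (slide 18 / rule book section 5.7.2): below 50% strength it can
    only defend, and below 30% it is removed from play. This is an
    illustrative reconstruction, not actual C-WAM code."""
    if strength_fraction < 0.30:
        return []  # unit is removed from the game
    if strength_fraction < 0.50:
        return ["defend"]  # forced defensive posture (the breakpoint)
    return ["attack", "defend", "delay", "withdraw"]

# Example: a brigade at 42% strength can only defend.
print(cwam_posture_options(0.42))
```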

C-WAM 2

Let me just quote from Chapter 18 (Modeling Warfare) of my book War by Numbers: Understanding Conventional Combat (pages 288-289):

The original breakpoints study was done in 1954 by Dorothy Clark of ORO [which can be found here].[1] It examined forty-three battalion-level engagements where the units “broke,” including measuring the percentage of losses at the time of the break. Clark correctly determined that casualties were probably not the primary cause of the breakpoint and also declared the need to look at more data. Obviously, forty-three cases of highly variable social science-type data with a large number of variables influencing them are not enough for any form of definitive study. Furthermore, she divided the breakpoints into three categories, resulting in one category based upon only nine observations. Also, as should have been obvious, this data would apply only to battalion-level combat. Clark concluded “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.”[2]

Yet, even with her clear rejection of a percent loss formulation for breakpoints, the 20 to 40 percent casualty breakpoint figures remained in use by the training and combat modeling community. Charts in the 1964 Maneuver Control field manual showed a curve with the probability of unit break based on percentage of combat casualties.[3] Once a defending unit reached around 40 percent casualties, the chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of it halting (type I break) approached 100 percent and the chance of it breaking (type II break) reached 40 percent. These data were for battalion-level combat. Because they were also applied to combat models, many models established a breakpoint of around 30 or 40 percent casualties for units of any size (and often applied to division-sized units).

To date, we have absolutely no idea where these rule-of-thumb formulations came from and despair of ever discovering their source. These formulations persist despite the fact that in fifteen (35%) of the cases in Clark’s study, the battalions had suffered more than 40 percent casualties before they broke. Furthermore, at the division-level in World War II, only two U.S. Army divisions (and there were ninety-one committed to combat) ever suffered more than 30 percent casualties in a week![4] Yet, there were many forced changes in combat posture by these divisions well below that casualty threshold.

The next breakpoints study occurred in 1988.[5] There was absolutely nothing of any significance (meaning providing any form of quantitative measurement) in the intervening thirty-five years, yet there were dozens of models in use that offered a breakpoint methodology. The 1988 study was inconclusive, and since then nothing further has been done.[6]

This seemingly extreme case is a fairly typical example. A specific combat phenomenon was studied only twice in the last fifty years, both times with inconclusive results, yet this phenomenon is incorporated in most combat models. Sadly, similar examples can be pulled for virtually each and every phenomenon of combat being modeled. This failure to adequately examine basic combat phenomena is a problem independent of actual combat modeling methodology.

Footnotes:

[1] Dorothy K. Clark, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, Johns Hopkins University, 1954).

[2] Ibid, page 34.

[3] Headquarters, Department of the Army, FM 105-5 Maneuver Control (Washington, D.C., December, 1967), pages 128-133.

[4] The two exceptions were the U.S. 106th Infantry Division in December 1944, which incidentally continued fighting in the days after suffering more than 40 percent losses, and the Philippine Division, which upon its surrender at Bataan on 9 April 1942 suffered 100 percent losses in one day, in addition to very heavy losses in the days leading up to its surrender.

[5] This was HERO Report number 117, Forced Changes of Combat Posture (Breakpoints) (Historical Evaluation and Research Organization, Fairfax, VA., 1988). The intervening years between 1954 and 1988 were not entirely quiet. See HERO Report number 112, Defeat Criteria Seminar, Seminar Papers on the Evaluation of the Criteria for Defeat in Battle (Historical Evaluation and Research Organization, Fairfax, VA., 12 June 1987), and the significant article by Robert McQuie, “Battle Outcomes: Casualty Rates as a Measure of Defeat,” Army, issue 37 (November 1987). Some of the results of the 1988 study were summarized in the book by Trevor N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

[6] The 1988 study was the basis for Trevor Dupuy’s book: Col. T. N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).
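For illustration, here is a minimal sketch (in Python) of the legacy percent-casualty break curves described in the quoted passage above. Only the endpoint values come from the text; the linear ramp up to those endpoints and all function names are assumptions on my part, since the original Maneuver Control charts were empirical curves.

```python
def legacy_break_probability(role: str, casualty_pct: float) -> dict:
    """Toy reconstruction of the percent-casualty break curves described
    in the quoted passage. Only the endpoints come from the text: a
    defender's chance of breaking approaches 100% at ~40% casualties; an
    attacker's chance of halting (type I break) approaches 100% at ~20%
    casualties, with the chance of a type II break reaching 40%. The
    linear ramp up to those endpoints is an assumption; the manual's
    actual charts were empirical curves."""
    def ramp(x: float, x_max: float, p_max: float = 1.0) -> float:
        return min(max(x / x_max, 0.0), 1.0) * p_max

    if role == "defender":
        return {"break": ramp(casualty_pct, 40.0)}
    if role == "attacker":
        return {"halt (type I)": ramp(casualty_pct, 20.0),
                "break (type II)": ramp(casualty_pct, 20.0, 0.40)}
    raise ValueError("role must be 'attacker' or 'defender'")

# Example: an attacker at 10% casualties under this reconstruction.
print(legacy_break_probability("attacker", 10.0))
```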

Also see:

Battle Outcomes: Casualty Rates As a Measure of Defeat

[NOTE: Post updated to include link to Dorothy Clark’s original breakpoints study.]

Response 2 (Performance of Armies)

In an exchange with one of our readers, he raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and we have published the most extensive work on the subject. Swedish researcher Niklas Zetterling also addresses it in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in “CEV Calculations in Italy, 1943,” The International TNDM Newsletter, volume I, No. 6, pages 21-23. It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy did was compare the performance of both overall forces and individual divisions using his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters of my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose to simply use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone has a disagreement with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But… what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to the opponent, not an absolute measurement that can be compared to other armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but it could only be done by assuming that the combat effectiveness of the U.S. Army in WWII was the same as that of the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank armies from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would then be possible to determine the different factors that make up this difference, and to assign values or weights to those factors. This would be very useful to know, in light of the potential training and organizational value of such knowledge.
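By way of illustration, here is a minimal sketch (in Python) of the simplest form such a comparative measurement can take: averaging casualty exchange ratios across engagements fought between two specific opponents. This is not the QJM/TNDM CEV calculation, nor the statistical method used in War by Numbers; the function name and all data below are hypothetical.

```python
from statistics import mean

def mean_exchange_ratio(engagements: list[dict]) -> float:
    """For each engagement between Army A and Army B, compute B's losses
    per A loss, then average across engagements. A deliberately crude
    comparative measure between opponents who actually fought; it is not
    the QJM/TNDM CEV calculation. All data below are hypothetical."""
    return mean(e["b_losses"] / e["a_losses"] for e in engagements)

# Hypothetical division-level engagement data (for illustration only):
sample = [
    {"a_losses": 1200, "b_losses": 2100},
    {"a_losses": 800, "b_losses": 1500},
    {"a_losses": 950, "b_losses": 1400},
]
print(f"Mean casualty exchange ratio: {mean_exchange_ratio(sample):.2f}")
```

Note that a measure like this only says something about A relative to B in those engagements; it transfers to other opponents, times, and places only under strong assumptions, which is the point made above.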

Response

A fellow analyst posted an extended comment to two of our threads:

C-WAM 3

and

Military History and Validation of Combat Models

Instead of responding in the comments section, I have decided to respond with another blog post.

As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”

Yes, this is true, but I argue that it does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where having a less accurate model is more desirable than having a more accurate one.

Now, what is missing from many of the models I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, and any treatment of human factors, among other matters. Many of these things are already present in these simulations, but are being done incorrectly. Quite simply, they do not realistically portray a range of historical or real combat examples.

He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:

“The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail…”

Let’s take their list made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same with morale or level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen in the last twenty years a quantitative study done to address these issues. And what of terrain and weather? They have been around for a long time.

Army simulations have been around since the late 1950s, so by the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in the previous 40 years. I gather they have not been adequately addressed in the last 20 years either. So the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start on addressing them.

Anyhow, I have little interest in arguing these issues. My interest is in correcting them.

Perla On Dupuy

Dr. Peter Perla, noted defense researcher, wargame designer and expert, and author of the seminal The Art of Wargaming: A Guide for Professionals and Hobbyists, gave the keynote address at the 2017 Connections Wargaming Conference last August. His speech, which served as his valedictory address on the occasion of his retirement from government service, addressed the predictive power of wargaming. In it, Perla recalled a conversation he once had with Trevor Dupuy in the early 1990s:

Like most good stories, this one has a beginning, a middle, and an end. I have sort of jumped in at the middle. So let’s go back to the beginning.

As it happens, that beginning came during one of the very first Connections. It may even have been the first one. It is one of those vivid memories we all have of certain events in life. In my case, it is a short conversation I had with Trevor Dupuy.

I remember the setting well. We were in front of the entrance to the O Club at Maxwell. It was kind of dark, but I can’t recall if it was in the morning before the club opened for our next session, or the evening, before a dinner. Trevor and I were chatting and he said something about wargaming being predictive. I still recall what I said.

“Good grief, Trevor, we can’t even predict the outcome of a Super Bowl game much less that of a battle!” He seemed taken by surprise that I felt that way, and he replied, “Well, if that is true, what are we doing? What’s the point?”

I had my usual stock answers. We wargame to develop insights, to identify issues, and to raise questions. We certainly don’t wargame to predict what will happen in a battle or a war. I was pretty dogmatic in those days. Thank goodness I’m not that way any more!

The question of prediction did not go away, however.

For the rest of Perla’s speech, see here. For a wonderful summary of the entire 2017 Connections Wargaming conference, see here.

 

Spotted In The New Books Section Of The U.S. Naval Academy Library…

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017) 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat, he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

First World War Digital Resources

Informal portrait of Charles E. W. Bean working on official files in his Victoria Barracks office during the writing of the Official History of Australia in the War of 1914-1918. The files on his desk are probably the Operations Files, 1914-18 War, that were prepared by the army between 1925 and 1930 and are now held by the Australian War Memorial as AWM 26. Courtesy of the Australian War Memorial. [Defence in Depth]

Chris and I have both taken to task the highly problematic state of military record-keeping in the digital era. So it is only fair to also highlight the strengths of the Internet for historical research, one of which is the increasing availability of digitized archival holdings, documents, and sources.

Although the posts are a couple of years old now, Dr. Robert T. Foley of the Defence Studies Department at King’s College London has provided a wonderful compilation of links to digital holdings and resources documenting the experiences of many of the belligerents in the First World War. The links include digitized archival holdings and electronic copies of often hard-to-find official histories of ground, sea, and air operations.

Digital First World War Resources: Online Archival Sources

Digital First World War Resources: Online Official Histories — The War on Land

Digital First World War Resources: Online Official Histories — The War at Sea and in the Air

For TDI, the availability of such materials greatly broadens potential sources for research on historical combat. For example, TDI made use of German regional archival holdings to compile data on the use of chemical weapons in urban environments by the separate state armies that formed part of the Imperial German Army in the First World War. Although much of the German Army’s historical archives were destroyed by Allied bombing at the end of the Second World War, a great deal of material survived in regional state archives and in other places, as Dr. Foley shows. Access to the highly detailed official histories is another boon for such research.

The Digital Era hints at unprecedented access to historical resources, and more materials are being added all the time. Current historians should benefit greatly. Future historians, alas, are not as likely to be so fortunate when it comes time to craft histories of the current era.

TDI Reports at DTIC

Just as a quick, easy test, I decided to find out which of The Dupuy Institute’s (TDI) reports are available from the Defense Technical Information Center (DTIC). Our report list is here: http://www.dupuyinstitute.org/tdipub3.htm

We are a private company, but most of these reports were done under contract for the U.S. government. In my past searches of the DTIC files, I found that maybe 40% of Trevor Dupuy’s HERO reports were at DTIC. So I would expect that at least a few of the TDI reports would be filed there.

TDI has 80 reports listed on its site. There are 0 listed on DTIC under our name.

https://publicaccess.dtic.mil/psm/api/service/search/search?&q=%22dupuy+institute%22&site=default_collection&sort=relevance&start=0
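As an aside, that check is easy to script. Below is a minimal sketch (in Python, using the requests library) that reproduces the search via the endpoint quoted above. The endpoint and parameters are copied from the URL in this post, may have changed since, and the response format is not documented here.

```python
import requests

# Reproduce the DTIC search quoted above. The endpoint and parameters are
# copied from the URL in this post; they may have changed since, and the
# response schema is not documented here.
url = "https://publicaccess.dtic.mil/psm/api/service/search/search"
params = {
    "site": "default_collection",
    "q": '"dupuy institute"',
    "sort": "relevance",
    "start": 0,
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
print(resp.json())  # inspect the returned structure for hit counts/titles
```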

There are a significant number of reports listed that are based upon our work, but a search on “Dupuy Institute” yields no actual reports done by us. I searched for a few of our reports by name (combat in cities, situational awareness, enemy prisoners of war, our insurgency work, our Bosnia casualty estimate) and found four:

https://publicaccess.dtic.mil/psm/api/service/search/search?site=default_collection&q=capture+rate+study

Those were four of the eight reports we did as part of the Capture Rate Study. So apparently one of the contract managers was diligent enough to make sure those studies were placed in DTIC (as was our Kursk Data Base), but since then (2001), none of our reports have been placed in DTIC.

Now, I have not checked NTIS and other sources, but I have reason to believe that not much of what we have done in the last 20+ years is archived in government repositories. If you need a copy of a TDI report, you have to come to us.

We are a private company. What happens when we decide to close our doors?

Basements

Basements appear to be very important in the world of studies and analysis. That is where various obscure records and reports are stored. As the industry gets grayer and retires, significant pieces of work are becoming harder to find. Sometimes the easiest way to find these reports is to call someone you know and ask where to find them.

Let me give a few examples. At one point, we were doing an analysis of Lanchester equations in combat modeling, and I was aware that Bob McQuie, formerly of CAA, had done some work on the subject. So I called him. It turns out he had kept a small file of his own work, but he had loaned it to his neighbor as a result of a conversation they had. So he reclaimed the file, two of our researchers drove over to his house, he gave us the file, and we still have it today. It turns out that much of his material is also available through DTIC. A quick DTIC search shows the following: https://publicaccess.dtic.mil/psm/api/service/search/search?site=default_collection&q=mcquie

Of particular interest are his benchmark studies. His work on “breakpoints” and his comments on Lanchester equations are not included in the DTIC listing because they were published in Army, November 1987. I have a copy in my basement. Neither is his article on the 3:1 rule (published in Phalanx, December 1989). He also did some work on regression analysis of historical battles that I have yet to locate.

Battle Outcomes: Casualty Rates As a Measure of Defeat

So, some of his work has been preserved. On the other hand, during that same casualty estimation methodologies study, we also sent two researchers over to another “gray beard’s” house, and he let them look through his basement. We found the very useful study called Report of the Model Input Data and Process Committee, referenced in my book War by Numbers, page 295. It does not show up in DTIC. We could not have found this study without a visit to his basement. He now lives in Florida, where they don’t have basements, so I assume the remaining boxes of materials he had have disappeared.

I am currently trying to locate another major study that was done by SAIC. So far, I have found one former SAIC employee who has two volumes of the multi-volume study. It is not listed in DTIC. To obtain a complete copy of the study, I am afraid I will have to contact someone else and pay to have it copied. Again, I just happened to know whom to talk to to find out what basement it is stored away in.

It is hard to appreciate the unique efforts that go into researching some of these projects. But there is a sense at this end that as the “gray beards” disappear, reports and research efforts are disappearing with them.