
The Battle of Prokhorovka — what does the book consist of

The book consists of:

  1. 638 numbered pages (and 14 pages of front matter)
  2. 75 listed illustrations and maps
  3. Four photo sections
    1. 15 terrain photos
    2. 12 recon photos
    3. 64 battlefield photos
    4. 70 commander photos
  4. One map section with 17 maps
  5. 18 numbered tables
  6. 21 graphs
  7. 44 sidebars
  8. 76 engagement sheets

Just for the record, my original mega-book consisted of 192 engagement sheets. So one could make the argument that this book covers 40% of the Belgorod offensive (at least compared to the original book).
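
As a quick sanity check on that figure (a trivial sketch; the engagement-sheet counts are those given above):

```python
# 76 engagement sheets in this book vs. 192 in the original mega-book.
sheets_new = 76
sheets_original = 192
coverage = sheets_new / sheets_original
print(f"{coverage:.1%}")  # 39.6%, i.e. roughly 40%
```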

The book was edited by the same editor as the original book, Ariane Smith of Capital A: http://www.capitala.net/. Therefore, it is very similar in format and style.

The book can be obtained from Stackpole at: Stackpole Books

Or from Amazon.com at: Buy from Amazon

The Battle of Prokhorovka – 16 chapters

My new book The Battle of Prokhorovka consists of 16 chapters (the original mega-book had 27). The chapters are:

1. Preparing for the Showdown…..page 13
2. The Soviets Prepare…..page 35
3. The Belgorod Offensive: 4-8 July 1943…..page 51
4. The XLVIII Panzer Corps Heads West: 9-11 July 1943…..page 113
5. The Advance on Prokhorovka: 9-11 July 1943…..page 133
6. The Advance on the Severnyii Donets: 9-11 July 1943…..page 203
7. The Situation as of 11 July 1943…..page 229
8. The Air War: 9-18 July 1943…..page 243
9. The Tank Fields of Prokhorovka, 12 July 1943…..page 291
10. SS Panzer Corps Attack Stalls, 13 July 1943…..page 359
11. Soviet Counterattacks against the III Panzer Corps: 12-13 July 1943…..page 375
12. Aftermath of Prokhorovka: 13 July 1943…..page 401
13. Cleaning Up the Donets Triangle: 14-15 July 1943…..page 475
14. The Battlefield is Quiet: 16-17 July 1943…..page 511
15. The German Withdrawal: 18-24 July 1943…..page 539
16. Post-Mortem…..page 559

There are only two short appendices in this book (the original book had 7 appendices totaling 342 pages):

Appendix I: German and Soviet Terminology…..page 615
Appendix II: The Engagements…..page 623

The book can be obtained from Stackpole at: Stackpole Books

Or from Amazon.com at: Buy from Amazon

Million Dollar Books

Most of our work at The Dupuy Institute involved contracts from the U.S. Government. These were often six-figure efforts. For example, the Kursk Data Base was funded for three years (1993-1996) and involved a dozen people. The Ardennes Campaign Simulation Data Base (ACSDB) was actually a larger effort (1987-1990). Our various combat databases, like the DLEDB, BODB and BaDB, were created by us independent of any contractual effort. They were originally based upon the LWDB (which became CHASE), the work we did on Kursk and Ardennes, the engagements we added because of our Urban Warfare studies, our Enemy Prisoner of War Capture Rates studies, our Situational Awareness study, our internal validation efforts, several modeling-related contracts from Boeing, and so forth. All of these were expanded and modified bit by bit as a result of a series of contracts from different sources. So, certainly, over time, hundreds of thousands of dollars have been spent on each of these efforts, involving the work of a half-dozen or more people.

So, when I sit down to write a book like Kursk: The Battle of Prokhorovka (based on the Kursk Data Base) or America’s Modern Wars (based on our insurgency studies) or War by Numbers (which used our combat databases and significant parts of our various studies), these are books developed from an extensive collection of existing work. Certainly hundreds of thousands of dollars and the work of at least six to twelve people were involved in the studies and analysis that preceded these books. In some cases, like our insurgency studies, it was clearly more than a million dollars.

This is a unique situation: to be able to write a book based upon a million dollars’ worth of research and analysis. It is something that I could never have done as a single scholar, professor or teacher somewhere. It is not work I could have done working for the U.S. government. These are not books that I could have written based upon only my own work and research.

In many respects, this is what needs to be the norm in the industry. Research and analysis efforts need to be properly funded and conducted by teams of people. There is a limit to what a single scholar, working in isolation, can do. Being with The Dupuy Institute allowed me to conduct research and analysis above and beyond anything I could have done on my own.

Summation of our Validation Posts

This extended series of posts about validation of combat models was originally started by Shawn Woodford’s post on future modeling efforts and the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

This post apparently irked some people at TRADOC and they wrote an article in the December issue of the Phalanx referencing his post and criticizing it. This resulted in the following seven responses from me:

Engaging the Phalanx

Validation

Validating Attrition

Physics-based Aspects of Combat

Historical Demonstrations?

SMEs

Engaging the Phalanx (part 7 of 7)

This was probably overkill…..but guys who write 1,662-page books sometimes tend to be a little wordy.

While it is very important to identify a problem, it is also helpful to show the way forward. Therefore, I decided to discuss what data bases were available for validation. After all, I would like to see the modeling and simulation efforts move forward (and right now, they seem to be moving backward). This led to the following nine posts:

Validation Data Bases Available (Ardennes)

Validation Data Bases Available (Kursk)

The Use of the Two Campaign Data Bases

The Battle of Britain Data Base

Battles versus Campaigns (for Validation)

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

Other TDI Data Bases

Other Validation Data Bases

There were also a few other validation issues that had come to mind while I was writing these blog posts, so this led to the following series of three posts:

Face Validation

Validation by Use

Do Training Models Need Validation?

Finally, there were a few other related posts scattered through this rather extended diatribe. They include the following six posts:

Paul Davis (RAND) on Bugaboos

Diddlysquat

TDI Friday Read: Engaging The Phalanx

Combat Adjudication

China and Russia Defeats the USA

Building a Wargamer

That kind of ends this discussion on validation. It kept me busy for a while. I am not sure if you were entertained or informed by it. It is time for me to move on to another subject, though I have not yet figured out what that will be.

Face Validation

The phrase “face validation” shows up in our blog post earlier this week on Combat Adjudication. It is a phrase I have heard many times over the decades, sometimes from very established Operations Researchers (OR). So what does it mean?

Well, it is discussed in the Department of the Army Pamphlet 5-11: Verification, Validation and Accreditation of Army Models and Simulations: Pamphlet 5-11

Their first mention of it is on page 34: “SMEs [Subject Matter Experts] or other recognized individuals in the field of inquiry. The process by which experts compare M&S [Modeling and Simulation] structure and M&S output to their estimation of the real world is called face validation, peer review, or independent review.”

On page 35 they go on to state: “RDA [Research, Development, and Acquisition]….The validation method typically chosen for this category of M&S is face validation.”

And on page 36 under Technical Methods: “Face validation. This is the process of determining whether an M&S, on the surface, seems reasonable to personnel who are knowledgeable about the system or phenomena under study. This method applies the knowledge and understanding of experts in the field and is subject to their biases. It can produce a consensus of the community if the number of breadth of experience of the experts represent the key commands and agencies. Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” [I put the last part in bold]

Page 36: “Functional decomposition (sometimes known as piecewise validation)….When used in conjunction with face validation of the overall M&S results, functional decomposition is extremely useful in reconfirming previous validation of a recently modified portions of the M&S.”

I have not done a survey of all Army, Air Force, Navy, Marine Corps, Coast Guard or Department of Defense (DOD) regulations. This one is enough.

So, “face validation” is asking one or more knowledgeable (or more senior) people if the model looks good. I guess it really depends on who the expert is and to what depth they look into it. I have never seen a “face validation” report (validation reports are also pretty rare).

Whose “faces” do they use? Are they outside independent people or people inside the organization (or the model designer himself)? I am kind of an expert, yet I have never been asked. I do happen to be one of the more experienced model validation people out there, having managed or directly created six or more validation databases and having conducted five validation-like exercises. When you consider that most people have not done one, should I be a “face” they contact? Or is this process often just to “sprinkle holy water” on the model and be done?

In the end, I gather that for practical purposes face validation means that if a group of people think it is good, then it is good. In my opinion, “face validation” is often just an argument that allows people to explain away or simply dismiss the need for any rigorous analysis of the model. The pamphlet does note that “Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” How often have we seen the subsequent comprehensive validation effort? Very, very rarely. It appears that “face validation” is the end point. Is this really part of the scientific method?

A Time for Crumpets

In 1985 Charles MacDonald published A Time for Trumpets, one of the better books on the Battle of the Bulge (and there are actually a lot of good works on this battle). In it he recounted a story of why the German Panzer Lehr Division, commanded by General Fritz Bayerlein, was held up for the better part of a day during the Battle for Bastogne. To quote:

For all Bayerlein’s concern about that armored force, he himself was at the point of directing less than full attention to conduct of the battle. In a wood outside Mageret, his troops had found a platoon from an American field hospital, and among the staff, a “young, blonde, and beautiful” American nurse attracted Bayerlein’s attention. Through much of December 19, he “dallied” with the nurse, who “held him spellbound.” [page 295]

Apparently MacDonald’s book was not the only source of this story: http://theminiaturespage.com/boards/msg.mv?id=186079

Now, I don’t know whether “dallied” means that they were having tea and crumpets or something more intimate. The story apparently comes from Bayerlein himself, so something probably happened, but exactly what is not known. He was relieved of command after the failed offensive.

Fritz Bayerlein, March 1944 (Source: Bundesarchiv, Bild 146-1978-033-02/Dinstueler/CC-BY-SA 3.0)

When we met with Charles MacDonald in 1989, I did ask him about this story. He recounted that he had recently been at a U.S. veterans gathering talking to some other people when a lady came up to him and told him that she knew the nurse in the story. MacDonald said he would get back to her…but then could not locate her later. So this was an opportunity to confirm and get more details of the story, but it was lost (to history). Still, it does sort of confirm that there is some basis to Bayerlein’s story.

Now, this discussion with MacDonald is from memory, but I believe (the authors) Jay Karamales, Richard Anderson and possibly Curt Johnson were also at that dinner, and they may remember the conversation (differently?).

Anyhow, A Time for Strumpets, er, Trumpets, is a book worth reading.

Other Validation Data Bases

There have been (only) three other major historical validations done that I am aware of that we were not involved in. They are: 1) the validation of the ATLAS model against the France 1940 campaign, done in the 1970s; 2) the validation of the Vector model using the Golan Heights campaign of 1973; and 3) the validation of SIMNET/JANUS using 73 Easting data from the 1991 Gulf War. I am not aware of any other major validation efforts done in the last 25 years other than what we have done (there is one face validation done in 2017 that I will discuss in a later post).

I have never seen a validation report for the ATLAS model, nor any reference to its research or data from the France 1940 campaign. I suspect they do not exist. The validation of Vector was only done for unit movement; they did not validate the attrition or combat functions, which were inserted from the actual battle. The validation was done in-house by Vector, Inc. I have seen the reports from that effort but am not aware of any databases or special research used. See Chapter 18 of War by Numbers for more details, and also our 1996 newsletter on the subject: http://www.dupuyinstitute.org/pdf/v1n4.pdf

So, I know of only one useful validation database out there that was not created by us. This is the Battle of 73 Easting. It was created under contract and used for validation of the JTLS (Joint Theater-Level Simulation).

But the Battle of 73 Easting is a strange one-sided affair. First, it was fought in a sandstorm; visibility was severely limited. Our modern systems allowed us to see the Iraqis, but the Iraqis could not see us. It was therefore a very one-sided affair in which the U.S. had maybe 6 soldiers killed and 19 wounded, and lost one Bradley fighting vehicle. The Iraqis may have suffered 600-1,000 casualties and lost dozens of tanks to combat (and dozens more to aerial bombardment in the days and weeks before the battle). According to Wikipedia, they lost 160 tanks and 180 armored personnel carriers. It was a shooting gallery. I did have a phone conversation with some of the people who did the veteran interviews for this effort. They said that this fight devolved to the point that the U.S. troops were trying to fire in front of the Iraqi soldiers to encourage them to surrender. Over 1,300 Iraqis were taken prisoner.

This battle is discussed in the Wikipedia article here: https://en.wikipedia.org/wiki/Battle_of_73_Easting

I did get the validation report on this, and it is somewhere in our files (although I have not seen it for years). I do remember one significant aspect of the validation effort: while it indeed got the correct result (all the Iraqi forces were destroyed), it did so with the Americans using four times as much ammunition as they did historically. Does this mean that the model’s attrition calculation was off by a factor of four?
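
The arithmetic behind that question is simple enough to sketch (the numbers are normalized and illustrative, not taken from the validation report):

```python
# Illustrative only: if a model reproduces the historical outcome but
# expends four times the historical ammunition, its implied per-round
# effectiveness is one quarter of the historical value.
kills = 1.0                # normalized outcome (same in model and history)
historical_rounds = 1.0    # normalized historical expenditure
model_rounds = 4.0         # the model reportedly needed ~4x as much

ratio = (kills / model_rounds) / (kills / historical_rounds)
print(ratio)  # 0.25, i.e. per-round lethality off by a factor of four
```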

Anyhow, I gather the database and the validation report are available from the U.S. government. Of course, it is a very odd battle, and doing a validation against just one odd, one-sided battle runs the danger of the “N=1” problem. It is probably best to validate against multiple battles.

A more recent effort (2017) that included some validation effort is discussed in a report called “Using Combat Adjudication to Aid in Training for Campaign Planning.” I will discuss this in a later blog post.

Now, there are a number of other databases out there addressing warfare, for example the Correlates of War (COW) databases (see: COW), the databases maintained by the Stockholm International Peace Research Institute (SIPRI) (see: SIPRI), and other such efforts. We have never used these, but we do not think that by their nature they are useful for validating combat models at the division, battalion or company level.

Other TDI Data Bases

What we have listed in the previous articles is what we consider the six best databases to use for validation. The Ardennes Campaign Simulation Data Base (ACSDB) was used for a validation effort by CAA (Center for Army Analysis). The Kursk Data Base (KDB) was never used for a validation effort but was used, along with Ardennes, to test Lanchester equations (they failed).

The Use of the Two Campaign Data Bases
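
For readers who have not run into them, the Lanchester equations mentioned above are a pair of coupled attrition equations. A minimal sketch of the “square law” variant follows; the force strengths and effectiveness coefficients are purely illustrative, not drawn from the Kursk or Ardennes data:

```python
# Minimal Euler-integration sketch of Lanchester's square law:
#   dR/dt = -b * B    (Red losses proportional to Blue strength)
#   dB/dt = -r * R    (Blue losses proportional to Red strength)
# All numbers below are illustrative, not historical.
def lanchester_square(R, B, r, b, dt=0.01, t_max=50.0):
    """Integrate until one side is annihilated or time runs out."""
    t = 0.0
    while t < t_max and R > 0 and B > 0:
        R, B = max(R - b * B * dt, 0.0), max(B - r * R * dt, 0.0)
        t += dt
    return R, B

# With equal effectiveness coefficients, the square law predicts that the
# larger force wins with roughly sqrt(R0**2 - B0**2) survivors; here
# sqrt(1000**2 - 800**2) = 600.
R_final, B_final = lanchester_square(R=1000.0, B=800.0, r=0.05, b=0.05)
print(round(R_final), round(B_final))  # roughly 600 and 0
```

Testing such equations against the campaign data amounted to checking whether historical daily losses actually scaled with opposing strengths in this fashion; as noted above, they did not fit.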

The Battle of Britain Data Base to date has not been used for anything that we are aware of. As the program we were supporting was classified, they may have done some work with it that we are not aware of, but I do not think that is the case.

The Battle of Britain Data Base

Our three battles databases (the division-level, battalion-level and company-level data bases) have all been used for validating our own TNDM (Tactical Numerical Deterministic Model). These efforts have been written up in our newsletters (here: http://www.dupuyinstitute.org/tdipub4.htm) and briefly discussed in Chapter 19 of War by Numbers. These are very good databases to use for validating a combat model or testing a casualty estimation methodology. We have also used them for a number of other studies (Capture Rate, Urban Warfare, Lighter-Weight Armor, Situational Awareness, Casualty Estimation Methodologies, etc.). They are extremely useful tools for analyzing the nature of conflict and how various factors affect it. They are, of course, unique to The Dupuy Institute, and for obvious business reasons we do keep them close hold.

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

We do have a number of other databases that have not been used as much. There is a list of 793 conflicts from 1898-1998 that we have yet to use for anything (the WACCO: Warfare, Armed Conflict and Contingency Operations database). There is the Campaign Data Base (CaDB) of 196 cases from 1904 to 1991, which was used for the Lighter-Weight Armor study. There are three databases mostly made up of cases from the original Land Warfare Data Base (LWDB) that did not fit into our division-level, battalion-level and company-level data bases. They are the Large Action Data Base (LADB) of 55 cases from 1912-1973, the Small Action Data Base (SADB) of 5 cases, and the Battles Data Base (BaDB) of 243 cases from 1600-1900. We have not used these three databases for any studies, although the BaDB is used for analysis in War by Numbers.

Finally, there are three databases on insurgencies, interventions and peacekeeping operations that we have developed. The first was the Modern Contingency Operations Data Base (MCODB), which we developed for the Bosnia estimate we did for the Joint Staff in 1995. This is discussed in Appendix II of America’s Modern Wars. It then morphed into the Small Scale Contingency Operations (SSCO) database, which we used for the Lighter-Weight Armor study. We then did the Iraq Casualty Estimate in 2004, and a significant part of the SSCO database was used to create the Modern Insurgency Spread Sheets (MISS). This is all discussed in some depth in my book America’s Modern Wars.

None of these, except the Campaign Data Base and the Battles Data Base (1600-1900), are good for use in a model validation effort. Use of the Campaign Data Base should be supplementary to validation with another database, much as we used it in the Lighter-Weight Armor study.

Now, there have been three other major historical validation efforts done that we were not involved in. I will discuss their supporting data in my next post on this subject.

Battalion and Company Level Data Bases

Since the collapse of the Soviet Union in 1991, the need and desire to model combat at the division-level has declined. The focus has shifted to lower levels of combat. As such, we have created the Battalion-Level Operations Data Base (BLODB) and the Company-Level Actions Data Base (CLADB).

The challenge for both of these databases is to find actions that have good data for both sides. It is the nature of military organizations that divisions have the staffs and record keeping that allow one to model them. These records are often (but not always!) preserved. So it is possible to assemble the data for both sides of an engagement at division level. This is true through at least World War II (up through 1945). After that, getting unit records from both sides is difficult: usually one or both of the opponents are still keeping their records classified or close hold. This is why we ended up posting on this subject:

The Sad Story Of The Captured Iraqi DESERT STORM Documents

And:

So Why Are Iraqi Records Important?


Just to give an example of the difficulty of creating battalion-level engagements: for the southern offensive around Belgorod (Battle of Kursk) from 4-18 July 1943, I was able to create 192 engagements using the unit records for both sides. I have yet to create a single battalion-level engagement from those records. The only detailed description of a battalion-level action offered in the German records is of a mop-up operation done by the 74th Engineer Battalion, and we have no idea who they were facing or what their strength was. We do have strengths at times for various German battalions, and we sometimes have strength and losses for some of the Soviet infantry and tank regiments, so it might be possible to work something up with a little estimation, but it certainly cannot be done systematically as we have for division-level engagements. As the U.S. and British armies (and the USMC) tend to have better battalion-level record keeping than most other armies, it is possible to work something up from their records, if you can put together anything on their opponents. So far, our work on battalion-level and company-level combat has been more of a grab-bag, catch-as-catch-can effort that we have done over time.

Our battalion-level data base consists of 127 cases, covering 1918 to 1991. It is described here: http://www.dupuyinstitute.org/data/blodb.htm. The blurry photo at the start of this post is from that database.

Our company-level data base is more recent; it has not yet been set up as an Access data base. It consists of 98 cases from 1914 to 2000.

The BLODB was used for the battalion-level validation of the TNDM. This is discussed briefly in Chapter 19 of War by Numbers. These engagements are discussed in depth in four issues of our International TNDM Newsletter (see Vol. 1, Numbers 2, 4, 5 and 6 here: http://www.dupuyinstitute.org/tdipub4.htm).

The CLADB was used for a study done for Boeing on casualty rates compared to unit sizes in combat. This is discussed in depth in Chapter 12: The Nature of Lower Levels of Combat in War by Numbers.

Both databases are in need of expansion. To date, we have not found anyone willing to fund such an effort.