Category Modeling, Simulation & Wargaming


Soldiers from Britain’s Royal Artillery train in a “virtual world” during Exercise Steel Sabre, 2015 [Sgt Si Longworth RLC (Phot)/MOD]

Military History and Validation of Combat Models

A Presentation at MORS Mini-Symposium on Validation, 16 Oct 1990

By Trevor N. Dupuy

In the operations research community there is some confusion as to the respective meanings of the words “validation” and “verification.” My definition of validation is as follows:

“To confirm or prove that the output or outputs of a model are consistent with the real-world functioning or operation of the process, procedure, or activity which the model is intended to represent or replicate.”

In this paper the word “validation” with respect to combat models is assumed to mean assurance that a model realistically and reliably represents the real world of combat. Or, in other words, given a set of inputs which reflect the anticipated forces and weapons in a combat encounter between two opponents under a given set of circumstances, the model is validated if we can demonstrate that its outputs are likely to represent what would actually happen in a real-world encounter between these forces under those circumstances.

Thus, in this paper, the word “validation” has nothing to do with the correctness of computer code, or the apparent internal consistency or logic of relationships of model components, or with the soundness of the mathematical relationships or algorithms, or with satisfying the military judgment or experience of one individual.

True validation of combat models is not possible without testing them against modern historical combat experience. And so, in my opinion, a model is validated only when it will consistently replicate a number of military history battle outcomes in terms of: (a) Success-failure; (b) Attrition rates; and (c) Advance rates.

“Why,” you may ask, “use imprecise, doubtful, and outdated history to validate a modern, scientific process? Field tests, experiments, and field exercises can provide data that is often instrumented, and certainly more reliable than any historical data.”

I recognize that military history is imprecise; it is only an approximate, often biased and/or distorted, and frequently inconsistent reflection of what actually happened on historical battlefields. Records are contradictory. I also recognize that there is an element of chance or randomness in human combat which can produce different results in otherwise apparently identical circumstances. I further recognize that history is retrospective, telling us only what has happened in the past. It cannot predict, if only because combat in the future will be fought with different weapons and equipment than were used in historical combat.

Despite these undoubted problems, military history provides more, and more accurate, information about the real world of combat, and how human beings behave and perform under varying circumstances of combat, than it is possible to derive or compile from any other source. Despite some discrepancies, patterns are unmistakable and consistent. There is always a logical explanation for any individual deviations from the patterns. Historical examples that are inconsistent, or that are counter-intuitive, must be viewed with suspicion as possibly being poor or false history.

Of course absolute prediction of a future event is practically impossible, although not necessarily so theoretically. Any speculations which we make from tests or experiments must have some basis in terms of projections from past experience.

Training or demonstration exercises, proving ground tests, and field experiments all lack the one most pervasive and most important component of combat: fear in a lethal environment. There is no way in peacetime, or in non-battlefield exercises, tests, or experiments, to be sure that the results are consistent with what would have been the behavior or performance of individuals or units or formations facing hostile firepower on a real battlefield.

We know from the writings of the ancients (for instance Sun Tze—pronounced Sun Dzuh—and Thucydides) that have survived to this day that human nature has not changed since the dawn of history. The human factor, the way in which humans respond to stimuli or circumstances, is the most important basis for speculation and prediction. What about the “scientific” approach of those who insist that we can have no confidence in the accuracy or reliability of historical data, that it is therefore unscientific, and therefore that it should be ignored? These people insist that only “scientific” data should be used in modeling.

In fact, every model is based upon fundamental assumptions that are intuitive and unprovable. The first step in the creation of a model is a step away from scientific reality in seeking a basis for an unreal representation of a real phenomenon. I have shown that the unreality is perpetuated when we use other imitations of reality as the basis for representing reality. History is less than perfect, but to ignore it, and to use only data that is bound to be wrong, assures that we will not be able to represent human behavior in real combat.

At the risk of repetition, and even of protesting too much, let me assure you that I am well aware of the shortcomings of military history:

The record which is available to us, which is history, only approximately reflects what actually happened. It is incomplete. It is often biased, it is often distorted. Even when it is accurate, it may be reflecting chance rather than normal processes. It is neither precise nor consistent. But it provides more, and more accurate, information on the real world of battle than is available from the most thoroughly documented field exercises, proving ground tests, or laboratory or field experiments.

Military history is imperfect. At best it reflects the actions and interactions of unpredictable human beings. We must always realize that a single historical example can be misleading for either of two reasons: (1) The data may be inaccurate, or (2) The data may be accurate, but untypical.

Nevertheless, history is indispensable. I repeat that the most pervasive characteristic of combat is fear in a lethal environment. For all of its imperfections, military history, and only military history, represents what happens under the environmental condition of fear.

Unfortunately, and somewhat unfairly, the reported findings of S.L.A. Marshall about human behavior in combat, which he reported in Men Against Fire, have recently been discounted by revisionist historians who assert that he never could have physically performed the research on which the book’s findings were supposedly based. This has raised doubts about Marshall’s assertion that 85% of infantry soldiers didn’t fire their weapons in combat in World War II. That dramatic and surprising assertion was first challenged in a New Zealand study which found, on the basis of painstaking interviews, that most New Zealanders fired their weapons in combat. Thus, either Americans were different from New Zealanders, or Marshall was wrong. And now American historians have demonstrated that Marshall had neither the time nor the opportunity to conduct the battlefield interviews which he claimed were the basis for his findings.

I knew Marshall moderately well. I was fully as aware of his weaknesses as of his strengths. He was not a historian. I deplored the imprecision and lack of documentation in Men Against Fire. But the revisionist historians have underestimated the shrewd journalistic assessment capability of “SLAM” Marshall. His observations may not have been scientifically precise, but they were generally sound, and his assessment has been shared by many American infantry officers whose judgments I also respect. As to the New Zealand study, how many people will, after the war, admit that they didn’t fire their weapons?

Perhaps most important, however, in judging the assessments of SLAM Marshall, is a recent study by a highly respected British operations research analyst, David Rowland. Using impeccable OR methods, Rowland has demonstrated that Marshall’s assessment of the inefficient performance, or non-performance, of most soldiers in combat was essentially correct. An unclassified version of Rowland’s study, “Assessments of Combat Degradation,” appeared in the June 1986 issue of the Royal United Services Institution Journal.

Rowland was led to his investigations by the fact that soldier performance in field training exercises, using the British version of MILES technology, was not consistent with historical experience. Even after allowances for degradation from the theoretical proving ground capability of weapons, defensive rifle fire almost invariably stopped any attack in these field trials. But history showed that attacks were often, indeed usually, successful. He therefore began a study in which he made both imaginative and scientific use of historical data from over 100 small unit battles in the Boer War and the two World Wars. He demonstrated that when troops are under fire in actual combat, there is an additional degradation of performance by a factor ranging between 7 and 10. A degradation virtually of an order of magnitude! And this, mind you, on top of a comparable built-in degradation to allow for the difference between field conditions and proving ground conditions.
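To illustrate the scale of this compounding (the starting hit probability below is my own illustrative number, not Rowland's):

$$P_{\text{combat}} \approx \frac{P_{\text{proving ground}}}{d_{\text{field}} \cdot d_{\text{combat}}}, \qquad d_{\text{field}},\, d_{\text{combat}} \approx 7 \text{ to } 10$$

A weapon achieving, say, a 0.5 single-shot hit probability on the proving ground would thus deliver something on the order of $0.5 / (7 \times 7) \approx 0.01$ under actual fire.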

Not only does Rowland’s study corroborate SLAM Marshall’s observations, it also shows conclusively that field exercises, training competitions, and demonstrations give results so different from real battlefield performance as to render them useless for validation purposes.

Which brings us back to military history. For all of the imprecision, internal contradictions, and inaccuracies inherent in historical data, at worst the deviations are generally far less than a factor of 2.0. Set against the order-of-magnitude degradation Rowland found in field trials, this makes historical data at least four times more reliable than field test or exercise results.

I do not believe that history can ever repeat itself. The conditions of an event at one time can never be precisely duplicated later. But, bolstered by the Rowland study, I am confident that history paraphrases itself.

If large bodies of historical data are compiled, the patterns are clear and unmistakable, even if slightly fuzzy around the edges. Behavior in accordance with this pattern is therefore typical. As we have already agreed, sometimes behavior can be different from the pattern, but we know that it is untypical, and we can then seek the reason, which invariably can be discovered.

This permits what I call an actuarial approach to data analysis. We can never predict precisely what will happen under any circumstances. But the actuarial approach, with ample data, provides confidence that the patterns reveal what is likely to happen under those circumstances, even if the actual results in individual instances vary to some extent from this “norm” (to use the Soviet military-historical expression).

It is relatively easy to take into account the differences in performance resulting from new weapons and equipment. The characteristics of the historical weapons and the current (or projected) weapons can be readily compared, and adjustments made accordingly in the validation procedure.

In the early 1960s an effort was made at SHAPE Headquarters to test the ATLAS model against World War II data for the German invasion of Western Europe in May 1940. The first excursion had the Allies ending up on the Rhine River. This seemed quite reasonable: the Allies substantially outnumbered the Germans, they had more tanks, and their tanks were better. However, despite these Allied advantages, the actual events of 1940 had not matched what ATLAS was now predicting. So the analysts did a little “fine tuning” (a splendid term for fudging). After the so-called adjustments, they tried again, and ran another excursion. This time the model had the Allies ending up in Berlin. The analysts (may the Lord forgive them!) were quite satisfied with the ability of ATLAS to represent modern combat. (Or at least they said so.) Their official conclusion was that the historical example was worthless, since weapons and equipment had changed so much in the preceding 20 years!

As I demonstrated in my book, Options of Command, the problem was that the model was unable to represent the German strategy or to reflect the relative combat effectiveness of the opponents. The analysts should have reached a different conclusion: ATLAS had failed validation, because a model that cannot with reasonable faithfulness and consistency replicate historical combat experience will certainly be unable to validly reflect current or future combat.

How, then, do we account for what I have said about the fuzziness of patterns, and the fact that individual historical examples may not fit the patterns? I will give you my rules of thumb:

  1. The battle outcome should reflect historical success-failure experience about four times out of five.
  2. For attrition rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
  3. For the advance rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
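As a minimal sketch of how these rules of thumb might be mechanized (the function and data layout here are hypothetical, not the interface of any actual validation suite):

```python
from statistics import mean

def meets_dupuy_rules(model_runs, history):
    """Check Dupuy's three rules of thumb against a set of historical
    scenarios (he suggests five). Each record is a dict with keys
    'attacker_won' (bool), 'attrition' and 'advance' (floats)."""
    n = len(history)
    # Rule 1: outcomes should match history about four times out of five.
    matches = sum(m["attacker_won"] == h["attacker_won"]
                  for m, h in zip(model_runs, history))
    outcome_ok = matches / n >= 0.8
    # Rules 2 and 3: model averages within a factor of ~1.5 of history.
    def within_factor(key, factor=1.5):
        ratio = mean(r[key] for r in model_runs) / mean(r[key] for r in history)
        return 1 / factor <= ratio <= factor
    return outcome_ok and within_factor("attrition") and within_factor("advance")
```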

Just as the heavens are the laboratory of the astronomer, so military history is the laboratory of the soldier and the military operations research analyst. The scientific basis for both astronomy and military science is the recording of the movements and relationships of bodies, and then analysis of those movements. (In the one case the bodies are heavenly, in the other they are very terrestrial.)

I repeat: Military history is the laboratory of the soldier. Failure of the analyst to use this laboratory will doom him to live with the scientific equivalent of Ptolemaic astronomy, whereas he could use the evidence available in his laboratory to progress to the military science equivalent of Copernican astronomy.

Against the Panzers

The book that came out of the A2/D2 Study (Anti-Armor Defense Data Study) was Against the Panzers: United States Infantry Versus German Tanks, 1944-1945, by Allyn R. Vannoy and Jay Karamales.

The graphics person for my three books and for the images on this website is Jay Karamales. Jay is a multi-talented person whose primary occupation is programming. Apparently the challenge of writing a book while working a full-time job was stressful enough that he never tried it again. Unfortunately, there was never an Against the Panzers II, although I gather he did some work on it.

For a taste of Mr. Karamales’ book, I recommend you take a look at his article in the TNDM Newsletter: http://www.dupuyinstitute.org/pdf/v1n6.pdf

A2/D2 Study

A2/D2 Study = Anti-Armor Defense Data Study.

In the last days of the Soviet Union—before anyone realized they *were* the last days—the NATO nations were still doing all they could to prepare for a possible Soviet onslaught into Western Europe. They had spent decades developing combat models to help them predict where the blow would fall, where defense would be critical, where logistics would make the difference, and what mix of forces could survive. Their main problem was that they did not know how far they could trust those models. How could they validate them? Perhaps if the models could reverse-engineer the past, they could be relied upon to predict the future.

To that end, the American Department of Defense (DoD) and (particularly) the British Defence Operational Analysis Establishment (DOAE) undertook to collect data about historical battles that resembled the battles they expected to be fighting, with the aim of feeding that data into their models and seeing how much the models’ results resembled the historical outcomes of those battles. The thinking went that if the models could produce a result similar to history, they could be confident that feeding in modern data would produce a realistic result and teach them how to adjust their dispositions for optimal results.

One of the battles that NATO expected to fight was a Soviet armored drive through the Fulda Gap, a relatively flat corridor through otherwise rough terrain in south-central West Germany. The battle that most resembled such an operation, in the minds of the planners, was the December 1944 surprise attack by the German Army into the Ardennes Forest region along the German/Luxembourg/Belgian border, which became known as the Battle of the Bulge for the wedge-shaped salient it drove into American lines. As the British involvement in this epic battle—what Churchill called the greatest battle in the history of the U.S. Army—was limited to a holding action by XXX Corps, the DOAE delegated collecting the relevant data for this battle to the DoD. The responsible element of the DoD was the Army’s Concepts Analysis Agency (CAA), which in turn hired defense contractor Science Applications International Corporation (SAIC) to perform the data collection and study.

In late 1990 SAIC began in-depth research, consisting of archival reviews and interviews of surviving veterans, for a project which hoped to identify engagements down to vehicle-on-vehicle actions, with rounds expended, ammunition types, ranges, and other quantitative data which could be fed into models. Ultimately the study team, led by former HERO researcher and Trevor Dupuy protégé Jay Karamales, identified and recorded details for 56 combat actions from the ETO in 1944-1945, most from the Battle of the Bulge, and the detailed data from these engagements was used in the validation efforts for various combat models. This quantitative data, along with a copious amount of anecdotal information, became the basis for Karamales’ 1996 book with his co-author Allyn Vannoy, Against the Panzers: United States Infantry versus German Tanks, 1944-1945: A History of Eight Battles Told through Diaries, Unit Histories and Interviews.

Copies of this study are available at DTIC. If you put “saic a2d2” into a search engine you should find all the volumes in PDF format on the DTIC website. As an example, http://www.dtic.mil/dtic/tr/fulltext/u2/a232910.pdf or http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA284378

 

Cost of Creating a Data Base

Invariably, especially with a new book coming out (War by Numbers), I expect to get requests for copies of our data bases. In fact, I already have.

Back around 1987 or so, a very wise man (Curt Johnson, VP of HERO) estimated that it took 3 man-days to create an engagement for the LWDB (Land Warfare Data Base). The LWDB was the basis for creating many of our later data bases, including the DLEDB (Division Level Engagement Data Base). My experience over time is that this estimate is low, especially if you are working with primary sources (unit records) for both sides. I think it may average more like 6 man-days an engagement if based upon unit records (this includes the time to conduct research).

But going with Curt’s estimate, let’s take the DLEDB of 752 cases and re-create it. This would take 3 man-days times 752 engagements = 2,256 man-days, or about 9 man-years of effort. Now multiply 9 man-years by a loaded professional rate. A loaded man-year is the cost of a person’s labor plus indirect costs (vacation, SS and Medicare contributions, health insurance, illness, office space, etc.), general and administrative costs (corporate expenses not included in the indirect costs, including senior management and marketing), and any fee or profit. The load is invariably at least 60% of direct costs and usually closer to 100% (and I worked at one company where it was 200% of direct costs). So a loaded man-year may be as low as $120,000, but for outfits like RAND or CNA it is certainly much higher. Nine man-years times $120,000 = $1,080,000.
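The arithmetic, as a quick sketch (the 250 workdays per man-year is my assumption for the conversion; the other figures are those quoted above):

```python
ENGAGEMENTS = 752            # cases in the DLEDB
MAN_DAYS_PER_ENGAGEMENT = 3  # Curt Johnson's (low) estimate
WORKDAYS_PER_YEAR = 250      # assumed workdays in a man-year
LOADED_MAN_YEAR = 120_000    # low-end loaded rate, in USD

man_days = ENGAGEMENTS * MAN_DAYS_PER_ENGAGEMENT  # 2,256 man-days
man_years = man_days / WORKDAYS_PER_YEAR          # ~9 man-years
cost = man_years * LOADED_MAN_YEAR                # ~$1.08 million
print(f"{man_days:,} man-days = {man_years:.1f} man-years = ${cost:,.0f}")
```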

Would it really cost more than a million dollars to re-create the DLEDB? If one started from scratch, certainly. Probably (much) more, because of all the research into the Ardennes and Kursk that we did as part of those database projects. The data bases were created incrementally over the course of more than 30 years as part of various on-going contracts and efforts. We also had a core group of very experienced personnel who were doing this.

Needless to say, if any part of the data base is given away, loaned out, or otherwise not protected, we lose control of the “proprietary” aspect of these data bases. This includes the programming and formatting. Right now, they are unique to The Dupuy Institute, and for obvious business reasons they need to remain so unless proper compensation is arranged.

Sorry.

 

P.S. The image used is from the old dBase IV version of the Kursk Data Base. We have since re-programmed it in Access.

 

War By Numbers Published

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017), 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat, he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

Table of Contents: War by Numbers

Preface (ix)
Acknowledgments (xi)
Abbreviations (xiii)

  1. Understanding War (1)
  2. Force Ratios (8)
  3. Attacker versus Defender (14)
  4. Human Factors (16)
  5. Measuring Human Factors in Combat: Italy 1943-1944 (19)
  6. Measuring Human Factors in Combat: Ardennes and Kursk (32)
  7. Measuring Human Factors in Combat: Modern Wars (49)
  8. Outcome of Battles (60)
  9. Exchange Ratios (72)
  10. The Combat Value of Superior Situational Awareness (79)
  11. The Combat Value of Surprise (121)
  12. The Nature of Lower Levels of Combat (146)
  13. The Effects of Dispersion on Combat (163)
  14. Advance Rates (174)
  15. Casualties (181)
  16. Urban Legends (206)
  17. The Use of Case Studies (265)
  18. Modeling Warfare (285)
  19. Validation of the TNDM (299)
  20. Conclusions (325)

Appendix I: Dupuy’s Timeless Verities of Combat (329)
Appendix II: Dupuy’s Combat Advance Rate Verities (335)
Appendix III: Dupuy’s Combat Attrition Verities (339)

Notes (345)
Bibliography (369)

 

The book is 374 pages plus 14 pages of front matter.

 

15 Books Received!!!

I just received my 15 author copies of War by Numbers. So it is now available for $39.95 from Potomac Books (University of Nebraska Press): War by Numbers

This means it should be available from Amazon.com next week: War by Numbers

I don’t know how quickly the foreign booksellers will receive them, but I expect them to have copies available in the next couple of weeks.

I did not order 200 copies for The Dupuy Institute to sell, as I did with America’s Modern Wars, so it will not be directly available from us: http://www.dupuyinstitute.org/booksfs.htm

[Figure not reproduced here: from page 175 of the book, Chapter 14: Advance Rates]

Aussie OR

Over the years I have run across a number of Australian Operations Research and Historical Analysis efforts. Overall, I have been impressed with what I have seen. Below is one of their papers, written by Nigel Perry, who is not otherwise known to me. It is dated December 2011: Applications of Historical Analyses in Combat Modeling

It does address the value of Lanchester equations in force-on-force combat models, which in my mind is already a settled argument (see: Lanchester Equations Have Been Weighed). His is the latest analysis that, as I gather, reinforces this point.

The author of this paper references the work of Robert Helmbold and Dean Hartley (see page 14). He does favorably reference the work of Trevor Dupuy but does not seem to be completely aware of its extent or full nature (pages 14, 16, 17, 24 and 53). He does not seem to be aware that the work of both Helmbold and Hartley was built from a database created by Trevor Dupuy’s companies, HERO & DMSI. Without Dupuy, Helmbold and Hartley would not have had data to work from.

Specifically, Helmbold was using the Chase database, which was programmed by the government from the original paper version provided by Dupuy. I think it consisted of 597-599 battles (working from memory here). A number of coding errors were introduced when it was programmed, and it did not include the battle narratives. Hartley had Oak Ridge National Laboratory purchase from Dupuy a computerized copy of what was by then called the Land Warfare Data Base (LWDB). It consisted of 603 or 605 engagements (and did not have the coding errors, but still did not include the narratives). As such, they both worked from almost the same database.

Dr. Perry takes a copy of Hartley’s database and expands it to create more engagements. He says he expanded it from 750 battles (although the database we sold to Hartley had 603 or 605 cases) to around 1,600. It was estimated in the 1980s by Curt Johnson (Director and VP of HERO) to take three man-days to create a battle. If this estimate is valid (actually I think it is low), then to get to 1,600 engagements, roughly 1,000 new ones or some 3,000 man-days of work, the Australian researchers either invested something like 10 man-years of research, or relied heavily on secondary sources without any systematic research, or only partly developed each engagement (for example, recording only who won and lost). I suspect the latter.

Dr. Perry shows on page 25:

Data-segment/Epoch   Start Year   End Year   Number of Battles   Attacker Victories   Defender Victories
Ancient                  -490       1598            63                  36                   27
17th Century             1600       1692            93                  67                   26
18th Century             1700       1798           147                 100                   47
Revolution               1792       1800           238                 168                   70
Empire                   1805       1815           327                 203                  124
ACW                      1861       1865           143                  75                   68
19th Century             1803       1905           126                  81                   45
WWI                      1914       1918           129                  83                   46
WWII                     1920       1945           233                 165                   68
Korea                    1950       1950            20                  20                    0
Post WWII                1950       2008           118                  86                   32

 

We, of course, did something very similar. We took the Land Warfare Data Base (the 605-engagement version), expanded it considerably with WWII and post-WWII data, proofed and revised a number of engagements using more primary-source data, divided it into levels of combat (army-level, division-level, battalion-level, company-level), and conducted analysis with the 1,280 or so engagements we had. This was a much more powerful and better organized tool. We also looked at winner and loser, but used the 605-engagement version (as we did that analysis in 1996). An example of this, from pages 16 and 17 of my manuscript for War by Numbers:

Attacker Won:

Period        Force Ratio >= 1-to-1   Force Ratio < 1-to-1   Percent of Attacker Wins
                                                             at Force Ratio >= 1-to-1
1600-1699              16                      18                      47%
1700-1799              25                      16                      61%
1800-1899              47                      17                      73%
1900-1920              69                      13                      84%
1937-1945             104                       8                      93%
1967-1973              17                      17                      50%
Total                 278                      89                      76%

Defender Won:

Period        Force Ratio >= 1-to-1   Force Ratio < 1-to-1   Percent of Defender Wins
                                                             at Force Ratio >= 1-to-1
1600-1699               7                       6                      54%
1700-1799              11                      13                      46%
1800-1899              38                      20                      66%
1900-1920              30                      13                      70%
1937-1945              33                      10                      77%
1967-1973              11                       5                      69%
Total                 130                      67                      66%
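The percentage column in both tables is evidently the share of wins that occurred at a force ratio of at least 1-to-1, which checks out against the printed counts. A minimal sketch of that computation (the function name is mine):

```python
def pct_wins_at_or_above_parity(wins_at_or_above: int, wins_below: int) -> float:
    """Share of wins occurring at a force ratio >= 1-to-1."""
    return 100 * wins_at_or_above / (wins_at_or_above + wins_below)

# Attacker wins, 1600-1699: 16 at >= 1-to-1 versus 18 below parity
print(f"{pct_wins_at_or_above_parity(16, 18):.0f}%")  # -> 47%
```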

 

Anyhow, from there (pages 26-59) the report heads into an extended discussion of the analysis done by Helmbold and Hartley (which I am not that enamored with). My book heads in a different direction: War by Numbers III (Table of Contents)

 

 

Osipov

Back in 1915, a Russian named M. Osipov published a paper in a Tsarist military journal that was Lanchester-like: http://www.dtic.mil/dtic/tr/fulltext/u2/a241534.pdf

He actually tested his equations against historical data, which he presents in his paper. He ended up with something similar to the Lanchester equations, except that instead of a square law he achieved a similar effect by raising force strengths to the 3/2 power.
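For reference, Lanchester’s “square law” couples two force strengths $x(t)$ and $y(t)$ through effectiveness coefficients $a$ and $b$:

$$\frac{dx}{dt} = -ay, \qquad \frac{dy}{dt} = -bx \quad\Longrightarrow\quad b\left(x_0^2 - x^2\right) = a\left(y_0^2 - y^2\right)$$

Osipov’s fit to historical data, as commonly summarized (this rendering is a paraphrase, not a transcription from his paper), replaces the exponent 2 with roughly 3/2:

$$b\left(x_0^{3/2} - x^{3/2}\right) \approx a\left(y_0^{3/2} - y^{3/2}\right)$$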

As far as we know, because of when it was published (June-October 1915), it was done without any awareness of the work of the far more famous Frederick Lanchester (who was famous for a lot more than just his modeling equations). Lanchester first published his work in the fall of 1914 (after the Great War had already started). It is possible that Osipov was aware of it, but he does not mention Lanchester and was probably not aware of his work. It appears to be a case of independently arriving at the use of differential equations to describe combat attrition. This was also the case with Rear Admiral J. V. Chase, who wrote a classified staff paper for the U.S. Navy in 1902 that was not revealed until 1972.

Osipov, after he had written his paper, may have served in World War I, which was already underway when it was published. Between the war, the Russian revolutions, the civil war that followed, and the subsequent repressions by the Cheka and later Stalin, we do not know what happened to M. Osipov. At the time, I was asked by CAA whether our Russian research team knew about him. I passed the question to Col. Sverdlov and Col. Vainer, and they were not aware of him. It is probably possible to chase him down, but it would take some effort. Perhaps some industrious researcher will find out more about him.

It does not appear that Osipov had any influence on Soviet operations research or military analysis. It appears that he was ignored or forgotten. His article was re-published in the September 1988 issue of the Soviet Military-Historical Journal with the propaganda-influenced statement that they also had their own “Lanchester.” Of course, this “Soviet Lanchester” had published in a Tsarist military journal, hardly a demonstration of the strength of the Soviet system.

 

Soviet OR

There was a sense among some in the Sovietology community in the late 1980s that Soviet Operations Research (OR) was particularly advanced. People had noticed the 300-man Soviet Military History Institute and the Soviet use of the quantified “Correlation of Forces and Means,” which they used in WWII and since. Trevor Dupuy referenced these in his writings. They had noticed a number of OR books by professors at the Frunze Military Academy. In particular, the book Tactical Calculations by Anatoli Vainer was being used by a number of Sovietologists in their works and presentations (including TNDA alumnus Col. John Sloan). There was a concern that the Soviet Union was conducting extensive quantitative analysis of its historical operations in World War II and using this to further improve its war-fighting capabilities.

This is sort of a case of trying to determine what is going on by looking at the shadows on a cave wall (Plato’s analogy). In October 1993, as part of the Kursk project, we met with our Russian research team headed by Dr. Fyodor Sverdlov (retired Colonel, Soviet WWII veteran, and former head of the Frunze Military Academy History Department). Sitting there as his right-hand man was Dr. Anatoli Vainer (also a retired Colonel, a Soviet WWII veteran, and a Frunze Military Academy professor).

We had a list of quantitative data that we needed for the Kursk Data Base (KDB). The database was to be used as a validation database for the various modeling efforts of the Center for Army Analysis (CAA). As such, we were trying to determine, for each unit for each day, the unit strength, losses, equipment lists, equipment losses, ammunition levels, ammunition expenditures, fuel levels, fuel expenditures, and so forth. They were stunned. They said that they did not have models like that. We were kind of surprised at that response.

Over the course of several days I got to know these two gentlemen, went swimming with Col. Sverdlov, and had dinner over at Col. Vainer’s house. I got to see his personal library and the various books he wrote. I talked to him as much as I sensitively could about Soviet OR, and they were pretty adamant that there really wasn’t anything significant occurring. Vainer told me that his primary source of material for his books was American writings on operations research. So it appeared that we had completed a loop: the Soviets were writing OR books based upon our material, and we were reading them and thinking they had a well-developed OR structure.

Their historical research was also still primarily based upon one-sided data. They simply were not allowed to access the German archives, and regardless, they knew that they should not be publishing Soviet casualty figures or any negative comparisons. Col. Sverdlov, who had been in the war since Moscow in 1941, was well aware of the Soviet losses, and had some sense that the German losses were lower, but this they could not publish [Sverdlov: “I was at Prokhorovka after the war, and I didn’t see 100 Tigers there”]. So they were hardly able to conduct historical analysis freely and in an unbiased manner.

In the end, at this time, they had not developed the analytical tools or capability to fully explore their own military history or to conduct operations research.