When Shawn Woodford sent me that article on the Captured Iraqi Records, it reminded me of a half-dozen examples I had dealt with over the years. This will keep me blogging for a week or more. Let me start a hundred years ago, with World War I.
The United States signed a separate treaty with Germany after the end of World War I. We were not part of the much maligned Versailles Treaty (although we had to come back to Europe to help clean up the mess they made of the peace). As part of that agreement, we required access to the German military archives.
We used this access well. We put together a research team that included Lt. Colonel Walter Krueger. The United States plopped this team of researchers in Germany for a while and carefully copied all the records of German units that the United States faced. This really covered only the last year of a war that lasted over four years (28 July 1914 – 11 November 1918). Krueger was in Germany with the team in 1922. This was a pretty significant effort back in the days before Xerox machines, microfilm, scanners, and other such tools.
These records were later transferred to the U.S. National Archives. So one can access those records now and will find the records of the U.S. units involved and the German units involved (much of it translated), along with maps and descriptions of the fighting. It is all nicely assembled for researchers, a very meticulous and well-done collection.
Just to add to the importance of this record collection, the German archives in Potsdam were bombed by the RAF on the night of 14/15 April 1945, destroying most of their World War I records. So, one cannot now go back and look up these German records. The only surviving primary source combat records for many German units from World War I are the records in the U.S. Archives, copied (translated and typed from the originals) by the U.S. researchers.
Lt. Colonel Krueger was fluent in German because of his family background (he was born in Prussia). During World War II, he rose to be a full general (four-star general) in command of the Sixth Army in the Pacific: https://en.wikipedia.org/wiki/Walter_Krueger
This effort sets that standard almost a hundred years ago of what could/should be done with captured records. A later post will discuss the World War II effort.
The update is the result of the initial round of work between the U.S. Army and U.S. Air Force to redefine the scope of the multi-domain battlespace for the Joint Force. More work will be needed to refine the concept, but it shows remarkable cooperation in forging a common warfighting perspective between services long noted for their independent thinking.
What difference can it make if those designing Multi-Domain Battle are acting on possibly the wrong threat diagnosis? Designing a solution for a misdiagnosed problem can result in the inculcation of a way of war unsuited for the wars of the future. One is reminded of the French Army during the interwar period. No one can accuse the French of not thinking seriously about war during these years, but, in the doctrine of the methodical battle, they got it wrong and misread the opportunities presented by mechanisation. There were many factors contributing to France’s defeat, but at their core was a misinterpretation of the art of the possible and a singular focus on a particular way of war. Shaping Multi-Domain Battle for the wrong problem may see the United States similarly sow the seeds for a military disaster that is avoidable.
He suggests that it would be wise for U.S. doctrine writers to take a more considered look at potential implications before venturing too far ahead with specific solutions.
Trevor Dupuy distilled his research and analysis on combat into a series of verities, or what he believed were empirically-derived principles. He intended for his verities to complement the classic principles of war, a slightly variable list of maxims of unknown derivation and provenance, which describe the essence of warfare largely from the perspective of Western societies. These are summarized below.
Soldiers from Britain’s Royal Artillery train in a “virtual world” during Exercise Steel Sabre, 2015 [Sgt Si Longworth RLC (Phot)/MOD]
Military History and Validation of Combat Models
A Presentation at MORS Mini-Symposium on Validation, 16 Oct 1990
By Trevor N. Dupuy
In the operations research community there is some confusion as to the respective meanings of the words “validation” and “verification.” My definition of validation is as follows:
“To confirm or prove that the output or outputs of a model are consistent with the real-world functioning or operation of the process, procedure, or activity which the model is intended to represent or replicate.”
In this paper the word “validation” with respect to combat models is assumed to mean assurance that a model realistically and reliably represents the real world of combat. Or, in other words, given a set of inputs which reflect the anticipated forces and weapons in a combat encounter between two opponents under a given set of circumstances, the model is validated if we can demonstrate that its outputs are likely to represent what would actually happen in a real-world encounter between these forces under those circumstances.
Thus, in this paper, the word “validation” has nothing to do with the correctness of computer code, or the apparent internal consistency or logic of relationships of model components, or with the soundness of the mathematical relationships or algorithms, or with satisfying the military judgment or experience of one individual.
True validation of combat models is not possible without testing them against modern historical combat experience. And so, in my opinion, a model is validated only when it will consistently replicate a number of military history battle outcomes in terms of: (a) Success-failure; (b) Attrition rates; and (c) Advance rates.
“Why,” you may ask, “use imprecise, doubtful, and outdated history to validate a modern, scientific process? Field tests, experiments, and field exercises can provide data that is often instrumented, and certainly more reliable than any historical data.”
I recognize that military history is imprecise; it is only an approximate, often biased and/or distorted, and frequently inconsistent reflection of what actually happened on historical battlefields. Records are contradictory. I also recognize that there is an element of chance or randomness in human combat which can produce different results in otherwise apparently identical circumstances. I further recognize that history is retrospective, telling us only what has happened in the past. It cannot predict, if only because combat in the future will be fought with different weapons and equipment than were used in historical combat.
Despite these undoubted problems, military history provides more, and more accurate, information about the real world of combat, and how human beings behave and perform under varying circumstances of combat, than is possible to derive or compile from any other source. Despite some discrepancies, patterns are unmistakable and consistent. There is always a logical explanation for any individual deviations from the patterns. Historical examples that are inconsistent, or that are counter-intuitive, must be viewed with suspicion as possibly being poor or false history.
Of course absolute prediction of a future event is practically impossible, although not necessarily so theoretically. Any speculations which we make from tests or experiments must have some basis in terms of projections from past experience.
Training or demonstration exercises, proving ground tests, field experiments, all lack the one most pervasive and most important component of combat: Fear in a lethal environment. There is no way in peacetime, or in non-battlefield exercises, tests, or experiments, to be sure that the results are consistent with what would have been the behavior or performance of individuals or units or formations facing hostile firepower on a real battlefield.
We know from the writings of the ancients (for instance Sun Tze—pronounced Sun Dzuh—and Thucydides) that have survived to this day that human nature has not changed since the dawn of history. The human factor, the way in which humans respond to stimuli or circumstances, is the most important basis for speculation and prediction. What about the “scientific” approach of those who insist that we can have no confidence in the accuracy or reliability of historical data, that it is therefore unscientific, and therefore that it should be ignored? These people insist that only “scientific” data should be used in modeling.
In fact, every model is based upon fundamental assumptions that are intuitive and unprovable. The first step in the creation of a model is a step away from scientific reality in seeking a basis for an unreal representation of a real phenomenon. I have shown that the unreality is perpetuated when we use other imitations of reality as the basis for representing reality. History is less than perfect, but to ignore it, and to use only data that is bound to be wrong, assures that we will not be able to represent human behavior in real combat.
At the risk of repetition, and even of protesting too much, let me assure you that I am well aware of the shortcomings of military history:
The record which is available to us, which is history, only approximately reflects what actually happened. It is incomplete. It is often biased, it is often distorted. Even when it is accurate, it may be reflecting chance rather than normal processes. It is neither precise nor consistent. But it provides more, and more accurate, information on the real world of battle than is available from the most thoroughly documented field exercises, proving ground tests, or laboratory or field experiments.
Military history is imperfect. At best it reflects the actions and interactions of unpredictable human beings. We must always realize that a single historical example can be misleading for either of two reasons: (1) The data may be inaccurate, or (2) The data may be accurate, but untypical.
Nevertheless, history is indispensable. I repeat that the most pervasive characteristic of combat is fear in a lethal environment. For all of its imperfections, military history and only military history represents what happens under the environmental condition of fear.
Unfortunately, and somewhat unfairly, the reported findings of S.L.A. Marshall about human behavior in combat, which he reported in Men Against Fire, have been recently discounted by revisionist historians who assert that he never could have physically performed the research on which the book’s findings were supposedly based. This has raised doubts about Marshall’s assertion that 85% of infantry soldiers didn’t fire their weapons in combat in World War II. That dramatic and surprising assertion was first challenged in a New Zealand study which found, on the basis of painstaking interviews, that most New Zealanders fired their weapons in combat. Thus, either Americans were different from New Zealanders, or Marshall was wrong. And now American historians have demonstrated that Marshall had neither the time nor the opportunity to conduct the battlefield interviews which he claimed were the basis for his findings.
I knew Marshall moderately well. I was fully as aware of his weaknesses as of his strengths. He was not a historian. I deplored the imprecision and lack of documentation in Men Against Fire. But the revisionist historians have underestimated the shrewd journalistic assessment capability of “SLAM” Marshall. His observations may not have been scientifically precise, but they were generally sound, and his assessment has been shared by many American infantry officers whose judgments I also respect. As to the New Zealand study, how many people will, after the war, admit that they didn’t fire their weapons?
Perhaps most important, however, in judging the assessments of SLAM Marshall, is a recent study by a highly-respected British operations research analyst, David Rowland. Using impeccable OR methods Rowland has demonstrated that Marshall’s assessment of the inefficient performance, or non-performance, of most soldiers in combat was essentially correct. An unclassified version of Rowland’s study, “Assessments of Combat Degradation,” appeared in the June 1986 issue of the Royal United Services Institution Journal.
Rowland was led to his investigations by the fact that soldier performance in field training exercises, using the British version of MILES technology, was not consistent with historical experience. Even after allowances for degradation from the theoretical proving ground capability of weapons, defensive rifle fire almost invariably stopped any attack in these field trials. But history showed that attacks were, in fact, often successful. He therefore began a study in which he made both imaginative and scientific use of historical data from over 100 small unit battles in the Boer War and the two World Wars. He demonstrated that when troops are under fire in actual combat, there is an additional degradation of performance by a factor ranging between 7 and 10. A degradation of virtually an order of magnitude! And this, mind you, on top of a comparable built-in degradation to allow for the difference between field conditions and proving ground conditions.
Not only does Rowland’s study corroborate SLAM Marshall’s observations, it also showed conclusively that field exercises, training competitions, and demonstrations give results so different from real battlefield performance as to render them useless for validation purposes.
Which brings us back to military history. For all of the imprecision, internal contradictions, and inaccuracies inherent in historical data, at worst the deviations are generally far less than a factor of 2.0. This is at least four times more reliable than field test or exercise results.
I do not believe that history can ever repeat itself. The conditions of an event at one time can never be precisely duplicated later. But, bolstered by the Rowland study, I am confident that history paraphrases itself.
If large bodies of historical data are compiled, the patterns are clear and unmistakable, even if slightly fuzzy around the edges. Behavior in accordance with this pattern is therefore typical. As we have already agreed, sometimes behavior can be different from the pattern, but we know that it is untypical, and we can then seek the reason, which invariably can be discovered.
This permits what I call an actuarial approach to data analysis. We can never predict precisely what will happen under any circumstances. But the actuarial approach, with ample data, provides confidence that the patterns reveal what is likely to happen under those circumstances, even if the actual results in individual instances vary to some extent from this “norm” (to use the Soviet military historical expression).
It is relatively easy to take into account the differences in performance resulting from new weapons and equipment. The characteristics of the historical weapons and the current (or projected) weapons can be readily compared, and adjustments made accordingly in the validation procedure.
In the early 1960s an effort was made at SHAPE Headquarters to test the ATLAS Model against World War II data for the German invasion of Western Europe in May 1940. The first excursion had the Allies ending up on the Rhine River. This was apparently quite reasonable: the Allies substantially outnumbered the Germans, they had more tanks, and their tanks were better. However, despite these Allied advantages, the actual events in 1940 had not matched what ATLAS was now predicting. So the analysts did a little “fine tuning” (a splendid term for fudging). After the so-called adjustments, they tried again, and ran another excursion. This time the model had the Allies ending up in Berlin. The analysts (may the Lord forgive them!) were quite satisfied with the ability of ATLAS to represent modern combat. (Or at least they said so.) Their official conclusion was that the historical example was worthless, since weapons and equipment had changed so much in the preceding 20 years!
As I demonstrated in my book, Options of Command, the problem was that the model was unable to represent the German strategy, or to reflect the relative combat effectiveness of the opponents. The analysts should have reached a different conclusion. ATLAS had failed validation because a model that cannot with reasonable faithfulness and consistency replicate historical combat experience, certainly will be unable validly to reflect current or future combat.
How, then, do we account for what I have said about the fuzziness of patterns, and the fact that individual historical examples may not fit the patterns? I will give you my rules of thumb:
The battle outcome should reflect historical success-failure experience about four times out of five.
For attrition rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
For the advance rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
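These three rules of thumb lend themselves to a mechanical check. The sketch below shows how such a comparison might be scripted; the model and historical figures for the five scenarios are made up purely for illustration, not drawn from any actual validation exercise:

```python
def within_factor(model_avg, hist_avg, factor=1.5):
    """True if the two averages agree within the given multiplicative factor."""
    lo, hi = hist_avg / factor, hist_avg * factor
    return lo <= model_avg <= hi

def validate(outcome_matches, model_attrition, hist_attrition,
             model_advance, hist_advance):
    """Apply the three rules of thumb to a set of historical scenarios.

    outcome_matches: list of booleans, True where the model reproduced
        the historical success/failure outcome.
    The remaining arguments are per-scenario attrition and advance rates.
    """
    avg = lambda xs: sum(xs) / len(xs)
    return {
        # (a) outcomes should match about four times out of five
        "outcomes": sum(outcome_matches) / len(outcome_matches) >= 0.8,
        # (b) average attrition within a factor of about 1.5 of history
        "attrition": within_factor(avg(model_attrition), avg(hist_attrition)),
        # (c) average advance rate within a factor of about 1.5 of history
        "advance": within_factor(avg(model_advance), avg(hist_advance)),
    }

# Hypothetical five-scenario comparison:
result = validate(
    outcome_matches=[True, True, True, False, True],
    model_attrition=[2.1, 1.5, 3.0, 0.8, 1.2],   # % casualties per day
    hist_attrition=[1.8, 1.4, 2.5, 1.0, 1.1],
    model_advance=[4.0, 2.0, 6.5, 1.0, 3.0],     # km per day
    hist_advance=[5.0, 2.5, 5.0, 1.5, 2.8],
)
print(result)  # all three criteria pass for these particular numbers
```

Note that the test is applied to the averages across the scenario set, not to each engagement individually, which is consistent with the actuarial approach described above.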
Just as the heavens are the laboratory of the astronomer, so military history is the laboratory of the soldier and the military operations research analyst. The scientific basis for both astronomy and military science is the recording of the movements and relationships of bodies, and then analysis of those movements. (In the one case the bodies are heavenly, in the other they are very terrestrial.)
I repeat: Military history is the laboratory of the soldier. Failure of the analyst to use this laboratory will doom him to live with the scientific equivalent of Ptolemaic astronomy, whereas he could use the evidence available in his laboratory to progress to the military science equivalent of Copernican astronomy.
Lt. General H. R. McMaster, the U.S. National Security Advisor, wrote a doctoral dissertation on Vietnam that was published in 1997 as Dereliction of Duty: Lyndon Johnson, the Joint Chiefs of Staff and the Lies That Led to Vietnam. Ronald Spector, a former Marine, Vietnam vet, and historian, just published this interesting article: What McMaster Gets Wrong About Vietnam
What caught my interest was the discussion by Spector, very brief, that the Vietnamese had something to do with the Vietnam war. Not an earthshaking statement, but certainly a deserved poke at the more American-centric view of the war.
In my book, America’s Modern Wars, I do have a chapter called “The Other Side” (Chapter 18). As I note in the intro to that chapter (page 224):
Warfare is always a struggle between at least two sides. Yet, the theoretical study of insurgencies always seems to be written primarily from the standpoint of one side, the counterinsurgents. We therefore briefly looked at what the other side was saying to see if there were any theoretical constructs that were proposed or supported by them. They obviously knew as much about insurgencies as the counterinsurgents.
We then examined the writings and interview transcripts of eight practitioners of insurgency and ended up trying to summarize their thoughts in one barely “easy-to-read” table (pages 228-229), the same as we did for ten counterinsurgent theorists (pages 187-201). The conclusion to this discussion was (pages 235-236):
The review of the insurgents shows an entirely different focus as to what is important in an insurgency than one gets from reading the “classical” counterinsurgent theorists. In the end, the insurgent is primarily focused on the cause. The military aspects of the insurgency seem to be secondary concerns… On the other hand, the majority of the insurgents we reviewed actually won or managed a favorable result from their war in the long run (this certainly applies to Grivas and Itote). Perhaps their focus on the political cause, with the military aspects secondary, is an indication of the correct priorities.
I do have a chapter on Vietnam in the book also (Chapter 22).
It consists of a Romanian brigade of up to 4,000 soldiers, troops from nine other NATO countries (including Poland, Bulgaria, Italy, Portugal, Germany, Britain, Canada). In addition, there is a separate deployment of 900 U.S. troops in the area.
During the cold war, there was only one NATO member on the Black Sea, Turkey, but there were three Warsaw Pact members (Soviet Union, Romania and Bulgaria). Now there are three NATO members (Turkey, Romania and Bulgaria), several countries who have a Russian-supported separatist enclave or two in them (Ukraine, Georgia, Moldova) and, of course, Russia. It has become an interesting area.
The fundamental building blocks of history are primary sources, i.e., artifacts, documents, diaries and memoirs, manuscripts, or other contemporaneous sources of information. It has been the availability and accessibility of primary source documentation that allowed Trevor Dupuy and The Dupuy Institute to build the large historical combat databases that much of their analyses have drawn upon. It took uncounted man-hours of time-consuming, painstaking research to collect and assemble two-sided data sufficiently detailed to analyze the complex phenomena of combat.
Going back to the Civil War, the United States has done a commendable job collecting and organizing captured military documentation and making that material available for historians, scholars, and professional military educators. TDI has made extensive use of captured German documentation from World War I and World War II held by the U.S. National Archives in its research, for example.
The documents date from 1978 up until Operation Desert Storm (1991). The collection includes Iraq operations plans and orders; maps and overlays; unit rosters (including photographs); manuals covering tactics, camouflage, equipment, and doctrine; equipment maintenance logs; ammunition inventories; unit punishment records; unit pay and leave records; handling of prisoners of war; detainee lists; lists of captured vehicles; and other military records. The collection also includes some manuals of foreign, non-Iraqi weapons systems. Some of Saddam Hussein’s Revolutionary Command Council records are in the captured material.
According to Cox, DIA began making digital copies of the documents shortly after the Gulf War ended. After the State Department requested copies, DIA subsequently determined that only 60% of the digital tapes the original scans had been stored on could be read. It was during an effort to rescan the lost 40% of the documents that it was discovered that the entire paper collection had been contaminated by mold.
DIA created a library of the scanned documents stored on 43 compact discs, which remain classified. It is not clear if DIA still has all of the CDs; none had been transferred to the National Archives as of 2012. A set of 725,000 declassified pages was made available for a research effort at Harvard in 2000. That effort ended, however, and the declassified collection was sent to the Hoover Institution at Stanford University. The collection is closed to researchers, although Hoover has indicated it hopes to make it publicly available sometime in the future.
While the failure to preserve the original paper documents is bad enough, the possibility that any or all of the DIA’s digital collection might be permanently lost would constitute a grievous and baffling blunder. It also makes little sense for this collection to remain classified a quarter of a century after the end of the Gulf War. Yet, it appears that failures to adequately collect and preserve U.S. military documents and records are becoming more common in the Information Age.
American troops advance under the cover of M4 Sherman tank ‘Lucky Legs II’ during mop up operations on Bougainville, Solomon Islands, March 1944. [National Archives/ww2dbase]
If you have kids, the conversations sometimes wander into strange areas. I was told yesterday that the U.S. defense budget was 54% of the U.S. budget. I said that was not right, even though Siri was telling him otherwise.
It turns out that in 2015 the U.S. defense budget was 54% of U.S. discretionary spending, according to Wikipedia. This is a significant distinction. In 2015 the U.S. defense budget was $598 billion, while the U.S. federal budget was $3.688 trillion actual (compared to $3.9 trillion requested). So the defense budget was actually about 16% of the total U.S. budget. As always, one has to read carefully.
Just to complete the math, the U.S. GDP in 2015 was $18.037 trillion (United Nations figures). So, the federal budget is 20% of GDP (or 22% if the requested budget figure is used) and the defense budget is 3.3% of GDP.
The latest figures are $583 billion for the U.S. defense budget (requested for 2017); $3.854 trillion in estimated expenditures for the U.S. federal budget for 2016, with $4.2 trillion requested for 2017; and $18.56 trillion for U.S. GDP (2016), with $19.3 trillion preliminary for 2017.
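As a sanity check, the 2015 arithmetic above can be reproduced with a short script (all figures in trillions of dollars, as quoted in the post):

```python
# Reproduce the 2015 budget percentages discussed above.
# Figures (in trillions of dollars) are as cited in the post.
defense_2015 = 0.598            # U.S. defense budget, 2015
federal_actual_2015 = 3.688     # U.S. federal budget, actual
federal_requested_2015 = 3.9    # U.S. federal budget, requested
gdp_2015 = 18.037               # U.S. GDP (UN figures)

def pct(part, whole):
    """Return part/whole as a percentage, rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(defense_2015, federal_actual_2015))    # defense share of federal budget: 16.2
print(pct(federal_actual_2015, gdp_2015))        # federal budget share of GDP: 20.4
print(pct(federal_requested_2015, gdp_2015))     # requested budget share of GDP: 21.6
print(pct(defense_2015, gdp_2015))               # defense share of GDP: 3.3
```

The 54% figure only appears if the denominator is discretionary spending rather than the whole federal budget, which is exactly the distinction the post makes.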
An Israeli tank unit crosses the Sinai, heading for the Suez Canal, during the 1973 Arab-Israeli War [Israeli Government Press Office/HistoryNet]
It has been noted throughout the history of human conflict that some armies have consistently fought more effectively on the battlefield than others. The armies of Sparta in ancient Greece, for example, have come to epitomize the warrior ideal in Western societies. Rome’s legions have acquired a similar legendary reputation. Within armies too, some units are known to be better combatants than others. The U.S. 1st Infantry Division, the British Expeditionary Force of 1914, Japan’s Special Naval Landing Forces, the U.S. Marine Corps, the German 7th Panzer Division, and the Soviet Guards divisions are among the many superior fighting forces from history.
Trevor Dupuy found empirical substantiation of this in his analysis of historical combat data. He discovered that in 1943-1944 during World War II, after accounting for environmental and operational factors, the German Army consistently performed more effectively in ground combat than the U.S. and British armies. This advantage—measured in terms of casualty exchanges, terrain held or lost, and mission accomplishment—manifested whether the Germans were attacking or defending, or winning or losing. Dupuy observed that the Germans demonstrated an even more marked effectiveness in battle against the Soviet Army throughout the war.
He found the same disparity in battlefield effectiveness in combat data on the 1967 and 1973 Arab-Israeli wars. The Israeli Army performed uniformly better in ground combat than all of the Arab armies it faced in both conflicts, regardless of posture or outcome.
The clear and consistent patterns in the historical data led Dupuy to conclude that superior combat effectiveness on the battlefield was attributable to moral and behavioral (i.e., human) factors. The factors he believed were the most important contributors to combat effectiveness were:
Leadership
Training or Experience
Morale, which may or may not include
Cohesion
Although the influence of human factors on combat effectiveness was identifiable and measurable in the aggregate, Dupuy was skeptical whether all of the individual moral and behavioral intangibles could be discretely quantified. He thought this particularly true for a set of factors that also contributed to combat effectiveness, but were a blend of human and operational factors. These include:
Logistical effectiveness
Time and Space
Momentum
Technical Command, Control, Communications
Intelligence
Initiative
Chance
Dupuy grouped all of these intangibles together into a composite factor he designated as relative combat effectiveness value, or CEV. The CEV and the environmental and operational factors (Vf) together comprise the Circumstantial Variables of Combat, which, when multiplied by force strength (S), determine the combat power (P) of a military force in Dupuy’s formulation.
P = S x Vf x CEV
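As a minimal illustration of the formulation (with entirely hypothetical strength and variable values, chosen only to show how the terms interact), the relationship can be sketched as:

```python
def combat_power(strength, vf, cev):
    """Dupuy's combat power formulation: P = S x Vf x CEV.

    strength: force strength S (an aggregate measure of troops and weapons)
    vf: composite of environmental and operational variables
    cev: relative combat effectiveness value (the human factors composite)
    """
    return strength * vf * cev

# Hypothetical example: a smaller force with a CEV advantage can generate
# more combat power than a larger but less effective opponent.
p_attacker = combat_power(strength=80_000, vf=0.9, cev=1.2)
p_defender = combat_power(strength=100_000, vf=1.0, cev=0.8)
assert p_attacker > p_defender
```

This multiplicative structure is the point: because CEV scales the whole product, a consistent effectiveness edge compounds with, rather than merely adds to, numerical and circumstantial advantages.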
Dupuy did not believe that CEVs were static values. As with human behavior, they vary somewhat from engagement to engagement. He did think that human factors were the most substantial of the combat variables. Therefore, any model or theory of combat that failed to account for them would invariably be inaccurate.