I will be giving two presentations at the October meeting of The Military Conflict Institute (TMCI) and Shawn will be making one presentation there.
On Monday, 3 October, I will be doing a presentation on my book War by Numbers: Understanding Conventional Combat, which is going to be published in June/August 2017. This presentation will describe the book. In addition, I will be discussing four or five other book projects that are ongoing or that I am considering.
The same day I will be making a presentation called “Data for Wargames.” This was a course developed for a USMC White Team for a wargaming exercise.
On Tuesday Shawn Woodford will be presenting “Studying Combat: Where to Go from Here.” As he describes it:
Studying Combat: Where To Go From Here?
With Deputy Under Secretary of Defense Robert Work’s recent call for a revitalized war gaming effort to support development of a new national military strategy, it is worth taking stock of the present state of empirical research on combat. I propose to briefly survey work on the subject across relevant fields to get a sense of how much progress has been made since TMCI published The Concise Theory of Combat in 1997. This is intended to frame a discussion of where the next steps should be taken and possibilities for promoting work on this subject in the defense and academic communities.
The Military Conflict Institute (the website has not been recently updated) will hold its 58th General Working Meeting from 3-5 October 2016, hosted by the Institute for Defense Analyses in Alexandria, Virginia. It will feature discussions and presentations focused on war termination in likely areas of conflict in the near future, such as Egypt, Turkey, North Korea, Iran, Saudi Arabia, Kurdistan, and Israel. There will also be presentations on related and general military topics.
TMCI was founded in 1979 by Dr. Donald S. Marshall and Trevor Dupuy. They were concerned by the inability of existing Defense Department combat models to produce results that were consistent or rooted in historical experience. The organization is a non-profit, interdisciplinary, informal group that avoids government or institutional affiliation in order to maintain an independent perspective and voice. Its objective is to advance public understanding of organized warfare in all its aspects. Most of the initial members were drawn from the ranks of operations analysts experienced in quantitative historical study and military operations research, but it has grown to include a diverse group of scholars, historians, students of war, soldiers, sailors, marines, airmen, and scientists. Member disciplines range from military science to diplomacy and philosophy.
For agenda information, contact Roger Mickelson TMCI6@aol.com. For joining instructions, contact Rosser Bobbitt rbobbitt@ida.org. Attendance is subject to approval.
I have not posted book reviews to this site, and do not really plan to in the future. But, there was a book review of America’s Modern Wars in the Military Review by Brig. Gen. John C. Hanley, with whom I am not familiar. The review ended with a paragraph that I thought was meaningful. He said:
Lawrence’s book shows that reliable outcome estimates are determined through quantitative reasoning. Being able to anticipate the outcomes of any military operation, through reliable means, can greatly assist in strategic and operational level leaders’ decision-making processes. These results are what the book brings to light for military leaders and their staffs. Staff members who develop course-of-action recommendations can use the techniques described by Lawrence to provide quality analysis. Commanders will have the confidence from their staff estimates to choose the best courses of action for future military operations. Logically estimating the outcomes of future military operations, as the author writes, is what U.S. citizens should expect and demand from their leaders who take this country to war.
Now, they do choose the headlines, and sometimes that gives a different feel to the article. So for example, one of my blog posts was titled “Russian Revolutions.” The exact same article on HNN is titled “Are Russians Really Long-Suffering.” This apparently got a couple of people up in arms because the article did not talk about all the famines and oppression in Russia and the Soviet Union. It did not, because it was about revolutions, and in particular was about revolutions that succeeded. The famines in the 1890s, 1920s, and 1930s did not directly lead to a successful revolution (a point that I think is pretty significant).
The article “Did I Just Write…” is actually a shorter version of an article I posted on the Aberdeen Bookstore website: Long version of “Did I Just Write…” Part of the reason that I wrote that article was to see if someone would come out of the woodwork and post that there was a larger book published (usually these postings start with something like “the author is an idiot because….”). I did not get that for this article. This does sort of confirm my suspicion that this is indeed the largest single-volume history book ever written (no disrespect intended for the 11 volumes done by the Durants…which were four million words and 10,000 pages). I wonder if this is something I should submit to the Guinness Book of World Records? Will I get free beer for that?
Just to reinforce Shawn Woodford’s point below, let me quote from Chapter Twenty-Four, pages 294-295, of my book America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam:
Many years ago, I had the pleasure of having a series of meetings with Professor Ivo Feierabend. I was taking a graduate course in Econometrics at San Diego State University (SDSU). I decided that for my class paper, I would do something on the causes of revolution. The two leading efforts on this, both done in the 1960s, were by Ted Gurr and the husband-and-wife team of Feierabend and Feierabend. I reviewed their work, and for a variety of reasons, got interested in the measurements and analyses done by the Feierabends, vice the better-known work by Ted Gurr. This eventually led me to Dr. Feierabend, who, much to my surprise, still happened to be at San Diego State University. This was some 20 years after he had done what I consider to be ground-breaking work on revolutions. I looked him up and had several useful and productive meetings with him.
In the 1960s, he had an entire team doing this work. Several professors were involved, and he had a large number of graduate students coding events of political violence. In addition, he had access to mainframe computers, offices, etc. The entire effort was shut down in the 1960s, and he had not done anything further on this in almost 20 years. I eventually asked him why he didn’t continue his work. His answer, short and succinct was, “I had no budget.”
This was a difficult answer for a college student to understand. But, it is entirely understood by me now. To do these types of analytical projects requires staff, resources, facilities, etc. They cannot be done by one person, and even if they could, that one person usually needs a paycheck. So, the only way one could conduct one of these large analytical projects is to be funded. In the case of the Feierabends, that funding came from the government, as did ours. Their funding ended after a few years, as has ours. Their work could be described as a good start, but there was so much more that needed to be done. Intellectually, one is mystified why someone would not make sure that this work was continued. Yet, in the cases of Ted Gurr and the Feierabends, it was not.
The problem lies in that the government (or at least the parts that I dealt with) sometimes has the attention span of a two-year-old. Not only that, it also has the need for instant gratification, very much like a two-year-old. Practically, what that means is that projects that can answer an immediate question get funding (like the Bosnia and Iraq casualty estimates). Larger research efforts that will produce an answer or a product in two to three years can also get funding. On the other hand, projects that produce a preliminary answer in two to three years and then need several more years of funding to refine, check, correct and develop that work, tend to die. This has happened repeatedly. The analytical community is littered with many clever, well thought-out reports that look to be good starts. What is missing is a complete body of analysis on a subject.
The New York Times published a very interesting article addressing the inability of government-sponsored scholars and researchers to provide policymakers with an analytical basis for identifying potential terrorists. For anyone who has worked with U.S. government patrons on basic research, much of this will sound familiar.
“After all this funding and this flurry of publications, with each new terrorist incident we realize that we are no closer to answering our original question about what leads people to turn to political violence,” Marc Sageman, a psychologist and a longtime government consultant, wrote in the journal Terrorism and Political Violence in 2014. “The same worn-out questions are raised over and over again, and we still have no compelling answers.”
Ample government resourcing and plenty of research attention appear to yield little in the way of advanced knowledge and insight. Why is this? For some, the way the government responds to research findings is the problem.
When researchers do come up with possible answers, the government often disregards them. Not long after the attacks of Sept. 11, 2001, for instance, Alan B. Krueger, the Princeton economist, tested the widespread assumption that poverty was a key factor in the making of a terrorist. Mr. Krueger’s analysis of economic figures, polls, and data on suicide bombers and hate groups found no link between economic distress and terrorism.
More than a decade later, law enforcement officials and government-funded community groups still regard money problems as an indicator of radicalization.
There is also the demand for simple, definitive answers to immediately pressing questions (also known as The Church of What’s Happening Now).
Researchers, too, say they have been frustrated by both the Bush and Obama administrations because of what they say is a preoccupation with research that can be distilled into simple checklists… “They want to be able to do things right now,” said Clark R. McCauley Jr., a professor of psychology at Bryn Mawr College who has conducted government-funded terrorism research for years. “Anybody who offers them something right now, like to go around with a checklist — right now — is going to have their attention.
“It’s demand driven,” he continued. “The people with guns and badges are so eager to have something. The fact that they could actually do harm? This doesn’t deter them.”
There is also the problem of research that leads to conclusions that are at odds with the prevailing political sentiment or run contrary to institutional interests.
Mr. McCauley said many of his colleagues and peers conducted smart research and drew narrow conclusions. The problem, he said, is that studies get the most attention when they suggest warning signs. Research linking terrorism to American policies, meanwhile, is ignored.
However, the more honest researchers also admit that their inability to develop effective modes of inquiry into what are certainly complicated problems plays a role as well.
In 2005, Jeff Victoroff, a University of Southern California psychologist, concluded that the leading terrorism research was mostly just political theory and anecdotes. “A lack of systematic scholarly investigation has left policy makers to design counterterrorism strategies without the benefit of facts,” he wrote in The Journal of Conflict Resolution.
This state of affairs would be problematic enough considering it has been a decade-and-a-half since the events of 11 September 2001 made understanding political violence a national imperative. But it is even more perplexing given that the U.S. government began sponsoring basic research on this topic in the 1950s and 60s. The pioneering work of scholars Ted Gurr and Ivo and Rosalind Feierabend started with U.S. government funding. Gurr published his seminal work Why Men Rebel in 1970. Nearly a half century later, why are we still asking the same questions?
— Preface 6
One Understanding War 8
Two Force Ratios 15
Three Attacker versus Defender 22
Four Human Factors 24
Five Measuring Human Factors in Combat: Italy 27
Six Measuring Human Factors in Combat: Ardennes & Kursk 40
Seven Measuring Human Factors in Combat: Modern Wars 55
Eight Outcome of Battles 67
Nine Exchange Ratios 75
Ten The Combat Value of Superior Situational Awareness 83
Eleven The Combat Value of Surprise 113
Twelve The Nature of Lower Level Combat 135
Thirteen The Effects of Dispersion on Combat 150
Fourteen Advance Rates 164
Fifteen Casualties 171
Sixteen Urban Legends 197
Seventeen The Use of Case Studies 248
Eighteen Modeling Warfare 270
Nineteen Validation of the TNDM 286
Twenty Conclusions 313
Page numbers are based upon the manuscript and will certainly change. The book is 342 pages and 121,095 words. Definitely a lot shorter than the Kursk book.
What is it about (these two paragraphs are from my proposal):
War by Numbers looks at the basic nature of conventional warfare based upon extensive analysis of historical combat. Never passé, conventional combat capability has been a feature of the current growth of Islamic State in Iraq and the Levant (ISIL) and has returned as a threat in Eastern Europe. This book attempts to establish what we actually know about conventional combat and why we know it. It also provides an indication of how much impact various factors have on combat. It is the next step in analysis of combat that goes one step beyond what was addressed by theorists like Clausewitz.
It is the nature of the scientific process that hypotheses and theories need to be tested and challenged. In a sense, we are attempting to add that rigor to a field that often does not operate with such rigor. In a profession where errors in judgment can result in the loss of lives, a rigorous understanding of warfare should be desired. War by Numbers attempts to provide such an understanding.
I have signed a contract with Potomac Books (an imprint of the University of Nebraska Press) to publish War by Numbers: Understanding Conventional Combat. The book is already complete (as I now write books first and find publishers later). Publication date will be spring 2017.
Is the United States Army turning its back on the experience it gained in Iraq and Afghanistan? Retired Brigadier General Robert Scales fears so. After recounting his personal experience with the U.S. Army’s neglect of counterinsurgency lessons following the Vietnam War, Scales sees the pattern repeating itself.
The Army as an institution loves the image of the big war: swift maneuver, tanks, heavy artillery, armed helicopters overhead, mounds of logistics support. The nitty-gritty of working with indigenous personnel to common ends, small unit patrols in civilian-infested cities, quick clashes against faceless enemies that fade back into the populace — not so much. Lessons will fade, and those who earned their PhDs in small wars will be passed over and left by the wayside.
The U.S. government appears to be repeating the pattern insofar as its support for basic research on insurgency and counterinsurgency. During the early years of the Vietnam conflict, the U.S. government invested significant resources to support research and analysis efforts. This led to some very interesting and promising lines of inquiry by organizations such as the Special Operations Research Office, and scholars like Ted Gurr and Ivo and Rosalind Feierabend, among others. However, as Chris Lawrence recently pointed out, this funding was cut by the end of the 1960s, years before the war ended. After the fruits of this initial research were published in the early 1970s, further research on the subject slowed considerably.
The emergence of insurgencies in Iraq and Afghanistan led to another round of research and analysis funding by the U.S. government in the mid-2000s. This resulted in renewed interest in the foundations built during the 1960s, as well as new analytical work of considerable promise. Despite the fact that these conflicts remain unresolved, this resourcing dried up once more by 2009, and government-sponsored basic research has again ground to a crawl. As Chris has explained, this boom-or-bust approach also carries a cost:
The problem lies in that the government (or at least the parts that I dealt with) sometimes has the attention span of a two-year-old. Not only that, it also has the need for instant gratification, very much like a two-year-old. Practically, what that means is that projects that can answer an immediate question get funding (like the Bosnia and Iraq casualty estimates). Larger research efforts that will produce an answer or a product in two to three years can also get funding. On the other hand, projects that produce a preliminary answer in two to three years and then need several more years of funding to refine, check, correct and develop that work, tend to die. This has happened repeatedly. The analytical community is littered with many clever, well thought out reports that look to be good starts. What is missing is a complete body of analysis on a subject. [America’s Modern Wars, 295]
The ambivalent conduct and outcomes of the recent counterinsurgencies generated hotly contested debates that remain unresolved. This is at least partly due to a lack of a detailed and comprehensive understanding of the phenomenon of insurgency and counterinsurgency. This state of affairs appears to be a matter of choice.