
Economics of Warfare 8

Examining the eighth lecture from Professor Michael Spagat’s Economics of Warfare course that he gives at Royal Holloway University. It is posted on his blog Wars, Numbers and Human Losses at: https://mikespagat.wordpress.com/

This lecture focuses on estimating deaths in a conflict, a subject that has produced some unusually high figures through faulty analysis. For example, he starts with a figure of 6.9 million killed in the Congo (slide 2). This leads Dr. Spagat into a discussion of the "excess death rate," which is basically an estimate of how many additional deaths occur because there is a war going on, compared to what would have happened in peacetime (starting on slide 4). From slides 4 to 12 he questions the Congo estimate. He does not offer an alternative figure, but it is clear that the real figure might be considerably lower, in particular because the IRC (International Rescue Committee) used a baseline death rate that was probably too low for the Congo.
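
The arithmetic behind an excess-death estimate is simple, which is exactly why the assumed baseline matters so much. A minimal sketch, with every number below invented purely for illustration (these are not the IRC's actual figures):

```python
# Illustrative excess-death calculation (all numbers are hypothetical).
# Excess deaths = (observed death rate - baseline death rate) * population * years.

def excess_deaths(observed_rate, baseline_rate, population, years):
    """Rates are annual deaths per 1,000 people."""
    return (observed_rate - baseline_rate) * (population / 1000) * years

population = 60_000_000   # hypothetical population at risk
years = 5                 # hypothetical survey period
observed = 20.0           # hypothetical wartime deaths per 1,000 per year
for baseline in (12.0, 15.0, 18.0):   # hypothetical peacetime baselines
    print(baseline, excess_deaths(observed, baseline, population, years))
```

Note how sensitive the total is to the assumed peacetime baseline: moving it from 12 to 18 deaths per 1,000 per year cuts the five-year excess-death total from 2.4 million to 0.6 million, which is essentially the issue with the baseline used for the Congo.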

On slide 13 he starts discussing Iraq, where he has done such an estimate (provided on slide 14). His figure is 160,000 excess deaths for Iraq in 2003-2011, which is a lot lower than some of the other estimates out there (I think I have seen them as high as 600,000). He then discusses the problems with his estimate (slide 15). On slides 16-30 he discusses further aspects of estimating excess deaths, including regional variation and the impact of war on health (child height). This may be getting a little too much into the weeds for most of our readers.

Anyhow, the main takeaway from all this is to be wary of estimates of total losses in wars: sometimes they are way too high.

The link to the lecture is here: http://personal.rhul.ac.uk/uhte/014/Economics%20of%20Warfare/Lecture%208.pdf

 

 

Economics of Warfare 7

Examining the seventh lecture from Professor Michael Spagat’s Economics of Warfare course that he gives at Royal Holloway University. It is posted on his blog Wars, Numbers and Human Losses at: https://mikespagat.wordpress.com/

This lecture remains focused on civilian casualties. It starts by presenting, on slide 4, the "Dirty War Index" (DWI), which is actually something we could have used for our insurgency work.
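
A minimal sketch of the DWI arithmetic as I read it from the slides: a count of "dirty" outcomes (for example, civilians killed) divided by the total count of outcomes, expressed per 100. If slide 4 defines it differently, go with the slide; the counts below are invented:

```python
# Dirty War Index, as I read it: (dirty cases / total cases) * 100.
# All counts below are hypothetical, purely to show the arithmetic.

def dwi(dirty_cases, total_cases):
    return 100.0 * dirty_cases / total_cases

civilians_killed = 250        # hypothetical "dirty" outcomes for one actor
total_killed = 1000           # hypothetical total killed by that actor
print(dwi(civilians_killed, total_killed))   # 25.0 civilians per 100 killed
```

As noted below, the civilian-killed and insurgent-killed columns from our own study could be recombined into this kind of per-100 rate.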

The link to the lecture is here: http://personal.rhul.ac.uk/uhte/014/Economics%20of%20Warfare/Lecture%207.pdf

We did something very similar on pages 88-92, in the section on "Use of Firepower" in America's Modern Wars. On page 89 we have a chart with three columns tracking civilian casualties. They are 1) (civilians killed)/(CI/INS killed); 2) civilians killed/insurgents killed; and 3) total civilians killed/100,000 population. We only have data for nine cases (nine insurgencies). The first two formulations are ratios, but the same data could be used to calculate an ersatz DWI. We then discussed the problem with the Irish Loyalist militias on pages 89-90 (using the exact same data as Dr. Spagat used on slide 6), and then we looked at 35 insurgencies compared to 1) rules of engagement, 2) civilians killed/insurgents killed, and 3) total civilians killed/100,000 population (pages 90-91). Our conclusions were (page 92):

In general, there does seem to be a pattern where insurgencies win more often if the number of civilians killed compared to the number of insurgents killed is greater than 10, but there is no statistical support for such an assumption.

This was a case where we needed to do a lot more work, but never got back to it (read: defense budget cuts and sequestration).
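
For what it is worth, the kind of check that would provide "statistical support" for the ratio-greater-than-10 pattern is a simple contingency-table test. A minimal sketch, with entirely invented counts standing in for the 35 insurgencies (we never ran this particular test):

```python
# Hypothetical 2x2 table: do insurgencies win more often when the ratio of
# civilians killed to insurgents killed exceeds 10? All counts are invented.
from scipy.stats import fisher_exact

#        insurgency wins, insurgency loses
table = [[6, 4],    # ratio > 10  (hypothetical)
         [8, 17]]   # ratio <= 10 (hypothetical)

odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)  # a large p-value means no statistical support for the pattern
```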

Slides 5 and 6 of Dr. Spagat's lecture are worth looking at. You will note that in Colombia, while the guerrillas and government forces were responsible for their share of civilian casualties, it was the paramilitaries who were doing a lot of the bloodletting. Government ties to some of these paramilitaries have been an issue. As Dr. Spagat puts it (slide 7), "Their relationship with government forces is murky and controversial." Slide 6 is from Northern Ireland. Again, the "Loyalist Paramilitaries" are the worst offenders. It is probably good policy to keep the Shiite militias out of Mosul.

On slide 10, Dr. Spagat switches from the rather depressing discussion of civilian casualties (a subject that needs to be discussed and analyzed more than it has been) to a discussion of the "Benefits of Peace." Because of the nature of our customers, we haven't done a lot of work on peace…not that we don't want to. He ends up looking at housing prices in Northern Ireland. Slide 13 has the total killings in Northern Ireland by quarter, although only from 1983 on. The war was far bloodier in the early 1970s, and the violence declined notably after that. The figures on slide 14 catch my attention because at one point in our insurgency studies we also looked at the distribution of casualties by region in Northern Ireland, compared to Vietnam and two other wars. We noted at the time that the unequal geographic distribution of casualties was at a similar ratio in Northern Ireland and Vietnam. We did not go any further with this effort, because we needed a whole lot more cases and we could not see a pattern with what we had examined (and it took a lot of time). This effort was discussed in our report on terrain (Report I-12: http://www.dupuyinstitute.org/tdipub3.htm), but I am pretty sure I left it out of my book.

Anyhow, slide 15 shows housing prices in Northern Ireland. Not surprisingly, peace is good for housing prices. You probably could have guessed that without a statistical analysis. The rest of the slides just go into more depth on the statistics behind this (slides 17-19). Then there is a discussion of "sampling rare events" (slides 20-29). Note the mention of bootstrapping on slide 29: https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
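
For those unfamiliar with the term, bootstrapping re-draws samples (with replacement) from the observed data to put uncertainty bounds on a statistic. A minimal, generic sketch, not tied to the lecture's example (the counts below are invented):

```python
# Generic bootstrap sketch: confidence interval for a mean by resampling.
# The data below are invented quarterly killing counts, for illustration only.
import random

data = [12, 7, 19, 3, 25, 9, 14, 6, 11, 17]   # hypothetical counts
n_resamples = 10_000

means = []
for _ in range(n_resamples):
    resample = [random.choice(data) for _ in data]   # sample with replacement
    means.append(sum(resample) / len(resample))

means.sort()
lower = means[int(0.025 * n_resamples)]
upper = means[int(0.975 * n_resamples)]
print(lower, upper)   # approximate 95% bootstrap interval for the mean
```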

Economics of Warfare 6

Examining the sixth lecture from Professor Michael Spagat’s Economics of Warfare course that he gives at Royal Holloway University. It is posted on his blog Wars, Numbers and Human Losses at: https://mikespagat.wordpress.com/

In this lecture, Dr. Spagat works from three existing databases from the Uppsala Conflict Data Program (run by a group in Sweden). We were aware of these when we were doing our work on insurgencies, but never tapped them. We probably would have at some point, if the work had continued.

Anyhow, Dr. Spagat continues with his analysis of civilian casualties in conflict. We certainly could have done something useful with his Civilian Targeting Index (CTI, defined on slide 3), looking at whether it affected the outcome of an insurgency. Slide 4 is worth noting, as is slide 8.

The link to the lecture is here: http://personal.rhul.ac.uk/uhte/014/Economics%20of%20Warfare/Lecture%206.pdf
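
A minimal sketch of a CTI-style calculation, assuming (from my reading of the slides) that the index is the share of an actor's attributed killings that come from deliberate civilian targeting rather than from battle; check slide 3 for the authoritative definition. The counts below are invented:

```python
# Civilian Targeting Index style ratio (my reading of the definition;
# see slide 3 of the lecture for the authoritative version).
# All counts below are hypothetical.

def cti(targeted_civilian_deaths, battle_deaths):
    total = targeted_civilian_deaths + battle_deaths
    return targeted_civilian_deaths / total if total else 0.0

actors = {
    "Actor A": (0, 400),     # no deliberate civilian targeting -> CTI = 0
    "Actor B": (150, 350),   # some targeting -> CTI > 0
}
for name, (civ, battle) in actors.items():
    print(name, round(cti(civ, battle), 2))
```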

On slide 6 are his four "key take-home" points. They are:

  1. “First, the majority (61%) of all formally organized actors in armed conflict during 2002-2007 refrained from killing civilians in deliberate, direct targeting…”
  2. “Second, actors were more likely to have carried out some degree of civilian targeting (CTI > 0), as opposed to none (CTI = 0), if they participated in armed conflict for three or more years rather than for one year….”
  3. “Third, among actors that targeted civilians (there were 88 of them), those that engaged in great scales of armed conflict concentrated less of their lethal behavior into civilian targeting and more into involvement with battle fatalities…”
  4. “Fourth, an actor’s likelihood and degree of targeting civilians was unaffected by whether it was a state or a non-state group.”

Now, granted, this is a snapshot of only five years, but it is one with more than 88 cases in it, and it is still interesting to note. None of the work we did supports or contradicts any of these results.

Slides 9 to 13 are a discussion of logistic regression and linear regression, which is something I think everyone should understand, but I won't be surprised if our readers choose to skip it. There are some interesting (as always) slides on pages 14, 16, 17 and 21. In fact, slide 21 is a pretty good one to use in an argument with someone who thinks things are only getting worse. It is worth your while to look at it.
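
For readers who want a concrete feel for the technique, a generic sketch on invented data (not the model in the lecture): a logistic regression estimates how the probability of a yes/no outcome, for example whether an actor targets civilians, changes with an explanatory variable, such as years in conflict.

```python
# Generic logistic regression sketch on invented data (not the lecture's model).
# Outcome: 1 if an actor targeted civilians, 0 if not.
# Explanatory variable: years the actor was involved in armed conflict.
import numpy as np
import statsmodels.api as sm

years    = np.array([1, 1, 2, 2, 3, 3, 4, 5, 6, 7, 8, 10])   # hypothetical
targeted = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1])    # hypothetical

X = sm.add_constant(years)            # intercept + slope
model = sm.Logit(targeted, X).fit(disp=0)
print(model.params)                   # positive slope -> targeting more likely with time

new_years = np.array([1, 5, 10])
print(model.predict(sm.add_constant(new_years)))   # predicted probabilities
```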

From slide 22 to the end (slide 34), Dr. Spagat takes on counter-arguments developed from examining World Health Surveys (WHS), which is worth noting. Lots of people like to throw figures around, and those figures are not always very accurate.

Anyhow, these lectures are great to flip through, and if you actually carefully (and painfully) read through them, it is probably a better use of your time than most things you will do this week.

Economics of Warfare 5

Examining the fifth lecture from Professor Michael Spagat’s Economics of Warfare course that he gives at Royal Holloway University. It is posted on his blog Wars, Numbers and Human Losses at: https://mikespagat.wordpress.com/

This lecture is about regressions and logistic regressions. Now, I think everyone should take an econometrics course…but just a warning, this is all pretty dry stuff. So, if you choose to skip it, I don't blame you.

The link to the lecture is here: http://personal.rhul.ac.uk/uhte/014/Economics%20of%20Warfare/Lecture%205.pdf

On the other hand, what he is discussing is using regression models to analyze the nature of civilian casualties, including in the Rwandan genocide. This gets a little hard to discuss. On slide 11, you can learn that in Kibuye Prefecture in 1994 there were 31,117 people killed by machete, 9,779 killed by clubs, and 442 burned alive. Not exactly relaxing reading.

Slide 20 tracks Israeli and Palestinian deaths from 2000 to 2005, where the death tolls are far lower.

Anyhow, Dr. Spagat's work often focuses on civilian casualties. These are often a significant part of warfare, even if we don't particularly like to address it. For example, the United States lost over 4,000 troops in Iraq in 2003-2011; Iraq lost over 150,000 people during that time. The same pattern holds for Vietnam, where the United States lost over 58,000 people in what was the third bloodiest war in our history. Vietnam lost one to two million people!
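
To put those figures side by side, a quick back-of-the-envelope calculation using the lower-bound numbers quoted above:

```python
# Rough ratios of local deaths to U.S. military deaths, using the
# lower-bound figures quoted in the paragraph above.
iraq_ratio   = 150_000 / 4_000      # 37.5 Iraqi deaths per U.S. fatality
vietnam_low  = 1_000_000 / 58_000   # about 17 Vietnamese deaths per U.S. fatality
vietnam_high = 2_000_000 / 58_000   # about 34 at the higher estimate
print(round(iraq_ratio, 1), round(vietnam_low, 1), round(vietnam_high, 1))
```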

I did attempt to address civilian casualties in our insurgency work. It is also addressed in my book America's Modern Wars, in Chapter 9, "Rules of Engagement and Measurements of Brutality," and Chapter 15, "The Burden of War." I am not sure that this attention to civilian casualties was fully appreciated by our DOD customers, but it was there because, sadly, they are always a significant part of warfare. Tragically, sometimes so is genocide, as recently demonstrated by ISIL. Dr. Spagat, in a course on the "Economics of Warfare," is quite correct to focus on civilian casualties.

P.S. I have been informed by Dr. Spagat that he still has another ten lectures to post up on his blog.

 

Economics of Warfare 3

Examining the third lecture from Professor Michael Spagat’s Economics of Warfare course that he gives at Royal Holloway University. It is posted on his blog Wars, Numbers and Human Losses at: https://mikespagat.wordpress.com/

The link to the lecture is here: http://personal.rhul.ac.uk/uhte/014/Economics%20of%20Warfare/Lecture%203.pdf

This one starts with the war in Kosovo (1998-1999), which was actually a successful intervention, although very poorly done. It picks up on a constant theme of Dr. Spagat's, which is how to get correct counts of the people actually killed in these conflicts, including civilians. For those of us who actually try to do things like quantitative analysis of insurgencies (for example, America's Modern Wars)…this is very useful. A lot of other people don't particularly care, sometimes because a particularly high or low number serves their political agenda (or cosmology).

Starting on slide 11, Dr. Spagat discusses Iraq casualty estimates. Iraq and Colombia were the two areas we discussed with him when we were working on our Iraq and insurgency material (2004-2010). He was one of the few people out there doing work similar to ours. He points out that there were two estimates of deaths in Iraq, one of 150,000 and one of 600,000. Needless to say, the lower one was closer to correct, but the higher number got heavily broadcast. This whole section is worth reviewing and remembering for any future conflicts. I like the picture on slide 14.

Sorry about this abstract look at some very sad and gruesome statistics.

P.S. Merry Christmas

Do Senior Decisionmakers Understand the Models and Analyses That Guide Their Choices?

Group of English gentlemen and soldiers of the 25th London Cyclist Regiment playing the newest form of wargame strategy simulation called "Bellum" at the regimental HQ. (Google LIFE Magazine archive.)

Over at Tom Ricks’ Best Defense blog, Brigadier General John Scales (U.S. Army, ret.) relates a personal story about the use and misuse of combat modeling. Scales’ tale took place over 20 years ago and he refers to it as “cautionary.”

I am mindful of a time more than twenty years ago when I was very much involved in the analyses leading up to some significant force structure decisions.

A key tool in these analyses was a complex computer model that handled detailed force-on-force scenarios with tens of thousands of troops on either side. The scenarios generally had U.S. Army forces defending against a much larger modern army. As I analyzed results from various runs that employed different force structures and weapons, I noticed some peculiar results. It seemed that certain sensors dominated the battlefield, while others were useless or nearly so. Among those "useless" sensors were the [Long Range Surveillance (LRS)] teams placed well behind enemy lines. Curious as to why that might be so, I dug deeper and deeper into the model. After a fair amount of work, the answer became clear. The LRS teams were coded, understandably, as "infantry". According to model logic, direct fire combat arms units were assumed to open fire on an approaching enemy when within range and visibility. So, in essence, as I dug deeply into the logic it became obvious that the model's LRS teams were compelled to conduct immediate suicidal attacks. No wonder they failed to be effective!

Conversely, the “Firefinder” radars were very effective in targeting the enemy’s artillery. Even better, they were wizards of survivability, almost never being knocked out. Somewhat skeptical by this point, I dug some more. Lo and behold, the “vulnerable area” for Firefinders was given in the input database as “0”. They could not be killed!

Armed with all this information, I confronted the senior system analysts. My LRS concerns were dismissed. This was a U.S. Army Training and Doctrine Command-approved model run by the Field Artillery School, so infantry stuff was important to them only in terms of loss exchange ratios and the like. The Infantry School could look out for its own. Bringing up the invulnerability of the Firefinder elicited a different response, though. No one wanted to directly address this and the analysts found fascinating objects to look at on the other side of the room. Finally, the senior guy looked at me and said, “If we let the Firefinders be killed, the model results are uninteresting.” Translation: None of their force structure, weapons mix, or munition choices had much effect on the overall model results unless the divisional Firefinders survived. We always lost in a big way. [Emphasis added]

Scales relates his story in the context of the recent decision by the U.S. Army to deactivate all nine Army and Army National Guard LRS companies. These companies, composed of 15 six-man teams led by staff sergeants, were used to collect tactical intelligence from forward locations. This mission will henceforth be conducted by technological platforms (i.e. drones). Scales makes it clear that he has no personal stake in the decision and he does not indicate what role combat modeling and analyses based on it may have played in the Army’s decision.

The plural of anecdote is not data, but anyone familiar with Defense Department combat modeling will likely have similar stories of their own to relate. All combat models are based on theories or concepts of combat. Very few of these models make clear what these are, a scientific and technological phenomenon known as “black boxing.” A number of them still use Lanchester equations to adjudicate combat attrition results despite the fact that no one has been able to demonstrate that these equations can replicate historical combat experience. The lack of empirical knowledge backing these combat theories and concepts was identified as the “base of sand” problem and was originally pointed out by Trevor Dupuy, among others, a long time ago. The Military Conflict Institute (TMCI) was created in 1979 to address this issue, but it persists to this day.
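
For readers who have not run into them, the Lanchester "square law" equations model attrition by making each side's loss rate proportional to the opposing force's size. A minimal sketch of that logic (generic textbook form with invented coefficients, not any particular model's implementation):

```python
# Lanchester square-law attrition: each side's loss rate is proportional
# to the size of the opposing force. Coefficients and strengths are
# illustrative, not drawn from any real model or engagement.
dt = 0.01                   # time step
alpha, beta = 0.02, 0.015   # hypothetical kill-rate coefficients
A, B = 1000.0, 1200.0       # hypothetical starting strengths

t = 0.0
while A > 0 and B > 0 and t < 200:
    dA = -beta * B * dt      # A's losses driven by B's strength
    dB = -alpha * A * dt     # B's losses driven by A's strength
    A, B = max(A + dA, 0.0), max(B + dB, 0.0)
    t += dt

print(round(t, 2), round(A), round(B))   # time and surviving strengths
```

The "base of sand" critique is not that such equations are hard to compute; it is that their coefficients and underlying structure have never been shown to replicate historical combat experience.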

Last year, Deputy Secretary of Defense Bob Work called on the Defense Department to revitalize its wargaming capabilities to provide analytical support for development of the Third Offset Strategy. Despite its acknowledged pitfalls, wargaming can undoubtedly provide crucial insights into the validity of concepts behind this new strategy. Whether or not Work is also aware of the base of sand problem and its potential impact on the new wargaming endeavor is not known, but combat modeling continues to be widely used to support crucial national security decisionmaking.

Quoting David Irving


There is a new movie being released this month called Denial. It is about the libel lawsuit that the controversial historian David Irving pursued against the U.S. academic historian Deborah Lipstadt. Irving was a British historian who specialized in the military history of the Third Reich. His writings downplayed the Holocaust and claimed that there was no evidence that Hitler knew about it. Lipstadt took him to task in her 1993 book, Denying the Holocaust. Irving then took her to court in the UK, where libel law places the burden of proof on the defendant. She effectively had the legal requirement to prove that the Holocaust actually occurred and that Hitler ordered it.

Spoiler alert: He lost.

I did reference David Irving’s work twice in my book Kursk: The Battle of Prokhorovka. I was well aware of this controversy. On 4 May 1943 there was a meeting called by Adolf Hitler and attended by Colonel General Heinz Guderian, Field Marshal Erich von Manstein, Colonel General Hans Jeschonnek and many others. This rather famous meeting, part of the German planning for the Battle of Kursk, has been discussed in most books on Kursk.

The problem is that there are only three accounts of the meeting. Guderian's book, Panzer Leader, provides a detailed three-page narrative of it. Manstein's book, Lost Victories, mentions the meeting only briefly and provides no details. These are the two sources most people have used, and many historians have simply accepted the Guderian account. One Kursk book opens with a narrative of this meeting based upon Guderian's account.

But there is a third source: an entry in Field Marshal Wolfram Baron von Richthofen's diary, based upon a conversation he had with General Jeschonnek, the Chief of Staff of the Luftwaffe. It stated:

“[On 27 April] General Model declared he was not strong enough and would probably get bogged down or take too long. The Fuehrer took the view that the attack must be punched through without fail in shortest time possible. [Early in May] General Guderian offered to furnish enough tank units within six weeks to guarantee this. The Fuehrer thus decided on a postponement of six weeks. To get the blessing of all sides on this decision, he called a conference [on 4 May] with Field Marshals von Kluge and von Manstein. At first they agreed on a postponement; but when they heard that the Fuehrer had already made his mind up to that effect, they spoke out for an immediate opening of the attack—apparently in order to avoid the odium of being blamed for the postponement themselves.”

This account directly contradicts the Guderian account. The problem is that this reference to the diary entry and its translation from German were done by David Irving in his controversial book, Hitler's War, originally published in 1977. Wolfram von Richthofen was a cousin of Manfred von Richthofen, the highest-scoring ace of World War I with 80 claimed kills. Wolfram von Richthofen served in the Spanish Civil War with the Condor Legion, the German air unit there, and planned the bombing of Guernica. He led a number of German air formations throughout the war, and in May 1943 was the commander of the VIII Air Corps, which was to participate in the Battle of Kursk. Command of this air corps was to be taken over by General Jeschonnek for the upcoming battle (this never happened). Richthofen's diary has been quoted extensively by David Irving. To date, I do not know of anyone else who has translated it.

So, before I used the quote, I wrote to David Irving in 2002 and specifically asked him about the diary and where it was located. I noted this in the footnote to this passage:

"David Irving, page 514 (or pages 583–584 in his 2001 version of the book that is available on the web). According to emails received from David Irving in 2002 and 2008, this passage is a directly translated quote from the diary, and the diary was stored at the Militärgeschichtliches Forschungsamt at Freiburg im Breisgau, Germany. A xerox of the page in question is stored in the Irving Collection at the Institut für Zeitgeschichte in Munich, Germany. We have not checked these files and cannot confirm the translation."

So, I could have accessed the files, seen the diary, and checked the translation. I considered doing so. Of course, it would have required me to travel to Germany, with a translator, to examine the diary. This would have taken at least a week of my time and cost a few thousand dollars. For fairly obvious reasons, I chose not to do this. Instead, I stated in the footnote that we had not confirmed the translation.

As David Irving headed to trial, I continued to wonder about this passage. There was no reason to assume that it was faked or deliberately grossly mistranslated. On the other hand, it was something I was not 100% sure of. But there was also no reason to assume that Guderian's or Manstein's account was 100% correct either, and people had freely used them without much question. So, do I build my narrative on those two well-known first-person accounts and ignore the contradictory second-hand account from Richthofen's diary just because it came from David Irving? I decided that all accounts needed to be presented, and I left it to the reader to decide which they believed.

As I noted in my book (on page 69):

As no stenogram exists of this conference, one is left only with the memoirs of two generals and the diary of a person who did not participate. There also are what appear to be the Inspector General of Armored Troops’ notes for the meeting for 3 May. These notes clearly show the beneficial effects on tank strength of a six-week delay in the offensive. Guderian’s memoirs are quite explicit as to what happened at the conference but appear to be confused as to attendees and dates. Manstein mentions the conference and the issues in a very general sense. The Richthofen entry contradicts the other two memoirs, claiming that Guderian was the source of the six-week delay and that Manstein and Kluge supported the delay. It is impossible to resolve these differences.

The problem is that both Guderian’s and Manstein’s memoirs were written after the war. It is hard to say what passages may have been self-serving or written with an eye towards future historians. The story by Jeschonnek was recorded at the time by Richthofen. Jeschonnek committed suicide in August 1943 and Richthofen died from a brain tumor in July 1945. Which account is more “real?”

Saigon, 1965

The American RAND staff and Vietnamese interviewers on the front porch of the villa on Rue Pasteur. Courtesy of Hanh Easterbrook. [Revisionist History]

Although this blog focuses on quantitative historical analysis, it is probably a good idea to consider from time to time that the analysis is being done by human beings. As objective as analysts try to be about the subjects they study, they cannot avoid interpreting what they see through the lenses of their own personal biases, experiences, and perspectives. This is not a bad thing, as each analyst can bring something new to the process and find things that other perhaps cannot.

The U.S. experience in Vietnam offers a number of examples of this. Recently, journalist and writer Malcolm Gladwell presented a podcast exploring an effort by the RAND Corporation initiated in the early 1960s to interview and assess the morale of captured Viet Cong fighters and defectors. His story centers on two RAND analysts, Leon Gouré and Konrad Kellen, and one of their Vietnamese interpreters, Mai Elliott. The podcast traces the origins and history of the project, how Gouré, Kellen, and Elliott brought very different perspectives to their work, and how they developed differing interpretations of the evidence they collected. Despite the relevance of the subject and the influence the research had on decision-making at high levels, the study ended inconclusively and ambivalently for all involved. (Elliott would go on to write an account of RAND’s activities in Southeast Asia and several other books.)

Gladwell presents an interesting human story as well as some insight into the human element of social science analysis. It is a unique take on one aspect of the Vietnam War and definitely worth the time to listen to. The podcast is part of his Revisionist History series.

Book Review

I have not posted book reviews to this site, and do not really plan to in the future. But there was a book review of America's Modern Wars in Military Review by Brig. Gen. John C. Hanley, with whom I am not familiar. The review ended with a paragraph that I thought was meaningful. He said:

Lawrence’s book shows that reliable outcome estimates are determined through quantitative reasoning. Being able to anticipate the outcomes of any military operation, through reliable means, can greatly assist in strategic and operational level leaders’ decision-making processes. These results are what the book brings to light for military leaders and their staffs. Staff members who develop course-of-action recommendations can use the techniques described by Lawrence to provide quality analysis. Commanders will have the confidence from their staff estimates to choose the best courses of action for future military operations. Logically estimating the outcomes of future military operations, as the author writes, is what U.S. citizens should expect and demand from their leaders who take this country to war.

Anyhow, the link to his review is:

Military Review

His review is back on page 131.

 

P.S. Then there was the book review that started:  “An excel spreadsheet masquerading as a book”