
Long Protests

A memorial in the Polish city of Wroclaw to the Tiananmen Square protests

We are looking at a rather extended series of protests in Venezuela now. Sometimes the successful street protests or people power protests that overthrow governments are fairly brief and sudden. For example, the street protests that ended the attempted coup, saved Boris Yeltsin as president of Russia, and eventually resulted in the dissolution of the 74-year-old Soviet Union lasted only 3 days and resulted in only 3 deaths. Many of the other people power protests in Eastern Europe in 1990/1991 were also brief and not very bloody.

But often these things last longer, with a lot more bloodshed. For example, the Romanian protests of 1989 lasted 12 days and involved considerable violence, with snipers firing on the protesting crowds as foreign (Libyan) soldiers reportedly tried to protect the regime. When it was done, 689 to 1,290 people were dead, but the government was overthrown (and its leaders executed). The more recent “successful” street protests that overthrew the 29-year Egyptian government of Mubarak in 2011 lasted 17 days. Some 846 people died in the violence during the protests. One of the more extended efforts, conducted in the freezing winter of Ukraine, and also under sniper fire, was the Euromaidan protests of 2013/2014, which lasted a little more than three months. When it was done, the government of Yanukovych was overthrown (for a second time), but at a cost of 104-780 lives and the subsequent loss of territory to political unrest and seizure by Russia. On the other hand, there are the Tiananmen Square protests of 1989, which went on for about a month and a half before the government sent in the tanks. This failed protest cost at least 1,045 lives, and some claim thousands.

Now, we have never done a survey of people power protests and attempts to remove governments by protest. This would be useful. I do not know if longer protests have a higher or lower success rate than shorter protests. Right now we are looking at the most recent round of protests in Venezuela, which started on 10 January 2019 and have now gone on for more than four months. One could make the claim that the protests started in 2017, or even 2014. They have also been bloody, with at least 107 people killed in 2019.

The question is, as these protests extend, does this mean that Maduro has a greater chance of hanging on to power? This may be the lesson of Syria, which started as a series of protests in March 2011 that then morphed into a bloody civil war (over 200,000 dead) that is still going on today.

I would be sorely tempted to assemble a data base of people power protests since WWII (which is not a small effort) and then see if I could find some patterns there (like we did in our insurgency studies), including success rate, duration, size, and the reasons for successful versus unsuccessful protests.
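If we ever did assemble such a data base, even a crude first cut at the pattern-hunting could be scripted. The sketch below is purely illustrative: the field names and the handful of made-up cases are my assumptions, not records from any TDI collection. It only shows the shape of the comparison described above (success rate of shorter versus longer protests).

```python
# Illustrative sketch only: the records below are invented placeholders,
# not actual cases from any TDI data base.
from dataclasses import dataclass

@dataclass
class Protest:
    name: str           # hypothetical identifier
    duration_days: int  # length of the protest movement
    peak_size: int      # rough peak participation
    succeeded: bool     # did it remove the government?

cases = [
    Protest("Case A", 3, 150_000, True),
    Protest("Case B", 17, 250_000, True),
    Protest("Case C", 90, 100_000, True),
    Protest("Case D", 45, 50_000, False),
    Protest("Case E", 400, 30_000, False),
]

def success_rate(subset):
    # Fraction of protests in the subset that removed the government
    return sum(p.succeeded for p in subset) / len(subset) if subset else float("nan")

short = [p for p in cases if p.duration_days <= 30]
long_ = [p for p in cases if p.duration_days > 30]

print(f"Short protests (<=30 days): {success_rate(short):.0%} successful")
print(f"Long protests   (>30 days): {success_rate(long_):.0%} successful")
```

Run against a real collection of post-WWII cases, the same grouping could be extended to size, cause, and government response.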

Million Dollar Books

Most of our work at The Dupuy Institute involved contracts from the U.S. Government. These were often six-figure efforts. For example, the Kursk Data Base was funded for three years (1993-1996) and involved a dozen people. The Ardennes Campaign Simulation Data Base (ACSDB) was actually a larger effort (1987-1990). Our various combat databases like DLEDB, BODB and BaDB were created by us independent of any contractual effort. They were originally based upon the LWDB (which became CHASE), the work we did on Kursk and Ardennes, the engagements we added because of our Urban Warfare studies, our Enemy Prisoner of War Capture Rates studies, our Situational Awareness study, our internal validation efforts, several modeling-related contracts from Boeing, etc. All of these were expanded and modified bit by bit as a result of a series of contracts from different sources. So, certainly over time, hundreds of thousands of dollars have been spent on each of these efforts, involving the work of a half-dozen or more people.

So, when I sit down to write a book like Kursk: The Battle of Prokhorovka (based on the Kursk Data Base), America’s Modern Wars (based on our insurgency studies) or War by Numbers (which used our combat databases and significant parts of our various studies), these are books developed from an extensive collection of existing work. Certainly hundreds of thousands of dollars and the work of at least 6 to 12 people went into the studies and analysis that preceded these books. In some cases, like our insurgency studies, it was clearly more than a million dollars.

This is a unique situation, to be able to write a book based upon a million dollars of research and analysis. It is something that I could never have done as a single scholar or a professor or a teacher somewhere. It is not work I could have done working for the U.S. government. These are not books that I could have written based only upon my own work and research.

In many respects, this is what needs to be the norm in the industry. Research and analysis efforts need to be properly funded and conducted by teams of people. There is a limit to what a single scholar, working in isolation, can do. Being with The Dupuy Institute allowed me to conduct research and analysis above and beyond anything I could have done on my own.

Other TDI Data Bases

What we have listed in the previous articles is what we consider the six best databases to use for validation. The Ardennes Campaign Simulation Data Base (ACSDB) was used for a validation effort by CAA (Center for Army Analysis). The Kursk Data Base (KDB) was never used for a validation effort but was used, along with Ardennes, to test Lanchester equations (they failed).

The Use of the Two Campaign Data Bases

The Battle of Britain Data Base to date has not been used for anything that we are aware of. As the program we were supporting was classified, they may have done some work with it that we are not aware of, but I do not think that is the case.

The Battle of Britain Data Base

Our three battles databases, the division-level data base, the battalion-level data base and the company-level data base, have all been used for validating our own TNDM (Tactical Numerical Deterministic Model). These efforts have been written up in our newsletters (here: http://www.dupuyinstitute.org/tdipub4.htm) and briefly discussed in Chapter 19 of War by Numbers. These are very good databases to use for validating a combat model or testing a casualty estimation methodology. We have also used them for a number of other studies (Capture Rate, Urban Warfare, Lighter-Weight Armor, Situational Awareness, Casualty Estimation Methodologies, etc.). They are extremely useful tools for analyzing the nature of conflict and how it affects various aspects of combat. They are, of course, unique to The Dupuy Institute and, for obvious business reasons, we do keep them close hold.

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

We do have a number of other databases that have not been used as much. There is a list of 793 conflicts from 1898-1998 that we have yet to use for anything (the WACCO – Warfare, Armed Conflict and Contingency Operations database). There is the Campaign Data Base (CaDB) of 196 cases from 1904 to 1991, which was used for the Lighter Weight Armor study. There are three databases that are mostly made up of cases from the original Land Warfare Data Base (LWDB) that did not fit into our division-level, battalion-level, and company-level data bases. They are the Large Action Data Base (LADB) of 55 cases from 1912-1973, the Small Action Data Base (SADB) of 5 cases, and the Battles Data Base (BaDB) of 243 cases from 1600-1900. We have not used these three databases for any studies, although the BaDB is used for analysis in War by Numbers.

Finally, there are three databases on insurgencies, interventions and peacekeeping operations that we have developed. The first was the Modern Contingency Operations Data Base (MCODB), which we developed for the Bosnia estimate that we did for the Joint Staff in 1995. This is discussed in Appendix II of America’s Modern Wars. It then morphed into the Small Scale Contingency Operations (SSCO) database, which we used for the Lighter Weight Armor study. We then did the Iraq Casualty Estimate in 2004, and a significant part of the SSCO database was used to create the Modern Insurgency Spread Sheets (MISS). This is all discussed in some depth in my book America’s Modern Wars.

None of these, except the Campaign Data Base and the Battles Data Base (1600-1900), are good for use in a model validation effort. The use of the Campaign Data Base should be supplementary to validation by another database, much like we used it in the Lighter Weight Armor study.

Now, there have been three other major historical validation efforts done that we were not involved in. I will discuss their supporting data in my next post on this subject.

Some More Statistics on Afghanistan (March 2019)

A tank park of Soviet tanks near Kunduz, 4 May 2008. These were leftover ordnance from the previous war (photo by William A. Lawrence II).

Just making a small update to my last posts on Afghanistan, using the Secretary-General’s quarterly reports on Afghanistan. Those reports are here:

https://unama.unmissions.org/secretary-general-reports

The latest report was posted on 6 March, even though it is dated 28 February. It is always worth reading.

  1. “In 2018, the United Nations recorded 22,478 security-related incidents, a 5 per cent reduction as compared with the historically high 23,744 security-related incidents recorded in 2017.”
  2. “The Mission documented 10,993 civilian casualties (3,804 people killed and 7,189 injured) between 1 January and 31 December 2018, the highest number of civilian deaths recorded in a single year since UNAMA began systematic documentation in 2009, and an overall increase of 5 per cent compared with 2017.”
  3. “UNAMA attributed 63 per cent of all civilian casualties to anti-government elements (37 per cent to the Taliban, 20 per cent to ISIL-KP and 6 per cent to unidentified anti-government elements, including self-proclaimed ISIL-KP), 24 per cent to pro-government forces (14 per cent to Afghan national defense and security forces, 6 per cent to international military forces, 2 per cent to pro-government militias, and 2 per cent to undetermined or multiple pro-government forces), 10 per cent to unattributed crossfire during ground engagements between anti-government elements and pro-government forces and 3 per cent to other incidents, including explosive remnants of war and cross-border shelling.”
  4. “Between 1 November and 10 January, 49,001 people were newly displaced by the conflict, bringing the total number of displaced in 2018 to 364,883 people.”

Year      Security Incidents      Incidents per Month      Civilian Deaths
2008             8,893                    741
2009            11,524                    960
2010            19,403                  1,617
2011            22,903                  1,909
2012            18,441?                 1,537?                        *
2013            20,093                  1,674                    2,959
2014            22,051                  1,838                    3,699
2015            22,634                  1,886                    3,545
2016            23,712                  1,976                    3,498
2017            23,744                  1,979                    3,438
2018            22,478                  1,873                    3,804
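For reference, the “per month” column above is simply the annual total divided by twelve and rounded. A minimal sketch of that arithmetic, using the annual figures from the table (the 2012 total is the one flagged as uncertain in the source):

```python
# Recompute the "Incidents per Month" column from the annual totals in the
# table above (values as given in the UN reporting cited in this post).
annual_incidents = {
    2008: 8893, 2009: 11524, 2010: 19403, 2011: 22903,
    2012: 18441,  # figure flagged as uncertain in the source
    2013: 20093, 2014: 22051, 2015: 22634, 2016: 23712,
    2017: 23744, 2018: 22478,
}

for year, total in annual_incidents.items():
    per_month = round(total / 12)
    print(f"{year}: {total:>6,} incidents, about {per_month:,} per month")
```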

 

As I noted in my last post: “This war does appear to be flat-lined, with no end in sight.” I choose not to comment at the moment on the ongoing peace negotiations.

 

Some Statistics on Afghanistan (Jan 2019)

 

Has The Army Given Up On Counterinsurgency Research, Again?


[In light of the U.S. Army’s recent publication of a history of its involvement in Iraq from 2003 to 2011, it may be relevant to re-post this piece from 29 June 2016.]

As Chris Lawrence mentioned yesterday, retired Brigadier General John Hanley’s review of America’s Modern Wars in the current edition of Military Review concluded by pointing out the importance of a solid empirical basis for staff planning in support of reliable military decision-making. This notion seems so obvious as to be a truism, but in reality, the U.S. Army has demonstrated no serious interest in remedying the weaknesses or gaps in the base of knowledge underpinning its basic concepts and doctrine.

In 2012, Major James A. Zanella published a monograph for the School of Advanced Military Studies of the U.S. Army Command and General Staff College (graduates of which are known informally as “Jedi Knights”), which examined problems the Army has had with estimating force requirements, particularly in recent stability and counterinsurgency efforts.

Historically, the United States military has had difficulty articulating and justifying force requirements to civilian decision makers. Since at least 1975, governmental officials and civilian analysts have consistently criticized the military for inadequate planning and execution. Most recently, the wars in Afghanistan and Iraq reinvigorated the debate over the proper identification of force requirements…Because Army planners have failed numerous times to provide force estimates acceptable to the President, the question arises, why are the planning methods inadequate and why have they not been improved?[1]

Zanella surveyed the various available Army planning tools and methodologies for determining force requirements, but found them all either inappropriate or only marginally applicable, or unsupported by any real-world data. He concluded

Considering the limitations of Army force planning methods, it is fair to conclude that Army force estimates have failed to persuade civilian decision-makers because the advice is not supported by a consistent valid method for estimating the force requirements… What is clear is that the current methods have utility when dealing with military situations that mirror the conditions represented by each model. In the contemporary military operating environment, the doctrinal models no longer fit.[2]

Zanella did identify the existence of recent, relevant empirical studies on manpower and counterinsurgency. He noted that “the existing doctrine on force requirements does not benefit from recent research” but suggested optimistically that it could provide “the Army with new tools to reinvigorate the discussion of troops-to-task calculations.”[3] Even before Zanella published his monograph, however, the Defense Department began removing any detailed reference or discussion about force requirements in counterinsurgency from Army and Joint doctrinal publications.

As Zanella discussed, there is a body of recent empirical research on manpower and counterinsurgency that contains a variety of valid and useful insights, but as I recently discussed, it does not yet offer definitive conclusions. Much more research and analysis is needed before those conclusions can be counted on as a valid and justifiably reliable basis for life-and-death decision-making. Yet the last of these government-sponsored studies was completed in 2010. Neither the Army nor any other organization in the U.S. government has funded any follow-on work on this subject, and none appears forthcoming. This boom-or-bust pattern is nothing new, but the failure to do anything about it is becoming less and less understandable.

NOTES

[1] Major James A. Zanella, “Combat Power Analysis is Combat Power Density” (Ft. Leavenworth, KS: School of Advanced Military Studies, U.S. Army Command and General Staff College, 2012), pp. 1-2.

[2] Ibid., p. 50.

[3] Ibid., p. 47.

Afghan Security Forces Deaths Top 45,000 Since 2014

The President of Afghanistan, Ashraf Ghani, speaking with CNN’s Fareed Zakaria at the World Economic Forum in Davos, Switzerland, 25 January 2019. [Office of the President, Islamic Republic of Afghanistan]

Last Friday, at the World Economic Forum in Davos, Switzerland, Afghan President Ashraf Ghani admitted that his country’s security forces had suffered over 45,000 fatalities since he took office in September 2014. This far exceeds the total of 28,000 killed since 2015 that Ghani had previously announced in November 2018. Ghani’s cryptic comment in Davos did not indicate how the newly revealed total relates to previously released figures: whether it reflects new accounting, a sharp increase in recent casualties, or simply more forthrightness.

This revised figure casts significant doubt on the validity of analysis based on the previous reporting. Correcting it will be difficult. At the request of the Afghan government in May 2017, the U.S. military has treated security forces attrition and loss data as classified and has withheld it from public release.

If Ghani’s figure is, in fact, accurate, then it reinforces the observation that the course of the conflict is tilting increasingly against the Afghan government.

 

An Administrative Weakness

Another post in response to the comments on this blog post:

The Afghan Insurgents

The comment was “…the insurgents are one side of the coin and the other is the credibility of the government we are trying to create in Afghanistan…If the central government is seen as corrupt and self serving then this also inspires the insurgents and may in fact be the decisive factor….”

This immediately brought to mind David Galula’s construct, which was based upon four major points (see pages 210-211 of America’s Modern Wars):

  1. Insurgents need a cause
  2. A police and administrative weakness
  3. A non-hostile geographic environment
  4. Outside support in the middle to late stages.

He specifically states that: “the first two are musts. The last is a help that may become a necessity.”

Now, the problem is that we never took the time to measure an “administrative weakness” or even define what it was. Nor did David Galula. Furthermore, there is also probably an “administrative weakness” or two on the guerilla side. If the culture of Iraq/Afghanistan/Vietnam makes it difficult to create government structures and armed forces that are highly motivated, unified and not corrupted, well, I suspect some of those same problems exist among the guerillas drawn from that same culture. Therefore, measuring this requires some way of defining what these “administrative weaknesses” are, quantifying them, and then determining how they affected both (or more) sides. Needless to say, this was not going to be done in the initial phase of our analysis. We were never funded to conduct follow-up analysis.

This is the problem with David Galula’s construct: there is no easy way to measure it or analyze it. Galula offers no definition of what an “administrative weakness” is. If he does not define it, then how do I define it for his “theory”?

One does note that Galula in his description of the Viet Cong in 1963 states that:

The insurgent has really no cause at all: he is exploiting the counterinsurgent’s weaknesses and mistakes….The insurgent’s program is simply: “Throw the rascals out.” If the “rascals” (whoever is in power in Saigon) amend their ways, the insurgent would lose his cause.

As I note on page 48 of my book:

This was a war that eventually resulted in over 2 million deaths and an insurgent force in excess of 300,000. As it is, one could infer from Galula’s statement that he felt that the insurgency could be easily defeated since it was based upon “no real cause.” We believe that this view has been proven incorrect by historical events.

Clearly identifying insurgent cause and administrative weakness was also a challenge for David Galula.

Bernard Fall Quote

We have gotten several interesting comments on this blog post:

The Afghan Insurgents

One comment stated in part that “….I am thinking the road building, school building, and all that has zero impact on winning the people…..”

This reminds me of a Bernard Fall quote related to the Vietnam War. I used it as the introduction to Chapter 14 (page 147) of my book America’s Modern Wars:

Civic action is not the construction of privies or the distribution of anti-malaria sprays. One can’t fight an ideology; one can’t fight a militant doctrine with better privies. Yet this is done constantly. One side says, “Land reform,” and the other side says, “Better culverts.” One side says, “We are going to kill all of those nasty village chiefs and landlords.” The other side says, “Yes, but look, we want to give you prize pigs to improve your strain.” These arguments just do not match.  Simple but adequate appeals will have to be found sooner or later.

 Bernard Fall, 1967

 

U.S. Army Releases New Iraq War History

On Thursday, the U.S. Army released a long-awaited history of its operational combat experience in Iraq from 2003 to 2011. The study, titled The U.S. Army in the Iraq War – Volume 1: Invasion – Insurgency – Civil War, 2003-2006 and The U.S. Army in the Iraq War – Volume 2: Surge and Withdrawal, 2007-2011, was published under the auspices of the U.S. Army War College’s Strategic Studies Institute.

This reflects its unconventional origins. Under normal circumstances, such work would be undertaken by either the U.S. Army Combat Studies Institute (CSI), which is charged with writing quick-turnaround “instant histories,” or the U.S. Army Center of Military History (CMH), which writes more deeply researched “official history,” years or decades after the fact.[1] Instead, these volumes were directly commissioned by then-Chief of Staff of the Army, General Raymond Odierno, who created an Iraq Study Group in 2013 to research and write them. According to Odierno, his intent was “to capture key lessons, insights, and innovations from our more than 8 years of conflict in that country. [I]t was time to conduct an initial examination of the Army’s experiences in the post-9/11 wars, to determine their implications for our future operations, strategy, doctrine, force structure, and institutions.”

CSI had already started writing contemporary histories of the conflict, publishing On Point: The United States Army in Operation IRAQI FREEDOM (2004) and On Point II: Transition to the New Campaign (2008), which covered the period from 2003 to January 2005. A projected third volume was advertised, but never published.

Although the Iraq Study Group completed its work in June 2016 and the first volume of the history was scheduled for publication that October, its release was delayed due to concerns within the Army historical community regarding its perspective and controversial conclusions. After external reviewers deemed the study fair and recommended its publication, claims were lodged, once its existence was made public last autumn, that the Army was suppressing it to avoid embarrassment. Making clear that the study was not an official history publication, current Army Chief of Staff General Mark Milley added his own foreword to Odierno’s, and publicly released the two volumes yesterday.

NOTES

[1] For a discussion of the roles and mission of CSI and CMH with regard to history, see W. Shane Story, “Transformation or Troop Strength? Early Accounts of the Invasion of Iraq,” Army History, Winter 2006; Richard W. Stewart, “‘Instant’ History and History: A Hierarchy of Needs,” Army History, Winter 2006; Jeffrey J. Clarke, “The Care and Feeding of Contemporary History,” Army History, Winter 2006; and Gregory Fontenot, “The U.S. Army and Contemporary Military History,” Army History, Spring 2008.

 

The Afghan Insurgents

Suicide bomber in Baghlan Jadid, April 2009. Photo by William A. Lawrence II

The charts looking at force ratios created by our regression analysis of 83 cases were very much based on insurgent cause, a subject that a lot of counterinsurgency analysts gloss over. The question is whether the insurgency is based upon a central political idea (like nationalism), an overarching idea (an ideology like communism), or a limited, less developed political thought (a regional or factional insurgency). This very much changes the difficulty of suppressing the insurgency. It also changes the odds of winning. The force levels, and sometimes the duration, of insurgencies were significantly different across these categories. In my book America’s Modern Wars I end up spending three chapters on this subject: Chapter 4: Force Ratios Really Do Matter, Chapter 5: Cause Really is Important and Chapter 6: The Two Together Seem Really Important.

Now, this came up when we were doing our estimate in 2004 of U.S. casualties and the duration of an insurgency in Iraq (which is in Chapter 1 of my book). In this case we have a country that was maybe 60% Shiite Muslim and an insurgency that was centered on the roughly 20% of the population that was Sunni Muslim. Was this a regional or factional insurgency? Probably. We built that estimate on only 28 cases (because, you know, research takes time). In those cases that were based upon a central political idea, the insurgents won 75% of the time. In those cases that were based upon a limited political idea, the insurgents did not win in any case. This is a big, and very noticeable, difference. It was the one bright spot in my briefings (as people weren’t too excited about my conclusions that we would lose 5,000+ and it would take 10+ years…as that was not what was being promised by our political leaders in 2004).
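The comparison described above boils down to a cross-tabulation of outcome by insurgent cause. The sketch below is only a schematic of that calculation: the category labels follow the chapters cited above, but the handful of records are invented placeholders, not cases from our actual database.

```python
# Schematic of the outcome-by-cause comparison. The records are invented
# placeholders, not cases from the TDI insurgency database.
from collections import defaultdict

# (insurgent cause, insurgents won?) -- categories follow the text above
cases = [
    ("central political idea", True),
    ("central political idea", True),
    ("central political idea", False),
    ("overarching idea", True),
    ("overarching idea", False),
    ("limited/regional or factional", False),
    ("limited/regional or factional", False),
]

wins = defaultdict(int)
totals = defaultdict(int)
for cause, won in cases:
    totals[cause] += 1
    wins[cause] += won  # True counts as 1, False as 0

for cause in totals:
    print(f"{cause}: insurgents won {wins[cause] / totals[cause]:.0%} "
          f"of {totals[cause]} cases")
```

With the actual 28-case (and later 83-case) data behind it, this is the kind of tabulation that produced the 75 percent versus zero percent contrast noted above.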

The challenge is sorting out which applies to Afghanistan. There is no question that when they were fighting the Soviet Union, it was based upon a central political idea (nationalism). The question is, what is this insurgency based upon?

Part of the problem in sorting out what is happening in Afghanistan is that the country’s demographics are very complex. For example, 42% of the population is Pashtun, 33% is Tajik, 9% is Hazara (who are usually Shiite Muslims), 9% is Uzbek, 4% Aimak, 3% Turkmen, 2% Baloch and 4% others (source: World Factbook, 2013 estimate, courtesy of Wikipedia).

Language is a little simpler, with 80% speaking Dari (Persian or Farsi), 47% speaking Pashto (the native tongue of the Pashtuns), and 5% speaking English. The percentages sum to more than 100 because many Afghans are bilingual.

The country is usually considered 85-90% Sunni Muslim and 7-15% Shiite Muslim.

A 2018 population estimate for Afghanistan is 31,575,018 (pretty precise for an estimate).

The insurgents also tend to be divided into a bewildering array of groups (as was also the case when they were fighting the Soviet Union). Some of the insurgent groups are:

Taliban: These are the previous rulers of Afghanistan. They were close to Al-Qaeda.

Haqqani Network: Offshoot of the Taliban. Al-Qaeda affiliate.

Fidai Mahaz: Splinter group from the Taliban

IEHCA: Splinter group from the Taliban

HIG: Gulbuddin Hekmatyar’s group; he has been doing this since the 1980s. He signed a peace agreement with the Afghan government in 2016.

IMU: Originally an Uzbek movement.

Islamic Jihad Union (IJU): Militant Islamist organization. Split off from IMU. Al-Qaeda affiliate

ETIM: Uyghurs from China.

LeJ: anti-Shiite group

Pakistani Taliban or TTP: Primarily focused on Pakistan

Lel: Primarily focused on Pakistan

ISIL-KP: Islamic State affiliate.

 

This is a quickly cobbled together list. Those with more expertise are welcome to add to or modify it.

Wikipedia does give strengths for some of these groups. I have no idea how accurate they are:

Taliban: 60,000

Haqqani Network: 4,000-15,000

Fidai Mahaz: 8,000

IEHCA: 3,000-3,500

HIG: 1,500-2,200+

al-Qaeda: 50-100

 

So…when I was coding the more than 100 cases that we now have in our database, it was relatively easy to determine whether an insurgency was based upon a central idea or an overarching idea, or was regional or factional. There was very little debate in most cases.

On the other hand…it is a little harder to tell what the coding should be in this particular case.

Interestingly enough, I stumbled across an article last week discussing the same issue: https://nationalinterest.org/feature/taliban-and-changing-nature-pashtun-nationalism-41182