Mystics & Statistics

A blog on quantitative historical analysis hosted by The Dupuy Institute

The Venezuelan Military

Caracas, Venezuela, 5 March 2014. The Foreign Minister of Ecuador, Ricardo Patiño, takes part in events commemorating the death of Comandante Hugo Chávez Frías. Photo: Xavier Granja Cedeño / Cancillería Ecuador

Our government claims that all options are on the table in response to the situation developing in Venezuela. I gather this includes military options, which, according to news reports, the U.S. has yet to actually mobilize for. So, if military options are a possibility, what does the Venezuelan military actually look like?

First, Venezuela is not a small country. It has over 32 million people and covers almost a million square kilometers. Population-wise, this is more people than were in Vietnam in 1965, Afghanistan in 2001, or Iraq in 2003. Area-wise, it is several hundred thousand square kilometers bigger than Afghanistan, Iraq, or Vietnam.

The Venezuelan Army has 128,000 troops in six divisions. They have 192 T-72s, 84 AMX-30s, 78 Scorpion light tanks, 111+ AMX-13s, several hundred armored personnel carriers, and over 100 armored cars. They also have 48 Mi-35 Hind attack helicopters. The Venezuelan Air Force has 10+ F-16s and 23 Sukhoi Su-30s. The Venezuelan Navy has 60,000 personnel, including 12,000 marines. It has 2 submarines, 3 missile frigates, 3 corvettes, 10 large patrol boats and gunboats, 19 smaller patrol boats, and 4 LSTs (landing ship, tank). Added to that is a National Guard with police functions of around 70,000 troops. They have at least 191 (and eventually up to 656) of the white Chinese-built APCs that were running over people a couple of days ago (see picture). There is also a National Militia and a Presidential Honor Guard brigade. So we are looking at 258,000+ people under arms. All data is from Wikipedia.

Added to that, the source of the power and popular support of Chavez (Maduro's predecessor) was the military. He was a career military officer for 17 years and was a lieutenant colonel when he attempted two violent coups in 1992. To date, the government of Maduro has maintained the support of the military. This is probably the key to his ability to hold onto power.

Now, retired General Jack Keane recently discussed three military options: 1) move forces to Colombia and threaten, 2) move a coalition of forces (Colombia and Brazil) into Venezuela to provide humanitarian aid, and 3) invade with the purpose of conducting regime change. See: Keane Interview

I suspect that any form of direct intervention, like what we did in Vietnam, Afghanistan, and Iraq, is not being seriously considered. So, one wonders what other military options the United States is considering, if any.

The Battle of Prokhorovka book is published

Arriving at my door today (this Monday) was my new book The Battle of Prokhorovka, published by Stackpole Books. It is based upon my original mega-book but is primarily focused on the operations leading up to and including the Battle of Prokhorovka.

It can be obtained from Stackpole at: Stackpole Books

Or from Amazon.com at: Buy from Amazon

Long Protests

A memorial to the Tiananmen Square protests in the Polish city of Wroclaw

We are looking at a rather extended series of protests in Venezuela now. Sometimes the successful street protests or people power protests that overthrow governments are fairly brief and sudden. For example, the street protests that ended the attempted coup of August 1991, saved Boris Yeltsin as president of Russia, and eventually resulted in the dissolution of the 74-year-old Soviet Union lasted only 3 days and resulted in only 3 deaths. Many of the other people power protests in Eastern Europe in 1989-1991 were also brief and not very bloody.

But often these things last a little longer, with a lot more blood shed. For example, the Romanian protests of 1989 lasted 12 days and involved considerable violence, with snipers firing on the protesting crowds as foreign (Libyan) soldiers tried to protect the regime. When it was done, 689 to 1,290 people were dead, but the government was overthrown (and executed). The more recent “successful” street protests that overthrew the 29-year Egyptian government of Mubarak in 2011 lasted 17 days. Some 846 people died in the violence during the protests. One of the more extended efforts, conducted in the freezing winter of Ukraine and also under sniper fire, was the Euromaidan protests of 2013/2014, which lasted a little more than three months. When it was done, the government of Yanukovych was overthrown (for a second time), but at a cost of 104-780 people’s lives and the loss of territory due to political protests and seizure by Russia. On the other hand, there are the Tiananmen Square protests of 1989, which went on for about a month and a half before the government sent in the tanks. This failed protest cost at least 1,045 lives, and some claim thousands.

Now, we have never done a survey of people power protests and attempts to remove governments by protest. This would be useful. I do not know whether longer protests have a higher or lower success rate than shorter ones. Right now we are looking at the most recent round of protests in Venezuela, which started on 10 January 2019 and have now gone on for four-plus months. One could make the claim that the protests started in 2017 or 2014. They have also been bloody, with at least 107 people killed in 2019.

The question is, as these protests extend, does this mean that Maduro has a greater chance of hanging on to power? This may be the lesson of Syria, which started as a series of protests in March 2011 that then morphed into a bloody civil war (over 200,000 dead) that is still going on today.

I would be sorely tempted to assemble a data base of people power protests since WWII (which is not a small effort) and then see if I could find some patterns there (like we did in our insurgency studies), including success rate, duration, size, and the reasons for successful versus unsuccessful protests.
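
If such a data base ever gets assembled, the first-pass analysis itself is straightforward. Below is a minimal sketch in Python of what the structure and an initial pattern check might look like. The field names are my own, the duration and death figures are rough values taken from the cases cited above (low-end estimates where a range was given), and the success coding and 30-day split are placeholder assumptions for illustration only.

```python
# Hypothetical sketch of a "people power" protest data base and a first-pass
# pattern check. Field names are illustrative; the figures below are rough
# values drawn from the cases cited in this post, not vetted TDI data.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Protest:
    name: str
    year: int
    duration_days: int   # approximate length of the sustained protest period
    deaths: int          # reported deaths (low-end estimate where a range is given)
    succeeded: bool      # did the targeted government fall?

protests = [
    Protest("Moscow coup protests", 1991, 3, 3, True),
    Protest("Romanian Revolution", 1989, 12, 689, True),
    Protest("Egyptian Revolution", 2011, 17, 846, True),
    Protest("Euromaidan", 2014, 93, 104, True),
    Protest("Tiananmen Square", 1989, 45, 1045, False),
]

def success_rate(subset):
    return sum(p.succeeded for p in subset) / len(subset)

# Crude split on duration to ask the question posed above: do longer protests
# succeed more or less often than shorter ones? A real analysis would need far
# more cases, plus size and the other variables mentioned in the post.
short = [p for p in protests if p.duration_days <= 30]
long_ = [p for p in protests if p.duration_days > 30]
for label, subset in (("<= 30 days", short), ("> 30 days", long_)):
    print(f"{label}: n={len(subset)}, success rate={success_rate(subset):.0%}, "
          f"mean deaths={mean(p.deaths for p in subset):.0f}")
```

With only a handful of cases this proves nothing, of course; the point is that once several hundred cases are coded this way, the success rate, duration, size, and casualty patterns fall out of a few lines of analysis.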

Million Dollar Books

Most of our work at The Dupuy Institute involved contracts from the U.S. Government. These were often six-digit efforts. For example, the Kursk Data Base was funded for three years (1993-1996) and involved a dozen people. The Ardennes Campaign Simulation Data Base (ACSDB) was actually a larger effort (1987-1990). Our various combat databases, like the DLEDB, BODB, and BaDB, were created by us independent of any contractual effort. They were originally based upon the LWDB (which became CHASE), the work we did on Kursk and the Ardennes, the engagements we added because of our Urban Warfare studies, our Enemy Prisoner of War Capture Rates studies, our Situational Awareness study, our internal validation efforts, several modeling-related contracts from Boeing, etc. All of these were expanded and modified bit by bit as a result of a series of contracts from different sources. So, certainly, over time hundreds of thousands of dollars have been spent on each of these efforts, involving the work of a half-dozen or more people.

So, when I sit down to write a book like Kursk: The Battle of Prokhorovka (based on the Kursk Data Base), America’s Modern Wars (based on our insurgency studies), or War by Numbers (which used our combat databases and significant parts of our various studies), these are books developed from an extensive collection of existing work. Certainly hundreds of thousands of dollars and the work of at least 6 to 12 people went into the studies and analysis that preceded these books. In some cases, like our insurgency studies, it was clearly more than a million dollars.

This is a unique situation: to be able to write a book based upon a million dollars of research and analysis. It is something that I could never have done as a single scholar or a professor or a teacher somewhere. It is not work I could have done working for the U.S. government. These are not books that I could have written based only upon my own work and research.

In many respects, this is what needs to be the norm in the industry. Research and analysis efforts need to be properly funded and conducted by teams of people. There is a limit to what a single scholar, working in isolation, can do. Being with The Dupuy Institute allowed me to conduct research and analysis above and beyond anything I could have done on my own.

Summation of our Validation Posts

This extended series of posts about validation of combat models was originally started by Shawn Woodford’s post on future modeling efforts and the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

This post apparently irked some people at TRADOC and they wrote an article in the December issue of the Phalanx referencing his post and criticizing it. This resulted in the following seven responses from me:

Engaging the Phalanx

Validation

Validating Attrition

Physics-based Aspects of Combat

Historical Demonstrations?

SMEs

Engaging the Phalanx (part 7 of 7)

This was probably overkill… but guys who write 1,662-page books sometimes tend to be a little wordy.

While it is very important to identify a problem, it is also helpful to show the way forward. Therefore, I decided to discuss what data bases were available for validation. After all, I would like to see the modeling and simulation efforts move forward (and right now, they seem to be moving backward). This led to the following nine posts:

Validation Data Bases Available (Ardennes)

Validation Data Bases Available (Kursk)

The Use of the Two Campaign Data Bases

The Battle of Britain Data Base

Battles versus Campaigns (for Validation)

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

Other TDI Data Bases

Other Validation Data Bases

There were also a few other validation issues that had come to mind while I was writing these blog posts, so this led to the following series of three posts:

Face Validation

Validation by Use

Do Training Models Need Validation?

Finally, there were a few other related posts scattered through this rather extended diatribe. They include the following six posts:

Paul Davis (RAND) on Bugaboos

Diddlysquat

TDI Friday Read: Engaging The Phalanx

Combat Adjudication

China and Russia Defeats the USA

Building a Wargamer

That kind of ends this discussion on validation. It kept me busy for a while. Not sure if you were entertained or informed by it. It is time for me to move on to another subject, though I have not yet figured out what that will be.

Dupuy’s Verities: The Inefficiency of Combat

The “Mud March” of the Union Army of the Potomac, January 1863.

The twelfth of Trevor Dupuy’s Timeless Verities of Combat is:

Combat activities are always slower, less productive, and less efficient than anticipated.

From Understanding War (1987):

This is the phenomenon that Clausewitz called “friction in war.” Friction is largely due to the disruptive, suppressive, and dispersal effects of firepower upon an aggregation of people. This pace of actual combat operations will be much slower than the progress of field tests and training exercises, even highly realistic ones. Tests and exercises are not truly realistic portrayals of combat, because they lack the element of fear in a lethal environment, present only in real combat. Allowances must be made in planning and execution for the effects of friction, including mistakes, breakdowns, and confusion.

While Clausewitz asserted that the effects of friction on the battlefield could not be measured because they were largely due to chance, Dupuy believed that its influence could, in fact, be gauged and quantified. He identified at least two distinct combat phenomena he thought reflected measurable effects of friction: the differences in casualty rates between large and small forces, and the diminishing returns from adding extra combat power beyond a certain point in battle. He also believed much more research would be necessary to fully understand and account for this.

Dupuy was skeptical of the accuracy of combat models that failed to account for this interaction between operational and human factors on the battlefield. He was particularly doubtful about approaches that started by calculating the outcomes of combat between individual small units or weapons platforms based on the Lanchester equations or “physics-based” estimates, then used these as inputs for brigade- and division-level battles, the results of which were in turn used as the basis for determining the consequences of theater-level campaigns. He thought that such models, known as “bottom up,” hierarchical, or aggregated concepts (and the prevailing approach to campaign combat modeling in the U.S.), would be incapable of accurately capturing and simulating the effects of friction.

Building a Wargamer

Interesting November 2018 article by Elizabeth Bartels of RAND. It is on the War on the Rocks website. Worth reading: Building a Pipeline of Wargaming Talent

Let me highlight a few points:

  1. “On issues ranging from potential conflicts with Russia to the future of transportation and logistics, senior leaders have increasingly turned to wargames to imagine potential futures.”
  2. “The path to becoming a gamer today is modeled on the careers of the last generation of gamers — most often members of the military or defense analysts with strong roots in the hobby gaming community of the 1960s and 1970s.”
    1. My question: Should someone at MORS (Military Operations Research Society) nominate Charles S. Roberts and James F. Dunnigan for the Vance R. Wanner or the Clayton J. Thomas awards? (see: https://www.mors.org/Recognition).
  3. One notes that there is no discussion of the “Base of Sand” problem.
  4. One notes that there is no discussion of VV&A (Verification, Validation, and Accreditation).
  5. The picture heading her article is of a hex board overlaid by acetate.

Do Training Models Need Validation?

Do we need to validate training models? The argument is that as the model is being used for training (vice analysis), it does not require the rigorous validation that an analytical model would require. In practice, I gather this means they are not validated. It is an argument I encountered after 1997. As such, it is not addressed in my letters to TRADOC in 1996: See http://www.dupuyinstitute.org/pdf/v1n4.pdf

Over time, the modeling and simulation industry has shifted from using models for analysis to using models for training. The use of models for training has exploded, and these efforts certainly employ a large number of software coders. The question is: if the analytical core of these models has not been validated, and in some cases is known to have problems, then what are the models teaching people? To date, I am not aware of any training models that have been validated.

Let us consider the case of JICM. The core of the model’s attrition calculation was Situational Force Scoring (SFS). Its attrition calculator for ground combat is based upon a version of the 3-to-1 rule, comparing force ratios to exchange ratios. This is discussed in some depth in my book War by Numbers, Chapter 9, Exchange Ratios. To quote from page 76:

If the RAND version of the 3 to 1 rule is correct, then the data should show a 3 to 1 force ratio and a 3 to 1 casualty exchange ratio. However, there is only one data point that comes close to this out of the 243 points we examined.

That was 243 battles from 1600-1900, drawn from our Battles Data Base (BaDB). We also tested it against our Division Level Engagement Data Base (DLEDB), covering 1904-1991, with the same result. To quote from page 78 of my book:

In the case of the RAND version of the 3 to 1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional exchange ratio) that RAND postulates. In fact it almost looks like the data conspire to leave a noticeable hole at that point.
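
As a rough illustration of what this kind of screening looks like in practice, here is a minimal sketch assuming a simple engagement record with attacker and defender strengths and losses. The example engagement and the plus-or-minus 10 percent tolerance are placeholders of my own, not the actual data or criteria used in War by Numbers.

```python
# Minimal sketch of the crossover test described above: count engagements that
# sit near both a 3-to-1 force ratio and a 3-to-1 casualty exchange ratio
# (attacker losses / defender losses). The record below and the 10% tolerance
# are illustrative assumptions, not the data or criteria used in the book.
from dataclasses import dataclass

@dataclass
class Engagement:
    name: str
    attacker_strength: float
    defender_strength: float
    attacker_losses: float
    defender_losses: float

    @property
    def force_ratio(self) -> float:
        return self.attacker_strength / self.defender_strength

    @property
    def exchange_ratio(self) -> float:
        return self.attacker_losses / self.defender_losses

def near(value: float, target: float, tolerance: float = 0.10) -> bool:
    """True if value is within +/- tolerance (as a fraction) of target."""
    return abs(value - target) <= tolerance * target

# In practice these records would be loaded from a data base such as BaDB or DLEDB.
engagements = [
    Engagement("hypothetical battle", 30000, 10000, 4500, 1600),
]

crossover = [e for e in engagements
             if near(e.force_ratio, 3.0) and near(e.exchange_ratio, 3.0)]
print(f"{len(crossover)} of {len(engagements)} engagements fall near the "
      f"3-to-1 force ratio / 3-to-1 exchange ratio crossover point.")
```

Something like this screen, run over the actual BaDB and DLEDB engagements, is what lies behind the one-in-243 and one-in-628 results quoted above.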

So, does this create negative learning? If ground operations are modeled such that an attacker ends up losing 3 times as many troops as the defender when attacking at 3-to-1 odds, does this mean the model is training people not to attack below those odds and, in fact, to wait until they have much more favorable odds? The model was/is (I haven’t checked recently) being used at the U.S. Army War College. This is the advanced education institution that most promotable colonels attend before advancing to become general officers. Is such a model teaching them incorrect relationships, force ratios, and combat requirements?

You fight as you train. If we are using models to help train people, then it is certainly valid to ask what those models are doing. Are they properly training our soldiers and future commanders? How do we know they are doing this? Have they been validated?