
Measuring The Effects Of Combat In Cities, Phase I

“Catalina Kid,” an M4 medium tank of Company C, 745th Tank Battalion, U.S. Army, drives through the entrance of the Aachen-Rothe Erde railroad station during the fighting around the city viaduct on Oct. 20, 1944. [Courtesy of First Division Museum/Daily Herald]

In 2002, TDI submitted a report to the U.S. Army Center for Army Analysis (CAA) on the first phase of a study examining the effects of combat in cities, or what was then called “military operations on urbanized terrain,” or MOUT. This first phase of a series of studies on urban warfare focused on the impact of urban terrain on division-level engagements and army-level operations, based on data drawn from TDI’s DuWar database suite.

This included engagements in France during 1944, among them the Channel and Brittany port cities of Brest, Boulogne, Le Havre, Calais, and Cherbourg, as well as Paris, and the extended series of battles in and around Aachen in 1944. These were then compared to data on fighting in contrasting non-urban terrain in Western Europe in 1944-45.

The conclusions of Phase I of that study (pp. 85-86) were as follows:

The Effect of Urban Terrain on Outcome

The data appears to support a null hypothesis, that is, that the urban terrain had no significantly measurable influence on the outcome of battle.

The Effect of Urban Terrain on Casualties

Overall, any way the data is sectioned, the attacker casualties in the urban engagements are less than in the non-urban engagements and the casualty exchange ratio favors the attacker as well. Because of the selection of the data, there is some question whether these observations can be extended beyond this data, but it does not provide much support to the notion that urban combat is a more intense environment than non-urban combat.

The Effect of Urban Terrain on Advance Rates

It would appear that one of the primary effects of urban terrain is that it slows opposed advance rates. One can conclude that the average advance rate in urban combat should be one-half to one-third that of non-urban combat.

The Effect of Urban Terrain on Force Density

Overall, there is little evidence that combat operations in urban terrain result in a higher linear density of troops, although the data does seem to trend in that direction.

The Effect of Urban Terrain on Armor

Overall, it appears that armor losses in urban terrain are the same as, or lower than, armor losses in non-urban terrain. In some cases, armor losses appear to be significantly lower in urban than in non-urban terrain.

The Effect of Urban Terrain on Force Ratios

Urban terrain did not significantly influence the force ratio required to achieve success or effectively conduct combat operations.

The Effect of Urban Terrain on Stress in Combat

Overall, it appears that urban terrain was no more stressful a combat environment during actual combat operations than was non-urban terrain.

The Effect of Urban Terrain on Logistics

Overall, the evidence appears to be that the expenditure of artillery ammunition in urban operations was not greater than that in non-urban operations. In the two cases where exact comparisons could be made, the average expenditure rates were about one-third to one-quarter the average expenditure rates expected for an attack posture in the European Theater of Operations as a whole.

The evidence regarding the expenditure of other types of ammunition is less conclusive, but again does not appear to be significantly greater than the expenditures in non-urban terrain. Expenditures of specialized ordnance may have been higher, but the total weight expended was a minor fraction of that for all of the ammunition expended.

There is no evidence that the expenditure of other consumable items (rations, water or POL) was significantly different in urban as opposed to non-urban combat.

The Effect of Urban Combat on Time Requirements

It was impossible to draw significant conclusions from the data set as a whole. However, in the five significant urban operations that were carefully studied, the maximum length of time required to secure the urban area was twelve days in the case of Aachen, followed by six days in the case of Brest. But the other operations all required little more than a day to complete (Cherbourg, Boulogne and Calais).

However, since it was found that advance rates in urban combat were significantly reduced, it is obvious that these two effects (advance rates and time) are interrelated. It does appear that the primary impact of urban combat is to slow the tempo of operations.

This in turn leads to a hypothetical construct, in which the reduced tempo of urban operations (reduced casualties, reduced opposed advance rates and increased time) compared to non-urban operations results in two possible scenarios.

The first is if the urban area is bounded by non-urban terrain. In this case the urban area will tend to be enveloped during combat, since the pace of battle in the non-urban terrain is quicker. Thus, the urban battle becomes more a mopping-up operation, as it historically has usually been, rather than a full-fledged battle.

The alternate scenario is that created by an urban area that cannot be enveloped and must therefore be directly attacked. This may be caused by geography, as in a city on an island or peninsula, by operational requirements, as in the case of Cherbourg, Brest and the Channel Ports, or by political requirements, as in the case of Stalingrad, Suez City and Grozny.

Of course these last three cases are also those usually included as examples of combat in urban terrain that resulted in high casualty rates. However, all three of them had significant political requirements that influenced the nature, tempo and even the simple necessity of conducting the operation. And, in the case of Stalingrad and Suez City, significant geographical limitations affected the operations as well. These may well be better used to quantify the impact of political agendas on casualties, rather than to quantify the effects of urban terrain on casualties.

The effects of urban terrain at the operational level, and the effect of urban terrain on the tempo of operations, will be further addressed in Phase II of this study.

My Response To My 1997 Article

Shawn likes to post old articles from The International TNDM Newsletter up on the blog. The previous blog post was one such article, which I wrote in 1997 (he posted it under my name…although he put together the post). This is the first time I have read it since, say, 1997. A few comments:

  1. In fact, we did go back and systematically review and correct all the Italian engagements. This was primarily done by Richard Anderson from German and UK records. All the UK engagements were revised, as were many of the other Italian Campaign records. In fact, we ended up revising at least half of the WWII engagements in the Land Warfare Data Base (LWDB).
  2. We did greatly expand our collection of data, to over 1,200 engagements, including 752 in a division-level engagement database. Basically we doubled the size of the database (and placed it in Access).
  3. Using this more powerful data collection, I then re-shot the analysis of combat effectiveness. I did not use any modeling structure, but simply used basic statistics. This effort again showed a performance difference in combat in Italy between the Germans, the Americans and the British. This is discussed in War by Numbers, pages 19-31.
  4. We did actually re-validate the TNDM. The results of this validation are published in War by Numbers, pages 299-324. They were separately validated at corps-level (WWII), division-level (WWII) and at Battalion-level (WWI, WWII and post-WWII).
  5. War by Numbers also includes a detailed discussion of differences in casualty reporting between nations (pages 202-205) and between services (pages 193-202).
  6. We have never done an analysis of the value of terrain using our larger more robust databases, although this is on my short-list of things to do. This is expected to be part of War by Numbers II, if I get around to writing it.
  7. We have done no significant re-design of the TNDM.

Anyhow, that is some of what we have been doing in the intervening 20 years since I wrote that article.

U.S. Army Force Ratios

People do send me some damn interesting stuff. Someone just sent me a page clipped from U.S. Army FM 3-0 Operations, dated 6 October 2017. There is a discussion in Chapter 7 on “penetration.” This brief discussion, in paragraph 7-115, states in part:

“7-115. A penetration is a form of maneuver in which an attacking force seeks to rupture enemy defenses on a narrow front to disrupt the defensive system (FM 3-90-1)…. The First U.S. Army’s Operation Cobra (the breakout from the Normandy lodgment in July 1944) is a classic example of a penetration. Figure 7-10 illustrates potential correlation of forces or combat power for a penetration….”

This is figure 7-10:

So:

  1. Corps shaping operations: 3:1
  2. Corps decisive operations: 9:1
    1. Lead battalion: 18:1

Now, in contrast, let me pull some material from War by Numbers:

From page 10:

European Theater of Operations (ETO) Data, 1944

Force Ratio                   Result                    Percent Failure   Number of cases
0.55 to 1.01-to-1.00          Attack Fails                   100%                 5
1.15 to 1.88-to-1.00          Attack usually succeeds         21%                48
1.95 to 2.56-to-1.00          Attack usually succeeds         10%                21
2.71-to-1.00 and higher       Attacker Advances                0%                42

Note that these are division-level engagements. I guess I could assemble the same data for corps-level engagements, but I don’t think it would look much different.
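As an aside, the mechanics of building such a table are trivial; the real work is in assembling and vetting the engagement records. Below is a minimal sketch of the binning step only, using made-up engagement records and band boundaries that merely echo the table above (nothing here is drawn from TDI’s databases):

```python
# Minimal sketch: bin division-level engagements by force ratio and compute
# the percent of attacks that failed in each band. The records and band
# boundaries below are made up for illustration.

# Each engagement: (attacker-to-defender force ratio, attack succeeded?)
engagements = [
    (0.80, False), (1.30, True), (1.45, False), (1.70, True),
    (2.10, True), (2.40, True), (3.00, True), (4.50, True),
]

# Bands that roughly echo the table above.
bands = [(0.00, 1.05), (1.05, 1.95), (1.95, 2.70), (2.70, float("inf"))]

for low, high in bands:
    outcomes = [ok for ratio, ok in engagements if low <= ratio < high]
    if outcomes:
        percent_failure = 100.0 * outcomes.count(False) / len(outcomes)
        print(f"{low:.2f} to {high:.2f}-to-1: "
              f"{percent_failure:.0f}% failure over {len(outcomes)} cases")
```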

From page 210:

Force Ratio             Cases    Terrain     Result
1.18 to 1.29-to-1         4      Nonurban    Defender penetrated
1.51 to 1.64-to-1         3      Nonurban    Defender penetrated
2.01 to 2.64-to-1         2      Nonurban    Defender penetrated
3.03 to 4.28-to-1         2      Nonurban    Defender penetrated
4.16 to 4.78-to-1         2      Urban       Defender penetrated
6.98 to 8.20-to-1         2      Nonurban    Defender penetrated
6.46 to 11.96-to-1        2      Urban       Defender penetrated

These are also division-level engagements from the ETO. One will note that out of 17 cases where the defender was penetrated, only once was the force ratio as high as 9 to 1. The mean force ratio for these 17 cases is 3.77 and the median force ratio is 2.64.

Now the other relevant tables in this book are in Chapter 8: Outcome of Battles (pages 60-71). There I have a set of tables looking at the loss rates based upon one of six outcomes. Outcome V is defender penetrated. Unfortunately, as the purpose of the project was to determine prisoner of war capture rates, we did not bother to calculate the average force ratio for each outcome. But, knowing the database well, the average force ratio for defender penetrated results may be less than 3-to-1 and is certainly less than 9-to-1. Maybe I will take a few days at some point and put together a force ratio by outcome table.
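For what it is worth, once the force ratio and outcome fields are in a database, that “force ratio by outcome” table is only a few lines of code; the effort is in the underlying data. A minimal sketch follows, using hypothetical engagement records rather than our actual database (only Outcome V, defender penetrated, is named above; the other labels are placeholders):

```python
# Minimal sketch of a "force ratio by outcome" table. Engagement records and
# most outcome labels are placeholders; only Outcome V (defender penetrated)
# is taken from the text above.
from collections import defaultdict
from statistics import mean, median

# Each engagement: (outcome code, attacker-to-defender force ratio)
engagements = [
    ("I (placeholder)", 1.40), ("III (placeholder)", 0.90),
    ("V (defender penetrated)", 1.60), ("V (defender penetrated)", 2.60),
    ("V (defender penetrated)", 4.20), ("VI (placeholder)", 3.10),
]

by_outcome = defaultdict(list)
for outcome, ratio in engagements:
    by_outcome[outcome].append(ratio)

for outcome, ratios in sorted(by_outcome.items()):
    print(f"Outcome {outcome:<28} n={len(ratios):>2}  "
          f"mean={mean(ratios):.2f}  median={median(ratios):.2f}")
```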

Now, the source of the FM 3-0 data is not known to us and is not referenced in the manual. Why they don’t provide such a reference is a mystery to me, as this has been an issue before: on more than one occasion data has appeared in Army manuals that we could neither confirm nor check, and for which we could never find the source. I have not looked at the operation in depth, but I don’t doubt that at some point during Cobra they had a 9:1 force ratio and achieved a penetration. But…this is different from leaving the impression that a 9:1 force ratio is needed to achieve a penetration. I do not know if that was the author’s intent, but it is something that the casual reader might infer. This probably needs to be clarified.

Response 3 (Breakpoints)

This is in response to a long comment by Clinton Reilly about Breakpoints (Forced Changes in Posture) on this thread:

Breakpoints in U.S. Army Doctrine

Reilly starts with a very nice statement of the issue:

Clearly breakpoints are crucial when modelling battlefield combat. I have read extensively about it using mostly first hand accounts of battles rather than high level summaries. Some of the major factors causing it appear to be loss of leadership (e.g. Harald’s death at Hastings), loss of belief in the units capacity to achieve its objectives (e.g. the retreat of the Old Guard at Waterloo, surprise often figured in Mongol successes, over confidence resulting in impetuous attacks which fail dramatically (e.g. French attacks at Agincourt and Crecy), loss of control over the troops (again Crecy and Agincourt) are some of the main ones I can think of off hand.

The break-point crisis seems to occur against a background of confusion, disorder, mounting casualties, increasing fatigue and loss of morale. Casualties are part of the background but not usually the actual break point itself.

He then states:

Perhaps a way forward in the short term is to review a number of first hand battle accounts (I am sure you can think of many) and calculate the percentage of times these factors and others appear as breakpoints in the literature.

This has been done. In effect this is what Robert McQuie did in his article and what was the basis for the DMSI breakpoints study.

Battle Outcomes: Casualty Rates As a Measure of Defeat

Mr. Reilly then concludes:

Why wait for the military to do something? You will die of old age before that happens!

That is distinctly possible. If this really was a simple issue that one person working for a year could produce a nice definitive answer for…..it would have already been done !!!

Let us look at the 1988 Breakpoints study. There was some effort leading up to that point. Trevor Dupuy and DMSI had already looked into the issue. This included developing a database of engagements (the Land Warfare Data Base, or LWDB) and using it to examine the nature of breakpoints. The McQuie article was developed from this database, and his article was closely coordinated with Trevor Dupuy. This was part of the effort that led the U.S. Army’s Concepts Analysis Agency (CAA) to issue an RFP (Request for Proposal). It was competitive. I wrote the proposal that won the contract award, but the contract was given to Dr. Janice Fain to lead. My proposal was more quantitative in approach than what she actually did. Her effort was more of an intellectual exploration of the issue. I gather this was done with the assumption that there would be a follow-on contract (there never was). Now, up until that point at least a man-year of effort had been expended, and if you count the time to develop the databases used, it was several man-years.

Now the Breakpoints study was headed up by Dr. Janice B. Fain, who worked on it for the better part of a year. Trevor N. Dupuy worked on it part-time. Gay M. Hammerman conducted the interviews with the veterans. Richard C. Anderson researched and created an additional 24 engagements that had clear breakpoints in them for the study (that is DMSI report 117B). Charles F. Hawkins was involved in analyzing the engagements from the LWDB. There were several other people also involved to some extent. In all, 39 veterans were interviewed for this effort. Many were brought into the office to talk about their experiences (that was truly entertaining). There were also a half-dozen other staff members and consultants involved in the effort, including Lt. Col. James T. Price (USA, ret.), Dr. David Segal (sociologist), Dr. Abraham Wolf (a research psychologist), Dr. Peter Shapiro (social psychology) and Col. John R. Brinkerhoff (USA, ret.). There were consultant fees, travel costs and other expenses related to that. So, the entire effort took at least three “man-years” of effort. This was what was needed just to get to the point where we are able to take the next step.

This is not something that a single scholar can do. That is why funding is needed.

As to dying of old age before that happens…that may very well be the case. Right now, I am working on two books, one of them under contract. I sort of need to finish those up before I look at breakpoints again. After that, I will decide whether to work on a follow-on to America’s Modern Wars (called Future American Wars) or a follow-on to War by Numbers (called War by Numbers II…being the creative guy that I am). Of course, neither of these books is selling well…so perhaps my time would be better spent writing another Kursk book, or on any number of other interesting projects on my plate. Anyhow, if I do War by Numbers II, then I do plan on investing several chapters in addressing breakpoints. This would include using the 1,000+ cases that now populate our combat databases to do some analysis. This is going to take some time. So…I may get to it next year or the year after that, but I may not. If someone really needs the issue addressed, they really need to contract for it.

C-WAM 4 (Breakpoints)

A breakpoint, or involuntary change in posture, is an essential part of combat modeling. There is a breakpoint methodology in C-WAM. According to slide 18 and rule book section 5.7.2, a ground unit below 50% strength can only defend, and it is removed from play below 30% strength. I gather this serves as the breakpoint for a brigade.
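For illustration, this is how I read that rule. It is a minimal sketch of the threshold logic only, not C-WAM’s actual implementation; the 50% and 30% figures are simply the ones quoted above:

```python
# Minimal sketch of the strength-threshold rule described above (my reading of
# slide 18 / rule book section 5.7.2, not C-WAM's actual code).

def unit_posture(current_strength: float, full_strength: float) -> str:
    """Return the posture allowed to a ground unit at its current strength."""
    fraction = current_strength / full_strength
    if fraction < 0.30:          # below 30% strength: unit is removed from play
        return "removed"
    if fraction < 0.50:          # below 50% strength: unit may only defend
        return "defend only"
    return "attack or defend"    # otherwise, no posture restriction

print(unit_posture(45, 100))     # -> defend only
print(unit_posture(25, 100))     # -> removed
```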

C-WAM 2

Let me just quote from Chapter 18 (Modeling Warfare) of my book War by Numbers: Understanding Conventional Combat (pages 288-289):

The original breakpoints study was done in 1954 by Dorothy Clark of ORO [which can be found here].[1] It examined forty-three battalion-level engagements where the units “broke,” including measuring the percentage of losses at the time of the break. Clark correctly determined that casualties were probably not the primary cause of the breakpoint and also declared the need to look at more data. Obviously, forty-three cases of highly variable social science-type data with a large number of variables influencing them are not enough for any form of definitive study. Furthermore, she divided the breakpoints into three categories, resulting in one category based upon only nine observations. Also, as should have been obvious, this data would apply only to battalion-level combat. Clark concluded “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.”[2]

Yet, even with her clear rejection of a percent loss formulation for breakpoints, the 20 to 40 percent casualty breakpoint figures remained in use by the training and combat modeling community. Charts in the 1964 Maneuver Control field manual showed a curve with the probability of unit break based on percentage of combat casualties.[3] Once a defending unit reached around 40 percent casualties, the chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of it halting (type I break) approached 100% and the chance of it breaking (type II break) reached 40 percent. These data were for battalion-level combat. Because they were also applied to combat models, many models established a breakpoint of around 30 or 40 percent casualties for units of any size (and often applied to division-sized units).

To date, we have absolutely no idea where these rule-of-thumb formulations came from and despair of ever discovering their source. These formulations persist despite the fact that in fifteen (35%) of the cases in Clark’s study, the battalions had suffered more than 40 percent casualties before they broke. Furthermore, at the division-level in World War II, only two U.S. Army divisions (and there were ninety-one committed to combat) ever suffered more than 30% casualties in a week![4] Yet, there were many forced changes in combat posture by these divisions well below that casualty threshold.

The next breakpoints study occurred in 1988.[5] There was absolutely nothing of any significance (meaning providing any form of quantitative measurement) in the intervening thirty-five years, yet there were dozens of models in use that offered a breakpoint methodology. The 1988 study was inconclusive, and since then nothing further has been done.[6]

This seemingly extreme case is a fairly typical example. A specific combat phenomenon was studied only twice in the last fifty years, both times with inconclusive results, yet this phenomenon is incorporated in most combat models. Sadly, similar examples can be pulled for virtually each and every phenomena of combat being modeled. This failure to adequately examine basic combat phenomena is a problem independent of actual combat modeling methodology.
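As an aside, the kind of casualty-based lookup described in the excerpt above is easy to sketch. The anchor points below are only the approximate figures quoted from the 1964 manual (defender break probability approaching 100% near 40 percent casualties; attacker halt probability near 100% at 20 percent, break probability around 40 percent); the actual FM 105-5 curves are not reproduced here, and the shapes between the anchors are assumed to be linear.

```python
# Rough sketch of a casualty-based break-probability lookup of the kind
# described above. Anchor points are only the approximate figures quoted from
# the 1964 manual; the curve shapes between anchors are assumed (linear).

def interpolate(casualty_pct, points):
    """Piecewise-linear interpolation through (casualty %, probability) points."""
    if casualty_pct <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if casualty_pct <= x1:
            return y0 + (y1 - y0) * (casualty_pct - x0) / (x1 - x0)
    return points[-1][1]

defender_break = [(0, 0.0), (40, 1.0)]             # break ~certain near 40% casualties
attacker_halt  = [(0, 0.0), (20, 1.0)]             # type I break (halt) ~certain near 20%
attacker_break = [(0, 0.0), (20, 0.4), (40, 0.4)]  # type II break reaches ~40% (flat after, assumed)

print(interpolate(30, defender_break))   # 0.75
print(interpolate(15, attacker_halt))    # 0.75
```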

Footnotes:

[1] Dorothy K. Clark, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, Johns Hopkins University, 1954).

[2] Ibid., page 34.

[3] Headquarters, Department of the Army, FM 105-5 Maneuver Control (Washington, D.C., December, 1967), pages 128-133.

[4] The two exceptions included the U.S. 106th Infantry Division in December 1944, which incidentally continued fighting in the days after suffering more than 40 percent losses, and the Philippine Division upon its surrender in Bataan on 9 April 1942 suffered 100% losses in one day in addition to very heavy losses in the days leading up to its surrender.

[5] This was HERO Report number 117, Forced Changes of Combat Posture (Breakpoints) (Historical Evaluation and Research Organization, Fairfax, VA., 1988). The intervening years between 1954 and 1988 were not entirely quiet. See HERO Report number 112, Defeat Criteria Seminar, Seminar Papers on the Evaluation of the Criteria for Defeat in Battle (Historical Evaluation and Research Organization, Fairfax, VA., 12 June 1987) and the significant article by Robert McQuie, “Battle Outcomes: Casualty Rates as a Measure of Defeat” in Army, issue 37 (November 1987). Some of the results of the 1988 study were summarized in the book by Trevor N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

[6] The 1988 study was the basis for Trevor Dupuy’s book: Col. T. N. Dupuy, Understanding Defeat: How to Recover From Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

Also see:

Battle Outcomes: Casualty Rates As a Measure of Defeat

[NOTE: Post updated to include link to Dorothy Clark’s original breakpoints study.]

Response 2 (Performance of Armies)

In an exchange with one of our readers, he raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and are the people who have done the most extensive published work on the subject. Swedish researcher Niklas Zetterling also addresses it in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in “CEV Calculations in Italy, 1943,” The International TNDM Newsletter, volume I, No. 6, pages 21-23. It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy has done is compare the performances of both overall forces and individual divisions based upon his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), and in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters in my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose to simply use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone has a disagreement with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I would recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But….what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to each other. It is not an absolute measurement that can be compared to other armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but this could only be done by assuming that combat effectiveness of the U.S. Army in WWII was the same as the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank armies from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would be possible to then determine what the different factors are that make up this difference. I do believe it would be possible to assign values or weights to those factors. I believe this would be very useful to know, in light of the potential training and organizational value of this knowledge.
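To make the distinction concrete, here is a minimal sketch of the kind of relative (not absolute) measurement described above: a casualty-exchange comparison between two armies across engagements they actually fought against each other. The records are invented for illustration, and this is a deliberate simplification of the statistical work in War by Numbers, not the QJM or TNDM.

```python
# Minimal sketch of a relative (pairwise) measurement: casualty exchange
# between two armies across engagements they fought against each other.
# Records are invented; this is a simplification, not the QJM/TNDM.
from statistics import mean

# Each engagement: (Army A casualties, Army B casualties)
shared_engagements = [
    (1200, 800),
    (650, 900),
    (2100, 1500),
]

# Exchange ratio per engagement: Army A losses per Army B loss.
exchange_ratios = [a_losses / b_losses for a_losses, b_losses in shared_engagements]

print(f"Mean casualty exchange ratio (A losses per B loss): {mean(exchange_ratios):.2f}")
# A value above 1.0 suggests Army B traded casualties more favorably in these
# engagements. The figure is only meaningful for this pair; it says nothing
# about how either army would compare to a third army it never fought.
```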

Response

A fellow analyst posted an extended comment to two of our threads:

C-WAM 3

and

Military History and Validation of Combat Models

Instead of responding in the comments section, I have decided to respond with another blog post.

As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”

Yes, this is true, but I argue that this does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where having a less accurate model is more desirable than having a more accurate model.

Now, what is missing from many of the models I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, a treatment of human factors, and many other matters. Many of these things are being done in these simulations already, but are being done incorrectly. Quite simply, they do not realistically portray a range of historical or real combat examples.

He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:

The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail….”

Let’s take their list made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same with morale or level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen in the last twenty years a quantitative study done to address these issues. And what of terrain and weather? They have been around for a long time.

Army simulations have been around since the late 1950s. So at the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in the previous 40 years. I gather they have not been adequately addressed in the last 20 years either. So the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start in addressing them.

Anyhow, I have little interest in arguing these issues. My interest is in correcting them.

Reinventing the Army

Interesting article: 2018 Forecast: Can the Army Reinvent Itself

A few highlights:

  1. They are standing up the Army Futures Command this summer.
    1. Goal is to develop new weapons and new ways to use them.
    2. It has not been announced where it will be located.
  2. They currently have eight “Cross Functional Teams” already set up, led by general officers.
    1. Army Chief of Staff General Mark Milley has a list of “Big Six” modernization priorities. They are: 1) long-range missiles, 2) new armored vehicles, 3) high-speed replacements for current helicopters, 4) secure command networks, 5) anti-aircraft and missile defense, and 6) soldier equipment.
      1. There is a link for each of these in this article: https://breakingdefense.com/2017/12/army-shifts-1b-in-st-plans-modernization-command-undersec-mccarthy/
    2. This effort will start making their mark “in earnest” with the 2020 budget.
      1. The 2018 and 2019 budgets have been approved. In the current political environment, it is hard to say what the 2020 budget will look like [these are my thoughts, not part of the article].
    3. The U.S. Army has approved Active Protection Systems (APS) for their tanks to shoot down incoming missiles, like Russia and Israel are using.
      1. Goal is to get a brigade of M1 Abrams tanks outfitted with Israeli-made Trophy APS systems by 2020 [why do I get the sense from the wording that this date is not going to be met].
      2. They are testing APS for Bradleys and Strykers.
        1. Also testing anti-aircraft versions of these vehicles.
        2. Also testing upgunned Strykers.
      3. The Army is building the Mobile Protected Firepower (MPF) light tank to accompany airborne troops.
        1. The RFP has been issued, with contract award expected in early 2019.
    4. The Army is the lead sponsor for the Future Vertical Lift (FVL) program to replace existing helicopters. Flight testing has started.
    5. This is all part of the Multi-Domain Battle concept.
      1. They are moving the thinkers behind the Multi-Domain Battle from the Training & Doctrine Command (TRADOC) to the Futures Command.
      2. Milley has identified Russia as the No. 1 threat. [We will note that several years ago some influential people were tagging China as the primary threat.]
      3. Still, Milley has stood up two advisor brigades [because we have wars in Afghanistan, Iraq, Syria, Niger/Mali, Somalia, Yemen, etc. that don’t seem to be going away].

Spotted In The New Books Section Of The U.S. Naval Academy Library…

Christopher A. Lawrence, War by Numbers: Understanding Conventional Combat (Lincoln, NE: Potomac Books, 2017), 390 pages, $39.95

War by Numbers assesses the nature of conventional warfare through the analysis of historical combat. Christopher A. Lawrence (President and Executive Director of The Dupuy Institute) establishes what we know about conventional combat and why we know it. By demonstrating the impact a variety of factors have on combat he moves such analysis beyond the work of Carl von Clausewitz and into modern data and interpretation.

Using vast data sets, Lawrence examines force ratios, the human factor in case studies from World War II and beyond, the combat value of superior situational awareness, and the effects of dispersion, among other elements. Lawrence challenges existing interpretations of conventional warfare and shows how such combat should be conducted in the future, simultaneously broadening our understanding of what it means to fight wars by the numbers.

The book is available in paperback directly from Potomac Books and in paperback and Kindle from Amazon.

South Korea Considering Development Of Artillery Defense System


In an article I missed on the first go-round from last October, Ankit Panda, senior editor at The Diplomat, detailed a request by the South Korean Joint Chiefs of Staff to the National Assembly Defense Committee to study the feasibility of a missile defense system to counter North Korean long-range artillery and rocket artillery capabilities.

North Korea has invested heavily in its arsenal of conventional artillery. Other than nuclear weapons, this capability likely poses the greatest threat to South Korean security, particularly given the vulnerability of the capital Seoul, a city of nearly 10 million that lies just 35 miles south of the demilitarized zone.

The artillery defense system the South Korean Joint Chiefs seek to develop is not intended to protect civilian areas, however. It would be designed to shield critical command-and-control and missile defense sites. The South Koreans have already considered and rejected buying Israel’s existing Iron Dome missile defense system, judging it inadequate to the magnitude of the threat.

As Panda pointed out, the challenges involved in developing an artillery defense system capable of effectively countering North Korean capabilities are formidable.

South Korea would need to be confident that it would be able to maintain an acceptable intercept rate against the incoming projectiles—a task that may require a prohibitively large investment in launchers and interceptors. Moreover, the battle management software required for a system like this may prove to be exceptionally complex as well. Existing missile defense systems can already have their systems overwhelmed by multiple targets.
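To put rough numbers on the scale problem Panda describes, here is a back-of-envelope sketch. Every figure in it is an assumption chosen purely for illustration; none of it reflects actual North Korean salvo sizes or the performance of any real interceptor system.

```python
# Back-of-envelope sketch. All numbers are assumptions for illustration only;
# none reflect actual North Korean salvo sizes or any real system's performance.

salvo_size = 200              # incoming rounds in one salvo (assumed)
p_intercept = 0.90            # probability an engaged round is intercepted (assumed)
interceptors_per_round = 2    # interceptors fired per engaged round (assumed doctrine)

expected_leakers = salvo_size * (1 - p_intercept)
interceptors_expended = salvo_size * interceptors_per_round

print(f"Expected leakers per salvo: {expected_leakers:.0f}")
print(f"Interceptors expended per salvo: {interceptors_expended}")
# Even at a 90% intercept rate, a 200-round salvo leaves about 20 leakers, and
# each salvo burns through hundreds of interceptors, which is why inventory
# and battle management, not just interceptor performance, drive feasibility.
```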

It is likely that there will be broader interest in South Korean progress in this area (Iron Dome is a joint effort by the Israelis and Raytheon). Chinese and Russian long-range precision fires capabilities are bulwarks of the anti-access/area denial strategies the U.S. military is currently attempting to overcome via the Third Offset Strategy and multi-domain battle initiatives.