
The Division Level Engagement Data Base (DLEDB)

The Division Level Engagement Data Base (DLEDB) is one of eight data bases that make up our DuWar suite of databases (see http://www.dupuyinstitute.org/dbases.htm). This data base, of 752 engagements, is described in depth at: http://www.dupuyinstitute.org/data/dledb.htm

It now consists of 752 engagements from 1904 to 1991. We originally created it in 2000-2001, independent of any government contracts (so as to ensure it was corporate proprietary). We then used it as an instrumental part of our Enemy Prisoner of War studies and then our three Urban Warfare studies.

Below is a list of wars/campaigns the engagements are pulled from:

Russo-Japanese War (1904-1905): 3 engagements

Balkan Wars (1912-1913): 1 engagement

World War I (1914-1918): 25 engagements

…East Prussia (1914): 1

…Gallipoli (1915): 2

…Mesopotamia (1915): 2

…1st & 2nd Artois (1915): 7

…Loos (1915): 2

…Somme (1916): 2

…Mesopotamia (1917): 1

…Palestine (1917): 2

…Palestine (1918): 1

…US engagements (1918): 5

World War II (1939-1945): 657 engagements

…Western Front: 295

……France (1940): 2

……North Africa (1941): 5

……Crete (1941): 1

……Tunisia (1943): 5

……Italian Campaign (1943-1944): 141

……France (1944): 61

……Aachen (1944): 23

……Ardennes (1944-1945): 57

…Eastern Front: 267

……Eastern Front (1943-1945): 11

……Kursk (1943): 192

……Kharkov (1943): 64

…Pacific Campaign: 95

……Manchuria (1938): 1

……Malayan Campaign (1941): 1

……Philippines (1942): 1

……Islands (1944-1945): 4

……Okinawa (1945): 27

……Manila (1945): 61

Arab-Israeli Wars (1956-1973): 51 engagements

…1956: 2

…1967: 16

…1968: 1

…1973: 32

Gulf War (1991): 15 engagements

 

Now, our revised version of the earlier Land Warfare Data Base (LWDB) of 605 engagements had more World War I engagements. But some of those engagements had over a hundred thousand men on a side, and some lasted for months. That reflected how the battles were defined at the time, but it was really not suitable for a division-level database. So we shuffled them off to something called the Large Action Data Base (LADB), where 55 engagements have sat, unused, since then. Some actions in the original LWDB were smaller than division-level. These made up the core of our battalion-level and company-level data bases.

The Italian Campaign engagements were the original core of this database. An earlier version of the data base (around the year 2000) had only 76 engagements from Italy in it. We then expanded, corrected and revised them. The database still has 40 of the original engagements, 22 were revised, and the rest (79) are new.

The original LWDB was used for parts of Trevor Dupuy’s book Understanding War. The DLEDB was a major component of my book War by Numbers.

As can be seen, it is possible to use this database for model development and/or validation. One could start by developing and testing the model against the 141 Italian Campaign engagements, and then further develop it by testing it against the 141 engagements from France, Aachen and the Battle of the Bulge. Then, to test the human factors elements of your models (which, if you are modeling warfare, I would hope you would have), one could test it against the 267 division-level engagements on the Eastern Front. Then move forward in time with the 51 engagements from the Arab-Israeli Wars and the 15 engagements from the Gulf War. There is no lack of data available for model development or model testing. It is, of course, a lot of work; and lately it seems that the industry has been more concerned about making sure their models have good graphics.
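The development-then-testing sequence described above is essentially a holdout scheme: fit the model to one set of campaigns, then check it against campaigns it never saw. A minimal sketch, with hypothetical engagement records (the DLEDB's actual fields are far richer):

```python
# A sketch of the holdout scheme described above. The campaign labels and
# record format here are illustrative stand-ins, not the DLEDB's own schema.

def split_engagements(engagements, dev_campaigns):
    """Partition engagement records into development and held-out test sets."""
    dev = [e for e in engagements if e["campaign"] in dev_campaigns]
    held_out = [e for e in engagements if e["campaign"] not in dev_campaigns]
    return dev, held_out

# Counts taken from the text: 141 Italy, 141 France/Aachen/Bulge, 267 Eastern Front.
engagements = (
    [{"campaign": "Italy 1943-44"}] * 141
    + [{"campaign": "France/Aachen/Bulge 1944-45"}] * 141
    + [{"campaign": "Eastern Front 1943-45"}] * 267
)

dev, held_out = split_engagements(engagements, {"Italy 1943-44"})
print(len(dev), len(held_out))  # 141 408
```

The point is simply that with 549 WWII division-level engagements alone, one can develop against one campaign and still have hundreds of independent engagements left for testing.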

Just to beat a dead horse, we remind you of this post that annoyed several people over at TRADOC (the U.S. Army’s Training and Doctrine Command):

Wargaming Multi-Domain Battle: The Base Of Sand Problem

Finally, it is possible to examine changes in warfare over time. This is useful if one is looking at changes in warfare in the future. The DLEDB covers 88 years of warfare. We also have the Battles Data Base (BaDB) of 243 battles from 1600-1900. It is described here: http://www.dupuyinstitute.org/data/badb.htm

Next I will describe our battalion-level and company-level databases.

Battles versus Campaigns (for Validation)

So we created three campaign databases. One of the strangest arguments I have heard against validating or testing combat models against historical data is that history provides only one outcome. So you do not know if the model is in error or if this was an unusual outcome to the historical event. Someone described it as the N=1 argument. There are lots of reasons why I am not too impressed with this argument, which I may enumerate in a later blog post. It certainly might apply to testing the model against just one battle (like the Battle of 73 Easting in 1991), but these are weeks-long campaign databases with hundreds of battles. One can test the model against those hundreds of points in addition to testing it against the overall result.

In the case of the Kursk Data Base (KDB), we have actually gone through the data base and created from it 192 division-level engagements. This covers every single combat action by every single division during the two-week offensive around Belgorod. Furthermore, I have listed each and every one of these as an "engagement sheet" in my book on Kursk. The 192 engagement sheets are half-page or page-long tabulations of the strengths and losses of all units involved in each engagement. Most sheets cover one day of battle. It took considerable work to assemble these. First one had to figure out who was opposing whom (especially as unit boundaries never match) and then work from there. So, if someone wants to test a model, model combat, or do historical analysis, one could simply assemble a database from these 192 engagements. If one wanted more details on the engagements, there are detailed breakdowns of the equipment in the Kursk Data Base and detailed descriptions of the engagements in my Kursk book. My new Prokhorovka book (release date 1 June), which covers only the part of the southern offensive around Prokhorovka from the 9th of July, has 76 of those engagement sheets. Needless to say, these Kursk engagements also make up 192 of the 752 engagements in our DLEDB (Division Level Engagement Data Base). A picture of that database is shown at the top of this post.

So, if you are conducting a validation to the campaign, take a moment and check the results for each division for each day. In the KDB there were 17 divisions on the German side, and 37 rifle divisions and 10 tank and mechanized corps (a division-sized unit) on the Soviet side. The data base covers 15 days of fighting. So…there are around 900 points of daily division-level results to check against. I draw your attention to this graph:

There are a number of these charts in Chapter 19 of my book War by Numbers. Also see:

Validating Attrition

The Ardennes database is even bigger. There was one validation done by CAA (Center for Army Analysis) of its CEM (Concepts Evaluation Model) using the Ardennes Campaign Simulation Data Base (ACSDB). They did this as an overall comparison to the campaign. So they tracked the front-line trace at the end of the battle, the total tank losses during the battle, ammunition consumption, and other measures like that. They got a fairly good result. What they did not do was go into the weeds and compare the results of the individual engagements. CEM relies on inputs from ATCAL (Attrition Calculator), which are created from COSAGE model runs. So while they tested the overall top-level model, they really did not test ATCAL or COSAGE, the models that feed into it. ATCAL and COSAGE, I gather, are still in use. In the case of the Ardennes you have 36 U.S. and UK divisions and 32 German divisions and brigades over 32 days, so over 2,000 division-days of combat. That is a lot of data points to test against.
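Both back-of-the-envelope counts above can be checked directly, assuming (as the text does) one data point per division-sized unit per day:

```python
# Rough count of daily division-level data points in each campaign database,
# using the unit counts and campaign durations quoted in the text.

kursk_units = 17 + 37 + 10  # German divisions + Soviet rifle divisions + tank/mech corps
kursk_days = 15
kursk_points = kursk_units * kursk_days
print(kursk_points)         # 960, i.e. "around 900" daily division-level results

ardennes_units = 36 + 32    # U.S./UK divisions + German divisions and brigades
ardennes_days = 32
ardennes_points = ardennes_units * ardennes_days
print(ardennes_points)      # 2176, i.e. "over 2,000 division-days of combat"
```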

Now we have not systematically gone through the ACSDB and assembled a record for every single engagement there. There would probably be more than 400 such engagements. We have assembled 57 engagements from the Battle of the Bulge for our division-level database (DLEDB). More could be done.

Finally, during our Battle of Britain Data Base effort, we recommended developing an air combat engagement database of 120 air-to-air engagements from the Battle of Britain. We did examine some additional mission-specific data for the British side derived from the "Form F" Combat Reports for the period 8-12 August 1940. This was to demonstrate the viability of developing an engagement database from the dataset. We wanted to do for air combat what we had done for division-level combat. An air-to-air engagement database would be very useful if you are developing any air campaign wargame. Unfortunately, we never did this, as the project (read: funding) ended.

As it is, we actually have three air campaign databases to work from: the Battle of Britain data base, the air component of the Kursk Data Base, and the air component of the Ardennes Campaign Simulation Data Base. There is a lot of material to work from. All it takes is a little time and effort.

I will discuss the division-level data base in more depth in my next post.

The Battle of Britain Data Base

The Battle of Britain data base came into existence at the request of OSD PA&E (Office of the Secretary of Defense, Program Analysis and Evaluation). They contacted us. They were working with LMI (Logistics Management Institute, one of a dozen FFRDCs) to develop an air combat model. They felt that the Battle of Britain would be perfect for helping to develop, test and validate their model. The effort was led by a retired Air Force colonel who had the misfortune of spending part of his career in North Vietnam.

The problem with developing any air campaign database is that, unlike the German army, the Luftwaffe actually followed its orders late in the war to destroy its records. I understand from conversations with Trevor Dupuy that the Luftwaffe records were stored in a train and had been moved to the German countryside (to get them away from the bombing and/or advancing armies). They then burned all the records there at the rail siding.

So, when HERO (Trevor Dupuy's Historical Evaluation and Research Organization) did its work on the Italian Campaign (which was funded by the Air Force), it had to find records on German air activity with the Luftwaffe liaison officers of the German armies involved. The same with Kursk, where one of the few air records we had was with the air liaison officer to the German Second Army. This was the army on the tip of the bulge that was simply holding in place during the battle. It was the only source that gave us a daily count of sorties, German losses, etc. Of the eight or so full wings from the VIII Air Corps that were involved in the battle, we had records for one group of He-111s (there were usually three groups to a wing). We did have good records from the Soviet archives. But it is hard to assemble a good picture of the German side of the battle with records from only 1/24th of the units involved. So the very limited surviving files of the Luftwaffe air liaison officers were all we had to work with for Italy and Kursk. We did not even have that for the Ardennes. Luckily the German air force simplified things by flying almost no missions until the disastrous Operation Bodenplatte on 1 January 1945. Of course, we had great records from the U.S. and the UK, but…it is hard to develop a good database without records from both sides. Therefore, one is left with few well-documented air battles anywhere for use in developing, evaluating and validating an air campaign model.

The exception is the Battle of Britain, which has been so well researched, and so extensively written about, that it is possible to assemble an accurate and detailed daily account for both sides for every day of the battle. There are also a few surviving records that can be tapped, including the personal kill records of the pilots, the aircraft loss reports of the quartermaster, and the ULTRA reports of intercepted German radio messages. Therefore, we (mostly Richard Anderson) assembled the Battle of Britain data base from British unit records and, for the German side, the surviving records and the extensive secondary sources. We had already done considerable preliminary research covering 15 August to 19 September 1940 as a result of our work on the DACM (Dupuy Air Campaign Model).

The Dupuy Air Campaign Model (DACM)

The database covered the period from 8 August to 30 September 1940. It was programmed in Access by Jay Karamales. From April to July 2004 we did a feasibility study for LMI. We were awarded a contract from OSD PA&E on 1 September to start work on the database. We sent a two-person research team to the British National Archives at Kew, London. There we examined 249 document files and copied 4,443 pages. The completed database and supporting documentation were delivered to OSD PA&E in August 2005. It was certainly the easiest of our campaign databases to do.

We do not know if OSD PA&E or LMI ever used the data base, but we think not. The database was ordered while they were still working on the model. After we delivered the database to them, we do not know what happened. We suspect the model was never completed and the effort was halted. The database has never been publicly available. PA&E became defunct in 2009 and was replaced by CAPE (Cost Assessment and Program Evaluation). We may be the only people who still have (or can find) a copy of this database.

I will provide a more detailed description of this database in a later post.

The Use of the Two Campaign Data Bases

The two large campaign data bases, the Ardennes Campaign Simulation Data Base (ACSDB) and the Kursk Data Base (KDB), were designed to be used for validation. Some of the data requirements, like the mix of personnel in each division and the types of ammunition used, were set up to match exactly the categories used in the Center for Army Analysis's (CAA) FORCEM campaign combat model. Dr. Ralph E. Johnson, the program manager for FORCEM, was also the initial contract manager for the ACSDB.

FORCEM was never completed. It was intended to be an improvement to CAA’s Concepts Evaluation Model (CEM) which dated back to the early 1970s. So far back that my father had worked with it. CAA ended up reverting back to CEM in the 1990s.

They did validate the CEM using the ACSDB. Some of their reports are here (I do not have the link to the initial report by the industrious Walt Bauman):

https://apps.dtic.mil/dtic/tr/fulltext/u2/a320463.pdf

https://apps.dtic.mil/dtic/tr/fulltext/u2/a489349.pdf

It is one of the few actual validations ever done outside of TDI's (The Dupuy Institute's) work. CEM is no longer used by CAA. The Kursk Data Base has never been used for validation. Instead, Lanchester equations have been tested against the ACSDB and KDB. They failed.

Lanchester equations have been weighed….

But the KDB became the darling of people working on their master's theses at the Naval Postgraduate School. Much of this was under the direction of Dr. Tom Lucas. Some of their reports are listed here:

http://www.dupuyinstitute.org/links.htm

Both the ACSDB and KDB had a significant air component. The air battle over just the German offensive around Belgorod, to the south of Kursk, was larger than the Battle of Britain. The Ardennes data base had 1,705 air files. The Kursk data base had 753. One record, from the old Dbase IV version of the Kursk data base, is the picture that starts this blog post. These files basically track every mission for every day, to whatever level of detail the unit records allowed (which was often lacking). The air campaign part of these data bases has never been used for any analytical purpose except our preliminary work on creating the Dupuy Air Campaign Model (DACM).

The Dupuy Air Campaign Model (DACM)

This, of course, leads into our next blog post on the Battle of Britain data base.

Validation Data Bases Available (Kursk)

The second large campaign validation database created was the Kursk Data Base (KDB), done in 1993-1996. I was also the program manager for this one, and it ran a lot smoother than the first database. Something had been learned in the process. This database involved about a dozen people, including a Russian research team led by Col. (Dr.) Fyodor Sverdlov, WWII veteran, author, and of the Frunze Military Academy; ably assisted by Col. (Dr.) Anatoli Vainer, ditto. It also involved the author Dr. Richard Harrison and, of course, Richard Anderson and Jay Karamales. Col. David Glantz helped with the initial order of battle as a consultant.

The unique aspect of the database is that we obtained access to the Soviet archives and were able to pull from them the unit records at the division, corps and army level for every Soviet unit involved. This was a degree of access and research never achieved before for an Eastern Front battle. We were not able to access the Voronezh Front files and other higher-command files, as they were still classified.

The KDB tracked the actions of all divisions and division-sized units on both sides for every day of the German offensive in the south, from 4 July 1943 to 18 July 1943. Kursk is a huge battle (the largest battle of WWII) and consists of four separate portions. This database covered only one of the four parts, but that part was similar in size to the Battle of the Bulge, and its air battle was larger than the Battle of Britain. On the German side were 17 panzer, panzer grenadier and infantry divisions, while on the Soviet side were 37 rifle divisions and 10 tank and mechanized corps. There were 9 attacking German armored divisions versus 10 Soviet tank and mechanized corps in the Belgorod offensive at Kursk. At the Battle of the Bulge there were 8 attacking (engaged) German armored divisions versus 9 U.S. armored divisions. The database design and the data tracked were almost the same as in the Ardennes Campaign Simulation Data Base (ACSDB). The stats on the data are here: http://www.dupuyinstitute.org/data/kursk.htm

The database was programmed in Dbase IV and is DOS based. Dbase IV had the advantage that it allowed text (memo) fields. Dbase III did not, so we were limited to something like 256 characters for our remarks fields. With Dbase IV, the remarks fields sometimes grew to a page or two as we explained what data were available and how they were used to assemble daily counts of strengths and losses. Sometimes they were periodic (vice daily) reports and sometimes contradictory reports. It was nice to be able to fully explain in each and every case how we analyzed the data. The Dbase IV version of the KDB is publicly available through NTIS (National Technical Information Service). The pictures in this blog post are screen shots from the Dbase IV version.
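Why the memo field mattered can be illustrated with a small sketch. The exact Dbase III character-field limit and the field names here are illustrative assumptions, not the KDB's actual schema; the point is that a fixed-width character field truncates a long sourcing note, while a memo field keeps it whole:

```python
# Hypothetical illustration of fixed-width character fields (Dbase III style)
# versus unbounded memo fields (Dbase IV style) for a remarks field.

CHAR_FIELD_LIMIT = 254  # assumed fixed-width limit, on the order the text describes

def store_remarks(remarks, memo_field):
    """Return what would survive storage in each field type."""
    if memo_field:
        return remarks                  # memo field: full explanation kept
    return remarks[:CHAR_FIELD_LIMIT]   # character field: silently truncated

note = "Strengths interpolated from periodic returns; " * 20  # a ~920-char sourcing note
print(len(store_remarks(note, memo_field=False)))  # 254
print(len(store_remarks(note, memo_field=True)))   # 920
```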

We also re-programmed the data base into Access and rather extensively and systematically updated it. This was in part because we took every single unit for every single day of the battle and assembled them into 192 different division-on-division engagements for use in our Division Level Engagement Data Base (DLEDB). This was done over a period of 11 years. We did the first 49 engagements in 1998-99 to support the Enemy Prisoner of War (EPW) Capture Rate Study for CAA (Center for Army Analysis), report E-4 (see http://www.dupuyinstitute.org/tdipub3.htm). Some of the other engagements were done later to support the study on Measuring the Value of Situational Awareness for OSD Net Assessment (Andy Marshall's shop), report SA-1. We (meaning me) then finished up the rest of the engagements in 2004 and 2009. In the end we had assembled an engagement record for every single division-on-division engagement of the Belgorod offensive. Added to that, in 1999 I began working on my Kursk book, which I had mostly finished in 2003 (but which was not published until 2015). So over time, we rather systematically reviewed and revised the data in the database. This is not something we were able to do to the same extent for the ACSDB. The 192 engagements in the DLEDB were then summarized as 192 separate "engagement sheets" in my Kursk book. There are also 76 of these engagement sheets in my new Kursk book coming out in June: The Battle of Prokhorovka. This new book covers the part of the Belgorod offensive centered around the Battle of Prokhorovka.

Validation Data Bases Available (Ardennes)

We never seem to stop discussing validation at The Dupuy Institute even though it seems like most of the military operations research community only pays lip service to it. Still, it is something we encourage, even though it has only been very rarely done. A major part of our work over the years has been creation of historical databases for use in combat model validation, combat model testing, and combat model development. Over a series of posts, let me describe what databases are available.

First there are two big campaign databases. These are fairly well known. It is the Ardennes Campaign Simulation Data Base (ACSDB) and the Kursk Data Base (KDB). I was the program manager for both of them.

The ACSDB is a database tracking the actions of all divisions on both sides for every day of the Battle of the Bulge, from 16 December 1944 to 16 January 1945. That was 36 U.S. and British Divisions and 32 German Divisions and Brigades. It tracked the strength, equipment, losses, ammunition and oil for each of these units. The stats on the database are here: http://www.dupuyinstitute.org/data/ardennes.htm

The ACSDB was done from 1987 to 1990 at Trevor Dupuy's old company, DMSI. There were around 16 people involved with it, including consultants like Hugh Cole and Charles MacDonald. We pulled the unit records for all the units on both sides from the U.S. Archives, the UK Public Records Office, and the German archives in Freiburg. It was the largest historical database ever created (I do seem to be involved in creating some large things, like my Kursk book).

The database was programmed in Dbase III and is DOS based. The data base was delivered to CAA (Center for Army Analysis). It is publicly available through NTIS (National Technical Information Service). The pictures in this blog post are screen shots from the Dbase III version. We do have our own corporate proprietary version, re-programmed into Access, with some updates done by Richard Anderson (coauthor of Hitler's Last Gamble).

Historical Demonstrations?

Photo from the 1941 Louisiana Maneuvers

Continuing my comments on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 5 of 7; see Part 1, Part 2, Part 3, Part 4).

The authors of the Phalanx article then make the snarky statement that:

Combat simulations have been successfully used to replicate historical battles as a demonstration, but this is not a requirement or their primary intended use.

So, they say that combat models using human factors are difficult to validate, then that physics-based models are validated, and then that running a historical battle through a model is a demonstration. Really?

Does such a demonstration show that the model works or does not work? Does such a demonstration show that they can get a reasonable outcome when using real-world data? The definition of validation that they gave on the first page of their article is:

The process of determining the degree to which a model or simulation with its associated data is an accurate representation of the real world from the perspective of its intended use is referred to as validation.

This is a perfectly good definition of validation. So where does one get that real-world data? If you are using the model to measure combat effects (as opposed to physical effects), then you probably need to validate it against real-world combat data. This means historical combat data, whether it is from 3,400 years ago or 1 second ago. You need to assemble the data from a (preferably recent) combat situation and run it through the model.

This has been done. The Dupuy Institute does not exist in a vacuum. We have assembled four sets of combat data bases for use in validation. They are:

  1. The Ardennes Campaign Simulation Data Base
  2. The Kursk Data Base
  3. The Battle of Britain Data Base
  4. Our various division-level, battalion-level and company-level engagement data bases.

Now, the reason we have mostly used World War II data is that you can get detailed data from the unit records of both sides. To date…this is not possible for almost any war since 1945. But if your high-tech model cannot predict lower-tech combat…then you probably also have a problem modeling high-tech combat. So it is certainly a good starting point.

More to the point, this was work that was funded in part by the Center for Army Analysis, the Deputy Under Secretary of the Army (Operations Research), and the Office of the Secretary of Defense, Program Analysis and Evaluation. Hundreds of thousands of dollars were spent developing some of these databases. This was not done just for "demonstration." This was not done as a hobby. If their sentence was meant to belittle the work of TDI, which is how I do interpret it, then it also belittles the work of CAA, DUSA(OR) and OSD PA&E. I am not sure that was the three authors' intent.

Validating Attrition

Continuing to comment on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 3 of 7; see Part 1, Part 2)

On the first page (page 28) in the third column they make the statement that:

Models of complex systems, especially those that incorporate human behavior, such as that demonstrated in combat, do not often lend themselves to empirical validation of output measures, such as attrition.

Really? Why can't you? In fact, isn't that exactly the model you should be validating?

More to the point, people have validated attrition models. Let me list a few cases (this list is not exhaustive):

1. Done by Center for Army Analysis (CAA) for the CEM (Concepts Evaluation Model) using Ardennes Campaign Simulation Study (ARCAS) data. Take a look at this study done for Stochastic CEM (STOCEM): https://apps.dtic.mil/dtic/tr/fulltext/u2/a489349.pdf

2. Done in 2005 by The Dupuy Institute for six different casualty estimation methodologies as part of Casualty Estimation Methodologies Studies. This was work done for the Army Medical Department and funded by DUSA (OR). It is listed here as report CE-1: http://www.dupuyinstitute.org/tdipub3.htm

3. Done in 2006 by The Dupuy Institute for the TNDM (Tactical Numerical Deterministic Model) using Corps and Division-level data. This effort was funded by Boeing, not the U.S. government. This is discussed in depth in Chapter 19 of my book War by Numbers (pages 299-324) where we show 20 charts from such an effort. Let me show you one from page 315:

 

So, this is something that multiple people have done on multiple occasions. It is not so difficult that The Dupuy Institute was not able to do it. TRADOC is an organization with around 38,000 military and civilian employees, plus who knows how many contractors. I think this is something they could also do if they had the desire.

 

Validation

Continuing to comment on the article in the December 2018 issue of the Phalanx by Jonathan Alt, Christopher Morey and Larry Larimer (this is part 2 of 7; see part 1 here).

On the first page (page 28) top of the third column they make the rather declarative statement that:

The combat simulations used by military operations research and analysis agencies adhere to strict standards established by the DoD regarding verification, validation and accreditation (Department of Defense, 2009).

Now, I have not reviewed what has been done on verification, validation and accreditation since 2009, but I did do a few fairly exhaustive reviews before then. One such review is written up in depth in The International TNDM Newsletter. It is Volume 1, No. 4 (February 1997). You can find it here:

http://www.dupuyinstitute.org/tdipub4.htm

The newsletter includes a letter dated 21 January 1997 from the Scientific Advisor to the CG (Commanding General) at TRADOC (Training and Doctrine Command). This is the same organization that the three gentlemen who wrote the article in the Phalanx work for. The Scientific Advisor sent a letter out to multiple commands to try to flag the issue of validation (the letter is on page 6 of the newsletter). My understanding is that he received few responses (I saw only one, from Leavenworth). After that, I gather no further action was taken. This was a while back, so maybe everything has changed, as I gather they are claiming with that declarative statement. I doubt it.

The issue to me is validation. Verification is often done. Actual validations are a lot rarer. In 1997, this was my list of combat models in the industry that had been validated (the list is on page 7 of the newsletter):

1. Atlas (using 1940 Campaign in the West)

2. Vector (using undocumented turning runs)

3. QJM (by HERO using WWII and Middle-East data)

4. CEM (by CAA using Ardennes Data Base)

5. SIMNET/JANUS (by IDA using 73 Easting data)

 

Now, in 2005 we did a report on Casualty Estimation Methodologies (it is report CE-1, listed here: http://www.dupuyinstitute.org/tdipub3.htm). We reviewed the listing of validation efforts, and from 1997 to 2005…nothing new had been done (except for a battalion-level validation we had done for the TNDM). So am I now to believe that since 2009 they have actively and aggressively pursued validation? Especially as most of this time was a period of severely declining budgets, I doubt it. One of the arguments against validation made in meetings I attended in 1987 was that they did not have the time or budget to spend on validation. The budget during the Cold War was luxurious by today's standards.

If there have been meaningful validations done, I would love to see the validation reports. The proof is in the pudding…send me the validation reports that will resolve all doubts.

Panzer Battalions in LSSAH in July 1943 – II

This is a follow-up to this posting:

Panzer Battalions in LSSAH in July 1943

The LSSAH Panzer Grenadier Division usually had two panzer battalions. Before July, the I Panzer Battalion had been sent back to Germany to re-equip with Panther tanks. This has led some authors to conclude that in July 1943 the LSSAH had only the II Panzer Battalion. Yet the unit's tank strength is so high that this is hard to justify. Either the LSSAH Division in July 1943 had:

  1. Over-strength tank companies
  2. A 4th company in the II Panzer Battalion
  3. A temporary I Panzer Battalion

I have found nothing in the last four months to establish with certainty what was the case, but additional evidence does indicate that they had a temporary I Panzer Battalion.

The first piece of evidence is drawn from a division history, Leibstandarte III, by Rudolf Lehmann, who was the chief of staff of the panzer regiment. It states that they had around 33 tanks at Hill 252.2 on the afternoon or evening of the 11th. It has been reported that the entire II Panzer Battalion moved up there on the 11th and then pulled back its 5th and 7th companies, leaving the 6th company in the area of Hill 252.2. The 6th Panzer Company was reported to have only 7 tanks operational on the morning of the 12th. So the II Panzer Battalion may have had three companies of 7-12 tanks each, plus the battalion staff, and maybe some or all of the regimental staff there. The LSSAH Division, according to the Kursk Data Base, had as of the end of the day on 11 July 1943: 2 Panzer Is, 4 Panzer IIs, 1 Panzer III short, 4 Panzer III longs, 7 Panzer III command tanks, 47 Panzer IV longs and 4 Panzer VIs, for a total of 69 tanks in the panzer regiment. Ignoring the 4 Tiger tanks, this leaves 32 tanks unaccounted for. This could well be the complement of a temporary I Panzer Battalion.
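The arithmetic behind that "unaccounted for" figure can be laid out explicitly. The tank counts are the Kursk Data Base figures quoted above; the 33-tank figure is Lehmann's for the group at Hill 252.2:

```python
# Tank counts for LSSAH at end of day, 11 July 1943, per the Kursk Data Base.
tanks_by_type = {
    "Panzer I": 2, "Panzer II": 4, "Panzer III short": 1, "Panzer III long": 4,
    "Panzer III command": 7, "Panzer IV long": 47, "Panzer VI (Tiger)": 4,
}
total = sum(tanks_by_type.values())
print(total)  # 69 tanks in the panzer regiment

non_tiger = total - tanks_by_type["Panzer VI (Tiger)"]  # set the Tigers aside
at_hill_252_2 = 33  # Lehmann's figure for the II Panzer Battalion group
unaccounted = non_tiger - at_hill_252_2
print(unaccounted)  # 32 tanks, roughly a battalion's worth
```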

The second unresolved issue is that the Soviet XVIII Tank Corps is reported to have encountered dug-in tanks as it tried to push beyond Vasilyevka along the Psel River. It reported that its advance was halted by tank fire from the western outskirts of Vasilyevka. It also reported repulsing, at 1400 (Moscow time), a German counterattack by 50 tanks from the Bogoroditskoye area (just west of Vasilyevka, south of the Psel).

With the II Panzer Battalion being opposite the XXIX Tank Corps, then one wonders who and where those “dug-in tanks” were from. It is reported in some sources that the Tiger company, which was in the rear when the fighting started, moved to the left flank, but most likely there was another tank formation there. If the II Panzer Battalion was covering the right half of the LSSAH’s front, then it would appear that the rest of the front would have been covered by a temporary I Panzer Battalion of at least three companies.

This leads me to lean even more toward the conclusion that the LSSAH had a temporary I Panzer Battalion of at least three companies, the II Panzer Battalion of three companies, and the Tiger company, which was assigned to the II Panzer Battalion.