There are many disturbing images from the 1950s days of the Cold War, not the least of which are the films of schoolchildren ducking under their desks as a means of surviving the blast of an atomic bomb. Radioactive fallout was just one of the perils presented by a potential war with godless Communism in the form of America’s recent ally, the Soviet Union. Another, of lesser concern to the general public at the time, was the use of chemical or biological weapons by an enemy, whether by polluting the water Americans drank or the air they breathed.
The United States military, in the form of the newly created United States Air Force and the venerable United States Army, stood at the forefront of defense against such attacks. Or so it seemed. Research released in the last decade reveals that the United States and longtime allies such as the United Kingdom were focused instead on the offensive use of chemical and biological weapons, and both nations established programs to develop and improve their offensive capability.
Both nations realized that the efficient use of chemical and biological weapons required a marriage of the toxin to the environment in which it was deployed. Prevailing weather systems could be exploited to gain widespread distribution of toxins in the air, and the area in which they would be most effective could be plotted in advance. Doing so required studying a toxin’s distribution in areas that mimicked the meteorological conditions found in enemy territory. In other words, an American city with wind patterns and population density similar to, say, Kirkoz, Russia, could be used in testing to determine whether a chemical weapon attack against such an enemy city was feasible. St. Louis, Missouri, was one such city.
Following the use of gas weapons in the First World War, the civilized nations of the world agreed at Geneva to outlaw the use of “asphyxiating, poisonous or other gases…” and “…bacteriological methods…” of killing one’s enemies in future military conflicts. The agreement was not entirely new; prior to the First World War, the civilized nations had agreed to ban chemical warfare, relying instead on the more tried-and-true methods of killing one another’s troops with bullets, bombs, and artillery shells.
Prior to World War II, several violations of the ban occurred, usually against groups unable to retaliate in kind or with conventional firepower. The Spanish used mustard gas during the Rif War (a rebellion of Berber tribes in Morocco). Likewise, the Japanese resorted to mustard gas against ethnic uprisings in Taiwan, and the Italians found gas warfare useful when deployed against the Abyssinians just before World War II. (Americans first used biological weapons during the French and Indian Wars, through the distribution of smallpox-infected blankets.)
During the Second World War, the possibility of an enemy deploying chemical weapons encouraged all the warring powers to stockpile them as a threat of retaliation. It is one thing to use weapons of mass destruction against a helpless enemy, quite another when the enemy has the ability to strike back. Despite the scorched-earth policies adopted by nearly all of the major combatants during the war, chemical weapons remained unused.
Unused, but far from undeveloped. Throughout the war, the fear that the enemy might be developing a devastating chemical capability motivated both the Axis and Allied militaries to explore new and deadlier agents of their own. Winston Churchill actively lobbied for the use of chemical weapons, both to defend against invasion and to destroy German population centers, eliminating the enemy workforce. A perception developed that chemical weapons could eliminate enemy troops and populations without destroying infrastructure, a post-war advantage over destructive bombs, including nuclear weapons.