There is growing interest in a Green New Deal, but far too little discussion among supporters about the challenging nature of the required economic transformation, the necessary role of public planning and ownership in shaping it, or the strategies necessary to institutionalize a strong worker-community voice in the process and final outcome. In this two-part series I draw on the experience of World War II, when the state was forced to direct a rapid transformation from civilian to military production, to help encourage and concretize that discussion.
In this post, I first discuss the need for a rapid Green New Deal-inspired transformation and the value of studying the U.S. experience during World War II to help us achieve it. Next, I examine the evolution, challenges, and central role of state planning in the wartime conversion of the U.S. economy to alert us to the kind of state agencies and capacities we will need to develop. Finally, I highlight two problematic aspects of the wartime conversion and postwar reconversion which must be avoided if we hope to ensure a conversion to a more democratic and solidaristic economy.
In the post to follow, I will highlight the efforts of labor activists to democratize the process of transformation during the war period in order to sharpen our thinking about how best to organize a labor-community mass movement for a Green New Deal.
The challenge of transformation
We are already experiencing a climate crisis, marked by extreme weather conditions, droughts, floods, warming oceans, rising sea levels, fires, ocean acidification, and soil deterioration. The Special Report on Global Warming of 1.5°C by the Intergovernmental Panel on Climate Change underscores the importance of limiting the increase in the global mean temperature to 1.5 degrees Celsius above pre-industrial levels by 2100 if we are to avoid ever worsening climate disasters and “global scale degradation and loss of ecosystems and biodiversity.” The report makes clear that achieving this goal requires reducing global net carbon dioxide emissions by 45 per cent by 2030 and then reaching net zero emissions by 2050.
Tragically, despite the seriousness of the crisis, we are on track for a far higher global mean temperature. Even big business is aware of what is at stake. Two researchers employed by JP Morgan, the world’s largest financier of fossil fuels, recently published an internal study that warns of the dangers of climate inaction. According to the Guardian, which obtained a copy of the report, “the authors say policymakers need to change direction because a business-as-usual climate policy ‘would likely push the earth to a place that we haven’t seen for many millions of years,’ with outcomes that might be impossible to reverse.”
It is easy to see why growing numbers of people are attracted to the idea of a Green New Deal. The Green New Deal promises a rapid and dramatic curtailing of fossil fuel use as part of a broader transformation to a more sustainable, egalitarian, and socially responsive economy. Such a transformation will, by necessity, involve massive new investments to promote the production and distribution of clean renewable energy, expand energy efficient public transit systems, support regenerative agriculture, and retrofit existing homes, offices, and factories. The Green New Deal also promises new, publicly funded programs designed to ensure well-paid and secure employment for all; high-quality universal health care; affordable, safe public housing; clean air; and healthy and affordable food.
Unfortunately, the proposed Green New Deal investments and programs, as attractive and as needed as they may be, are unlikely on their own to achieve the required reduction in carbon emissions. It is true that many Green New Deal investments and programs can be expected to lower overall energy demand, thereby making it easier for rapidly growing supplies of clean energy to support economic activity. But even though renewable energy production is growing rapidly in the U.S., it still accounts for less than 15 percent of total U.S. energy consumption and less than 20 percent of electricity generation. And based on the experience of other countries, increasing the production of renewable energy does not, by itself, guarantee a significant decline in the production and use of fossil fuels, especially when they remain relatively cheap and plentiful.
Rapid decarbonization will also require direct government action to force down the production of fossil fuels and make their use prohibitively expensive. And this action will have significant consequences. For example, limiting fossil fuel production will leave fossil fuel companies with enormous unused and therefore worthless assets. Raising the price of fossil fuels will sharply increase the cost of flying, with negative consequences for the large manufacturers of airplanes and their subcontractors. It will also increase the cost of gasoline, with negative consequences for automobile companies that produce gas guzzling cars. Other major industries will also be affected, for example, the home building industry that specializes in large suburban homes, and the financial sector that has extended loans to firms in all these industries.
Thus, any serious attempt to rapidly force down fossil fuel use can be expected to negatively affect important sectors of the economy. Proposed Green New Deal investments and social policy initiatives will lay the foundation for a new economy, helping to boost employment and absorb some of the newly created excess capacity, but given the need for a speedy transformation to head off climate catastrophe, the process, if left unplanned, could easily end up dragging the economy down.
As difficult as this process appears, we do have historical experience to draw upon that can help us prepare for some of the challenges we can expect to face: the experience of World War II, when the U.S. government was forced to initiate a rapid transformation of the U.S. economy from civilian to military production. New planning bodies were created to direct resources away from civilian use, retrain workers, encourage retooling of parts of the civilian economy to produce military goods and services, and direct massive investments to build new facilities to expand production or produce new goods needed for the war effort. While far from a model to be recreated, advocates of a Green New Deal can learn much from studying the U.S. war-time experience.
World War II planning
The shift to a war economy began gradually in 1939, some two years before the U.S. actually entered the war. In June 1939, Congress passed the Strategic and Critical Materials Stockpiling Act, which called for establishing reserves of strategic materials necessary for defense. In August 1939, President Roosevelt established the War Resources Board to help the Joint Army and Navy Munitions Board develop plans for mobilizing the economic resources of the country in the event of war.
In June 1940, a National Roster of Scientific and Specialized Personnel was created. In August 1940, the Defense Plant Corporation was created and charged with planning how to expand the nation’s ability to produce military equipment. And in September 1940, Congress approved the Selective Training and Service Act of 1940, which required all men between the ages of 21 and 45 to register for the draft.
In January 1941, President Roosevelt created the Office of Production Management to centralize all federal procurement programs concerned with the country’s preparation for war. Shortly after the U.S. entered the war, this office was replaced by the War Production Board (WPB), which was tasked with directing the conversion of industries from civilian to military work; the allocation of scarce materials; and the establishment of priorities for the distribution of goods and services, including those to be rationed.
The conversion to a war economy, and the end of the depression, roughly dates to the second half of 1941, when defense spending sharply accelerated. Federal spending on goods and services for national defense rose from 2.2 percent of GNP in 1940 to 11 percent of GNP in 1941. This was the last year that military-generated activity was compatible with growing civilian production. In 1942, military spending soared to 31 percent of GNP. From then to the end of the war, civilian production was suppressed in order to secure the desired growth in military production.
For example, real consumer durable expenditures reached $24.7 billion (in 1972 dollars) or 6.2 percent of GNP in 1941. The following year they fell to $16.3 billion or 3.6 percent of GNP. Real personal consumption, which grew by 6.2 percent in 1941, fell absolutely the following year. Between 1940 and 1944, the total production of non-war goods and services fell from $180 billion to $164 billion (in 1950 dollars). In contrast, real federal purchases of military commodities grew from $18 billion in 1941 to $88 billion in 1944 (in 1947 dollars), accounting for approximately one-half of all commodities produced that year.
No doubt, the high level of unemployment that existed at the start of the conversion made it easier to ramp up military production—but the military itself soon absorbed a large share of the male working age population. Moreover, the challenge facing planners was not just that of ramping up production in a depressed economy, but of converting the economy to produce different goods, often in new locations. This required the recruitment, training, and placement of millions of workers in accordance with ever changing industrial, occupational, and geographic requirements.
In the period of preparation for war, perhaps the biggest challenge was training. It was largely met thanks to vocational training programs organized by the Employment Division. These training programs made use of ongoing New Deal programs such as the Civilian Conservation Corps, Works Progress Administration, and National Youth Administration; the existing network of schools and colleges; and a Training-Within-Industry program. Once the war began, the War Manpower Commission continued the effort. Altogether, some 7 million people went through training programs, almost half through Training-Within-Industry programs.
The hard shift from a civilian-driven economy into a military-dominated one was, to a large degree, forced on the government by corporate concerns over future profitability. In brief, most large corporations were reluctant to expand their productive capacity for fear that doing so would leave them vulnerable to a post-war collapse in demand and depression. Among the most resistant were leading firms in the automobile, steel, oil, electric power, and railroad industries. At the same time, these firms also opposed the establishment of government-owned enterprises; they feared such enterprises might become post-war competitors or, even worse, encourage popular interest in socialism.
Unwilling to challenge business leaders, the government took the path of least resistance—it agreed to support business efforts to convert their plant and equipment from civilian to military production, to offer businesses engaged in defense work cost-plus contracts, and to suppress workers’ wages and their right to strike. And, if the government did find it necessary to invest and establish new firms to produce critical goods, it agreed to allow private businesses to run them, with the option to purchase the new plant and its equipment at a discounted price at the war’s conclusion. As a consequence, big business did quite well during the war and was well positioned to be highly profitable in the years following the end of the war.
Business reluctance to invest in expanding capacity, including in industries vital to the military, meant that the government had to develop a number of powerful new planning bodies to ensure that the limited output was allocated correctly and efficiently across the military-industrial supply chain. For example, raw steel production grew only 8 percent from 1941 to the 1944 wartime peak. Crude petroleum refining capacity grew only 12 percent between 1941 and 1945. Leading firms in the auto industry were also reluctant to give up sales or engage in conversion to military production, initially claiming that no more than 15 percent of their machine tools were convertible. But, once the war started and U.S. planners regulated steel use, giving priority to military production, the auto industry did retool and produce a range of important military goods, including tanks, jeeps, trucks, and parts and subassemblies for the aircraft industry, including engines and propellers.
In many cases, corporate foot-dragging forced the government to establish its own production. Thus, while steel ingot capacity expanded by a modest 17 percent from 1940 to 1945, almost half of that increase came from government owned firms. The role of government production was probably greatest in the case of synthetic rubber. The U.S. had relied on imports for some 90 percent of its supply of natural rubber, mostly from countries that fell under Japanese control. Desperate for synthetic rubber to maintain critical civilian and military production, the government pursued a massive facility construction program. Almost all of the new capacity was financed and owned by the government and then leased to private operators for $1 per year. Thanks to this effort, synthetic rubber output rose from 22,434 long tons in 1942 to 753,111 long tons in 1944. The Defense Plant Corporation ended up financing and owning approximately one-third of all the plant and equipment built during the war.
The War Production Board, created by presidential executive order in January 1942, was the country’s first major wartime planning agency. Roosevelt chose Donald M. Nelson, a Sears Roebuck executive, to be its chairperson. Other members of the board were the Secretaries of War, Navy, and Agriculture, the lieutenant general in charge of War Department procurement, the director of the Office of Price Administration, the Federal Loan Administrator, the chair of the Board of Economic Warfare, and the special assistant to the President for the defense aid program.
The WPB managed twelve regional offices, and operated some one hundred twenty field offices throughout the country. Their work was supported by state-level war production boards, which were responsible for keeping records on the firms engaged in war production in their respective states, including whether they operated under government contract.
However, despite its vast information gathering network, the WPB was never able to take command of the conversion of the economy. To some extent that was because Nelson proved to be a weak leader. But a more important reason was that the WPB had to contend with a number of other powerful agencies that were each authorized to direct the output of a specific critical industry. The result was a kind of free-for-all when it came to developing and implementing a unified plan.
Perhaps the most powerful independent agency was the Army-Navy Munitions Board, and early on the WPB ceded its authority over the awarding of military contracts to it. The Army and Navy awarded more contracts than could be fulfilled, creating problems in the supply chain as firms competed to obtain needed materials. Turf fights among government agencies led to other problems. For example, the Office of Defense Transportation and the Petroleum Administration for War battled over who could decide petroleum requirements for transportation services. And the Office of Price Administration fought the Solid Fuels Administration over who would control the rationing of coal.
A Bureau of the Budget history of the period captures some of the early chaos:
Locomotive plants went into tank production when locomotives were more necessary than tanks . . . Truck plants began to produce airplanes, a change that caused shortages of trucks later on . . . Merchant ships took steel from the Navy, and the landing craft cut into both. The Navy took aluminum from aircraft. Rubber took valves from escort vessels, from petroleum, from the Navy. The pipe-lines took steel from ships, new tools, and the railroads. And at every turn there were foreign demands to be met as well as requirements for new plants.
In response to the chaos, Roosevelt established another super agency in May 1943, the Office of War Mobilization (OWM). This agency, headed by James F. Byrnes, a former politician and Supreme Court justice, was given authority over the WPB and the other agencies. In fact, Byrnes’ authority was so great, he was often called the “assistant President.”
The OWM succeeded in installing a rigorous system of materials control and bringing order to the planning process. As a result, civilian production was efficiently suppressed and military production steadily increased. Over the period 1941 to 1945, the U.S. was responsible for roughly 40 percent of the world’s production of weapons and supplies, with little increase in the nation’s capital stock.
The experience highlighted above shows the effectiveness of planning, and that a contemporary economic conversion based on Green New Deal priorities, in which fossil fuel dependent industries are suppressed in favor of more sustainable economic activity, can be achieved. It also shows that a successful transformation will require the creation of an integrated, multi-level system of planning, and that the process of transformation can be expected to generate challenges that will need to be handled with flexibility and patience.
World War II planning: cautionary lessons
The war-time conversion experience also holds two important cautionary lessons for a Green New Deal-inspired economic transformation. The first is the need to remain vigilant against the expected attempt by big business to use the planning process to strengthen its hold on the economy. If we are to achieve our goal of creating a sustainable, egalitarian, and solidaristic economy, we must ensure a dominant and ongoing role for public planning of economic activity and an expansive policy of public ownership, both taking over firms that prove resistant to the transformation and retaining ownership of newly created firms.
Unfortunately, the federal government was all too willing to allow big corporations to dominate the war-time conversion process as well as the peacetime reconversion, thereby helping them boost their profits and solidify their post-war economic domination. For example, the Army and Navy routinely awarded their defense contracts to a very few large companies. And these companies often chose other big companies as their prime subcontractors. Small and medium sized firms also struggled to maintain their production of civilian goods because planning agencies often denied them access to needed materials.
Harold G. Vatter highlights the contract preference given to big firms during the war, noting that:
of $175 billion of primary contracts awarded between June 1940 and September 1944, over one-half went to the top 33 corporations (with size measured by value of primary supply contracts received). The smallest 94 percent of prime supply contract corporations (contracts of $9 million or less) got 10 percent of the value of all prime contracts in that period.
The same big firms disproportionally benefited from the reconversion process. In October 1944, the OWM was converted into the Office of War Mobilization and Reconversion (OWMR), with Byrnes remaining as head. The OWMR embraced its new role and moved quickly to achieve the reconversion of the economy. It overcame opposition from the large military contractors, who were reluctant to give up their lucrative business, by granting them early authorization to begin production of civilian goods, thereby helping them dominate the emerging consumer markets.
The OWMR was also generous in its post-war distribution of government assets. The government, at war’s end, owned approximately $17 billion of plant and equipment. These holdings, concentrated in the chemical, steel, aluminum, copper, shipbuilding, and aircraft industries, were estimated to be about 15 percent of the country’s total postwar plant capacity. The government also owned “surplus” war property estimated to be worth between $50 and $70 billion.
Because of the way government wartime investment had been structured, there was little question about who would get the lion’s share of these public assets. Most government owned plants were financed under terms specifying that the private firms operating them would be given the right to purchase them at war’s end if desired. Thus, according to one specialist, roughly two-thirds of the $17 billion of government plant and equipment was sold to 87 large firms. The “bulk of copolymer synthetic rubber plants went to the Big Four in rubber; large chemical plants were sold to the leading oil companies, and U.S. Steel received 71 percent of government-built integrated steel plants.”
The second cautionary lesson is the need to resist efforts by the government, justified in the name of efficiency, to minimize the role of unions, and working people more generally, in the planning and organization of the economic conversion. The only way to guarantee that a Green New Deal-inspired transformation will create an economy responsive to the needs of working people and their communities is to create institutional arrangements that secure popular participation in decision-making at all levels of economic activity.
While organized labor had at least an advisory role in prewar planning agencies, once the war began, it was quickly marginalized, and its repeated calls for more participation rejected. For example, Sidney Hillman (head of the Amalgamated Clothing Workers) was appointed to be one of two chairs of the Office of Production Management, which was established in January 1941 to oversee federal efforts at national war preparation. The other was William S. Knudsen (president of General Motors). The OPM also included a Labor Bureau, also led by Hillman, which was to advise it on labor recruitment, training, and mobilization issues, as well as Labor Advisory Committees attached to the various commodity and industry branches that reported to the OPM.
The labor presence was dropped from the War Production Board, which replaced the OPM in January 1942; Roosevelt appointed a businessman, Donald M. Nelson, to be its sole chair. Hillman was appointed director of the board’s Labor Division, but that division was soon eliminated and its responsibilities transferred to the newly created War Manpower Commission in April 1942.
More generally, as organized labor found itself increasingly removed from key planning bodies, workers found themselves increasingly asked to accept growing sacrifices. Prices began rising in 1940 and 1941 as the economy slowly recovered from the depression and began its transformation to war production. In response, workers pushed for significant wage increases, which the government, concerned about inflation, generally opposed. In 1940, there were 2,500 strikes producing 6.7 million labor-days idle. The following year there were 4,300 strikes with 23.1 million labor-days idle.
Hillman called for a national policy of real wage maintenance based on inflation indexing that would also allow the greatest wage gains to go to those who earned the least, but the government took no action. As war mobilization continued, the government sought a number of concessions from the unions. For example, it wanted workers to sacrifice their job rights, such as seniority, when they were transferred from nondefense to defense work. Union leaders refused. Union leaders also demanded, unsuccessfully, that military contracts not be given to firms found to violate labor laws.
Worried about disruptions to war production, Roosevelt established the War Labor Board by executive order in January 1942. The board was given responsibility for stabilizing wages and resolving disputes between workers and managers at companies considered vital to the war effort. The board’s hard stand on wage increases was set in July, when it developed its so-called “Little Steel Formula.” Ruling in a case involving the United Steelworkers and the four so-called “Little Steel” companies, the board decided that although steelworkers deserved a raise, it had to be limited to the amount that would restore their real earnings to their prewar level, which it set as of January 1, 1941. Adding insult to injury, the board relied on a faulty price index that underestimated the true rate of inflation since the beginning of 1941.
Thus, while corporations were able to pursue higher profits, workers would have to postpone their “quest for an increasing share of the national income.” Several months later, Roosevelt instructed the War Labor Board to use a similar formula, although with a different baseline, in all its future rulings. Not surprisingly, the number of strikes continued to rise throughout the war years despite a December 1941 pledge by AFL and CIO leaders not to call strikes for the duration of the war.
In June 1943, with strikes continuing, especially in the coal fields, Congress passed the War Labor Disputes Act. The act gave the president the power to seize and operate privately owned plants when an actual or threatened strike interfered with war production. Subsequent strikes in plants seized by the government were prohibited. The act was invoked more than 60 times during the war. The act also included a clause that made it illegal for unions to contribute to candidates for office in national elections, clearly an attempt to weaken labor’s political influence.
Although wage struggles drew most attention, union demands were far more expansive. As Vatter describes:
Organized labor wanted wartime representation and participation in production decision-making at all levels, not merely the meaningless advisory role allotted to it during the preparedness period. But from the outset, management maintained a chronic hostile stance on the ground that management-labor industry councils such as proposed by Walter Reuther and CIO President Philip Murray in 1940 would, under cover of patriotism, undermine management’s prerogatives and inaugurate a postwar “sovietization” of American industry.
Unions often pointed to the chaos of early planning, as captured by the Budget Bureau history, arguing that their active participation in production decisions would greatly improve overall efficiency. The government’s lack of seriousness about union involvement is best illustrated by the WPB’s March 1942 decision to establish a special War Production Drive Division that was supposed to encourage the voluntary creation of labor-management plant committees. However, the committees were only allowed to address specific physical production problems, not broader labor-management issues or production coordination across firms. Most large firms didn’t even bother creating committees.
Significantly, there was only one time that the government encouraged and supported popular participation in wartime decision-making, and that effort proved a great success. Inflation was a constant concern of the government throughout the war years, largely because it was a trigger for strikes which threatened wartime production. The Office of Price Administration tried a variety of voluntary and bureaucratic controls to limit price increases on consumer goods and services, especially food, with little success. Finally, beginning in mid-1943, and over the strong opposition of business, it welcomed popular participation in the operation of its price control system.
Tens of thousands of volunteers were formally authorized to visit retail locations throughout the country to monitor business compliance with the controls, and tens of thousands of additional volunteers were chosen to serve on price boards that were empowered to fine retailers found in violation of the controls. As a result, prices remained relatively stable from mid-1943 until early 1946, when the government abruptly ended the system of controls. This was an incredible achievement considering that the production of civilian goods and services declined over those years, while consumer purchasing power and the money supply rose.