Chapter 27: From Depression to New Expansion of Capitalism


A Marxist Guide to Capitalist Crises

“A Marxist Guide to Capitalist Crises,” an eBook created from the key posts on the Critique of Crisis Theory blog, is currently in production. We’ll be sharing the completed chapters between our regular postings.



By the end of the 1930s, many people, non-Marxist progressives and Marxists alike, had come to regard the Depression as the permanent state of capitalism. The idea that the Depression represented the “new normal” even received a name: the theory of “secular stagnation.”

Therefore, many Marxists were caught off guard when a new wave of accelerated capitalist expanded reproduction swept through the capitalist world after World War II. The renewed capitalist prosperity greatly reinforced opportunist trends within the workers’ movements of Europe and the United States — including within the ruling parties of the Soviet Union and, even more so, those of Eastern Europe.

Bourgeois economists claimed that the new capitalist prosperity had finally “refuted” Marxism once and for all. It was widely accepted — even among Marxists of that generation — that Keynesian economic policies and/or the “permanent arms economy” were responsible for the new prosperity that Marxist theory supposedly could not explain.

However, if the postwar prosperity were essentially cyclical, it would only be a matter of time before a new era of crisis arrived. The Marxist program of socialist revolution remained fully valid, even if the hopes of a near-term global socialist revolution had been frustrated by the renewed capitalist prosperity. After all, Marx and Engels had faced just such a situation after the 1848 European revolutions.

But if the new prosperity was the result of new policies of the capitalist state unforeseen in Marxist theory, the possibility was opened up that the “laws of motion” of capitalism discovered by Marx that govern the capitalist economy had changed fundamentally. Perhaps Marx’s laws of motion had been correct for 19th-century capitalism and even early 20th-century capitalism. But, it was argued, they were no longer valid for the capitalism of the second half of the 20th century, reshaped as it was by the new Keynesian “demand management” policies.

Demand management, it was claimed, had finally overcome the problem of capitalist overproduction. Thanks to the new Keynesian policies, the argument went, if monetarily effective demand was insufficient to purchase all the commodities produced, the government could simply increase demand to whatever level it desired. This was indeed the conclusion that Paul Baran and Paul Sweezy drew in “Monopoly Capital,” first published in 1966. If this were the case, then the Marxist program of a working-class socialist revolution might need a drastic revision.

In the following chapters, we will examine the changes in capitalism and the industrial cycle, both economic and political, that followed the Second World War.

The industrial cycle on the eve of World War II

The upswing in the industrial cycle that began in 1932-33 — interrupted by the Roosevelt recession of 1937-38 — resumed by mid-1938 as the administration quickly reversed its deflationary measures of 1936-37. However, the recovery that began in mid-1938 started at a level well below the “prosperity peak” — hardly a boom — that the economy had reached in late 1936 and early 1937.

Then, before the industrial cycle could reach a true boom — or even get very far into the phase of average prosperity — the war economy took over. As we have already seen, a full-scale war economy suppresses the industrial cycle by halting the normal process of capitalist expanded reproduction. Industrial cycles can only occur within expanded capitalist reproduction.

Economic conditions on the eve of World War II different from those on the eve of World War I

Let’s first briefly review the economic situation in the years leading up to World War I and then compare it with the radically different economic conditions that preceded the outbreak of World War II.

The years leading up to World War I were years of vigorous expanded capitalist reproduction — or, in economic jargon, rapid economic growth. As a result, real capital — productive and commodity capital as well as variable capital that produces surplus value — expanded relative to the quantity of gold bullion in the world.

The era of capitalist prosperity that preceded World War I began around 1896. The decade of the 1890s had seen a big surge in gold production, driven by low commodity prices and the widespread introduction of the cyanide process for extracting gold from low-grade ores, and then by the discovery of gold in the Klondike in 1896.

The defeat of silver Democrat William Jennings Bryan in the U.S. presidential election of 1896 meant that the United States — already on the verge of replacing Britain as the leading industrial nation — would remain on the gold standard. This, along with the rising production of gold and the consequent accelerated expansion of the world market, brought the 1873-1896 “Long Depression” to an end. A prolonged upturn in the general price level — both in terms of gold and in terms of currency, which under the international gold standard was the same thing — set in.

By the eve of World War I, prices were beginning to rise above the values of commodities, as indicated by the developing stagnation in world gold production in the years immediately preceding the war. Therefore, World War I broke out at a particularly bad time as far as the stability of the world capitalist economy was concerned. It took the super-crisis and the currency devaluations that accompanied and followed it to lower commodity prices again to levels below their underlying labor values.

In complete contrast to the pre-World War I situation, the Depression decade preceding World War II saw the virtually complete breakdown of capitalist expanded reproduction. Indeed, the United States — the leading capitalist country — had seen years of contracted reproduction. With expanded reproduction — the very essence of capitalist production — paralyzed, vast amounts of money fell out of circulation and accumulated in idle hoards. The huge expansion of idle reserves in the U.S. banking system during the 1930s was the chief manifestation of this.

The combined effects of the prolonged stagnation of the reproductive process, the 50 percent fall in prices of commodities in terms of gold, and the subsequent rise in gold production meant that by the end of the 1930s, the world was awash in liquidity. The accumulation of capital in the form of gold bullion — money capital — was accelerating, while the accumulation of real capital was largely negative. As we would expect in such a situation, the rate of interest fell to the lowest levels ever seen in the history of capitalism up to that time.

The balance of forces in the money and capital market now favored the industrial and commercial capitalists over the money capitalists. This created the possibility of a rapid growth in the profit of enterprise once the rate of profit began to recover. As we saw in the examination of the ideal industrial cycle, the beginning of a recovery is marked by a low rate of interest combined with a rapidly increasing profit of enterprise.

These conditions were the exact opposite of those that prevailed on the eve of World War I. Therefore, even before the war broke out in Europe in 1939, the operation of the basic economic laws that govern the capitalist system ensured that the post-World War II period would be very different, economically and therefore politically, from the post-World War I period.

The politics of World War II

World War I had ended indecisively. During the war, the United States had demonstrated that its huge industrial machine, the biggest in the world, made it by far the most powerful nation militarily. However, after the war, the United States rapidly dismantled the huge military forces it had assembled. The U.S. did not occupy Europe militarily after the war ended.

Instead, the United States relied on its new position as the world’s leading creditor nation. The U.S. insisted that Britain and France repay the loans the United States had made to them during the war — loans that had been used not to finance Europe’s economic development but rather its economic destruction. (Soviet Russia — the Soviet Union had not yet been formed — repudiated the debts the czarist regime owed to the United States.)

This forced Britain and France to finance their war debts by squeezing the needed funds out of defeated Germany in the form of reparations payments. Germany was forced to borrow great amounts of money from the United States, both to meet its reparations payments and to rebuild its economy after the contracted reproduction of World War I. In this way, the “victorious” British and French transferred their war debts onto the shoulders of the defeated Germans.

The only way that Germany could pay these debts was through exports to the United States. The U.S., however, stubbornly refused to open its markets to the Germans, or indeed to any other nation. The U.S. behaved as though it were still a developing industrial capitalist nation in debt to Europe rather than the most powerful imperialist-creditor country in the world. Instead of establishing military and political domination over Europe, the United States sought to subordinate Europe economically.

If during the 1920s the United States had wanted to encourage European recovery, it would have opened its home market to European commodities and forgiven the war debts that Britain and France owed. The U.S. could then have urged that Britain and France cancel the reparations that Germany owed them. But after World War I, the U.S. pursued the exact opposite policies.

After World War II, however, the United States reversed its policies. The U.S. insisted on occupying Germany and Japan after the war (in contrast, after World War I, defeated Germany as a whole was not occupied, except for the French occupation of the Rhineland). And the occupation of Germany and Japan has continued well into the 21st century.

In 1949, the United States created the so-called North Atlantic Treaty Organization to institutionalize these occupations as far as Western Europe was concerned. NATO, from the beginning, had a double purpose. The first was to threaten the Soviet Union and its new Eastern European allies and, if necessary, put down any revolutionary movement in Western Europe.

The second purpose was to ensure that neither Germany nor any other European imperialist power would rebel against U.S. domination. During the “Cold War,” the United States and its NATO partners absurdly claimed that U.S. military forces in Europe were necessary to defend these countries against a possible Soviet attack. The last thing that the Soviet Union — after losing at least 20 million people in World War II and facing a United States armed with the atomic bomb it had actually used against Japan in the closing days of the war — would have considered was a military offensive into Western Europe!

Real purpose of NATO clarified

The real purpose of NATO became clear after the Soviet Union was destroyed under the Gorbachev regime between 1985 and 1991. While the Warsaw Pact, the defensive alliance formed by the Soviet Union in 1955, was abolished, NATO was not. Instead, the United States signed up the Soviet Union’s former Eastern European allies into NATO and then expanded this operation further to include some of the former Soviet Republics.

Its attempt to induct Ukraine into NATO has led to the bloody Russo-Ukrainian war.

Even though Russia was now capitalist and thus represented no “social” threat to any existing capitalist nation, the United States was determined to tighten its encirclement of Russia through NATO. The only thing that would satisfy U.S. imperialism was the full transformation of Russia, with its vast natural wealth, into a semi-colony — with the emphasis on “colony,” not on “semi” — of the United States. This policy did not stem from the “evil” personalities of this or that U.S. president, as non-Marxist political progressives would have it, but rather from the very nature of capitalism and the monopoly capitalism that of necessity grows out of capitalist “free competition.”

It is absurd to think that capitalist Russia represents any threat to either Germany or Japan. Indeed, by maintaining their “alliance” with the United States, Germany and Japan risk being dragged into a future U.S.-Russian war — or, in the case of Japan, into a future U.S.-China war. It wouldn’t take many nuclear weapons to destroy either of these geographically rather small countries.

Why then did the governments of Germany and Japan tolerate the presence of U.S. troops decades after World War II ended? And why did the United States insist on maintaining these forces in Germany and Japan? There is only one reason. They were (and are) there to ensure the defeated Axis powers remain U.S. satellites.

The nature of U.S. domination of modern Germany and Japan

The United States maintained its domination of Germany and Japan not only through the stick — military occupation. It also used the carrot. Unlike after World War I, the United States made major economic concessions to the capitalists of Germany and Japan. The U.S. agreed to allow the Germans and Japanese access to its huge domestic market. The refusal of the United States to do this after World War I was one of the underlying causes of World War II.

This policy, however, proved a costly one for the United States. If the U.S. had used its military rule of Germany and Japan to dismantle the German and Japanese economies, U.S. corporations would have had the world market pretty much to themselves for many decades to come. If the U.S. could have gotten away with such a policy, its corporations would have grown vastly richer than they actually did — not that they were exactly poor as it was!

But as World War II was ending, the United States had its hands full with its struggle against the Soviet Union and its allies — the most important of these allies on the European continent being the revolutionary-minded workers of Western Europe — combined with revolutionary China, Korea, and Vietnam, later joined by Cuba and other restless former colonial or semi-colonial countries. The U.S. was eager to transform the former colonial and semi-colonial countries that had been dominated by European colonial countries and Japan into U.S.-dominated neocolonies.

And perhaps more importantly, the United States was afraid of a head-on collision with the workers of Western Europe. After the war, the U.S. did consider forcibly dismantling the German and Japanese corporations. The so-called Morgenthau Plan — named after Roosevelt’s Secretary of the Treasury Henry Morgenthau — aimed at transforming highly industrialized Germany into an agricultural country. In effect, Germany would have been reduced to “third-world” status. The United States had similar plans for a somewhat less industrialized Japan.

If such plans had been implemented, this would have meant huge chronic unemployment in Germany and Japan. It was extremely unrealistic to think that many German or Japanese industrial workers could be transformed back into peasant farmers as their fathers and grandfathers had been. That mode of production and way of life was gone for good. Shortly after the war, in the face of the growing influence of the Western European Communist parties and the rapid growth of other left parties and labor unions, the United States was forced to renounce such plans.

The postwar world takes shape

Unlike after World War I, the U.S. would allow Germany and Japan, as well as Britain and France, access to its huge home market. German, Japanese, Italian, British, and French corporations would be allowed to compete economically with U.S. corporations within the U.S. home market and the world market as a whole. In addition, the United States would guarantee Germany and Japan, both short of raw materials, access to raw materials and foodstuffs. However, no political or military competition would be tolerated on the part of the defeated Axis nations.

Why didn’t the German and Japanese governments rebel against U.S. domination?

Suppose a government came to power in either Germany or Japan that wanted to compete with the United States, not just economically but militarily and politically as well. Such a government would first have to send the U.S. troops and bases packing. Any attempt to do this would mean a major confrontation with the United States and, in the case of Germany, with NATO.

Even if the U.S. government agreed to peacefully withdraw its forces if requested to do so — which is far from certain — the United States could retaliate by limiting the access of German and/or Japanese corporations to the U.S. home market — and to other markets policed by the vast U.S. armed forces. The United States could also use its military power to cut off the flow of raw materials and foodstuffs to any anti-U.S. German or Japanese government that might emerge.

For these reasons, the capitalists of Germany and Japan have shown no desire to challenge the U.S. military — and ultimately political — control of their countries that has prevailed since 1945.

Love of profit, not love of country, motivates capitalists

Since the Japanese and German capitalists have made huge profits under these post-1945 arrangements — much greater profits than they ever made when Germany and Japan were fully sovereign countries before World War II — they have been reasonably satisfied with them. In contrast to the situation after World War I, they gave little support to right-wing nationalist forces — which existed in both Germany and Japan — that sought to restore full independence to these countries on an imperialist basis.

While capitalist propaganda constantly attempts to convince the workers that “love of country” is the highest value, the capitalists themselves are governed only by “love of profit.” Since World War II, “love of profit” has so far kept the German and Japanese capitalists loyal supporters of the U.S. empire.

The status of Britain and France after World War II

Though Britain and France were among the “nominal” victors, they found that their military and political independence vis-à-vis the United States was also melting away after World War II. This became clear during the Suez Crisis of 1956.

In 1956, the Egyptian government under the Arab nationalist President Gamal Abdel Nasser had nationalized the Suez Canal. London and Paris were determined to take it back and put nationalist Egypt back under their semi-colonial control. They teamed up with the government of Israel — as always eager to serve as a colonial policeman to any imperialist power that desired its services — and invaded Egypt. The United States, however, had no interest in British, French, or Israeli control of the Suez Canal. What was important to Washington was the ability of the U.S. military to seize control of the canal in a crisis.

Therefore, Washington told London, Paris, and Israel to end their invasion of Egypt. The United States used its financial power — threatening to withdraw support for the shaky British pound — and was prepared to use even its military power if it came to that. Realizing they were in no position to resist — financially, not to speak of militarily — Britain, France, and Israel were forced to withdraw.

After Suez — despite French President Charles de Gaulle’s later nationalist gestures — it became clear that Britain and France were also unable to seriously compete with the United States either politically or militarily. As was the case with Germany and Japan, they were pretty much limited to competing with the United States economically. Therefore, one of the most important results of World War II was to reduce the other imperialist powers to satellite imperialist powers of the U.S. empire.

The biggest problem the United States faced in 1945 was the survival of the Soviet Union. As long as the Soviet Union existed, there was one major power that was not under the military and political control of the United States. Unlike Germany or Japan during World War II, it represented a rival social system whose existence was made possible by the workers’ revolution of October 1917. In addition, the survival of the Soviet Union, combined with the weakening of the old colonial powers, especially Britain, France, and Japan, created the conditions for many of the colonized countries to struggle to regain their independence.

U.S. imperialism sometimes — though not always — gave a certain amount of support to colonial independence movements from the old colonial imperialist powers when it served to advance its interests. For example, the United States demanded full access to the Indian market after World War II, much to the chagrin of London. This encouraged Britain to give India its independence in 1947 since the costs of holding India in colonial slavery were not justified if Britain could not monopolize the Indian market for itself. And U.S. policy during the Suez Crisis saved Egypt from being turned back into a British semi-colony.

Despite saving Egypt from being re-colonized by Britain during the Suez Crisis, the United States proved to be no friend of the people of the Middle East! If you have any doubts, ask the Palestinians, the Iranians, the Iraqis — or the Egyptian people!

After President Nasser died in 1970, Egypt was transformed into a U.S. neocolony. The same tendencies are visible in India. The United States acted the way it did during the Suez Crisis because it wanted to control the Middle East — it did not want any other imperialist power to control that region. It was willing to share some of the loot it extracts from the Middle East with the other imperialist powers, but only on its terms.

As a result of the destruction of Soviet power between 1985 and 1991, the situation confronting the former colonial or semi-colonial countries changed radically for the worse. Countries that sought to develop independently could no longer look to the Soviet Union and its Eastern European allies for support in their struggle against the imperialist powers dominated by U.S. imperialism.

After the Soviet surrender, the United States imagined that it would be able to rule the world pretty much unopposed in a so-called “New World Order.” It did not, however, quite turn out that way. Since the Soviet Union was destroyed, U.S. imperialism became both more greedy and more parasitic. As a result, the United States faced growing resistance throughout the world under various ideological banners that refused to go away. Far from bringing the peace that Gorbachev and his supporters claimed it would, the first 20 years after Gorbachev saw a series of shooting wars waged by the U.S. world empire against Iraq, Serbia, Afghanistan, and Libya. Eventually, this led to the bloody Russo-Ukrainian war.

A powerful but decaying global empire

By the early 21st century, the U.S. world empire was in decline. Perhaps in the not-too-distant future, a new period of political and military anarchy will succeed U.S. domination, much like it succeeded British domination between 1914 and 1945. Because of the basic contradictions of capitalist production that we have been examining in this work, any capitalist “order” — whether the British-dominated one that prevailed from 1815 to 1914 or the U.S.-dominated one that has prevailed since 1945 — contains the seeds of its own undoing.

But unlike World War I, World War II had a decisive outcome. It established the domination of U.S. imperialism over the globe — not forever, and not unopposed as Washington policymakers dreamed, but for many decades to come. This is the biggest political difference between the post-World War I and post-World War II periods.

The Bretton Woods Conference and the international monetary system of the U.S. world empire

The largely British-managed international gold standard that prevailed from the 1870s to 1914 had worked reasonably well for the capitalist system. It could not eliminate cyclical economic crises, but no international monetary system can do that. Nor was there a debacle on the scale of the 1929-40 Great Depression under the classical gold standard. The regime of the classical international gold standard, with its fixed exchange rates and “limited” economic crises, had encouraged the rapid growth of global trade and the international export of capital — capitalist globalization.

Capitalist Depression phobia

Since the 1929-40 Depression, capitalist economists and governments have been determined to avoid a similar Depression at almost any cost. They feared that the capitalist system might not survive “Depression II.” Even the prolonged retreat of the world workers’ movement that preceded the destruction of the Soviet Union and was then accelerated by it failed to end capitalism’s Depression phobia. Despite the supposed “end of history” that followed the Soviet political collapse, the capitalist class continues to display a curious lack of confidence about its long-term chances of survival.

Micro- & Macroeconomics

As we saw in an earlier chapter, beginning with Keynes, bourgeois political economy split in two. One part, called microeconomics, is simply the old marginalism. Its purpose remains what it has always been — ideological. Microeconomics presents capitalism as an absolute mode of production based on “fair,” mutually beneficial exchanges without exploitation or contradictions. The microeconomists, therefore, hold that capitalism as a system will last as long as human life persists.

The other wing of modern bourgeois economics, macroeconomics, seeks to understand the workings of the capitalist economy within the limits required to develop policies that provide the capitalist governments and central banks with “tools” designed to stabilize capitalism economically — above all, to prevent cyclical economic crises from turning into Depression-breeding super-crises.

However, the bourgeois macroeconomists have not returned to the concept of labor value since that would open the door to the understanding of profit — surplus value — as arising from the unpaid labor of the working class. Though some marginalist concepts can be tacitly ignored when they get in the way of stabilization policies, there has been no real return to classical political economy and still less to the economic science developed by Karl Marx.

As a result, modern bourgeois macroeconomics is empirical, shallow, and superficial. It analyzes only the outward appearances, like all forms of “vulgar economics” do. And so our modern bourgeois macroeconomists have no real understanding of the nature of such basic economic categories as value, money, price, profit, interest, and rent. The “tools” they have provided the capitalist governments and central banks have a powerful built-in tendency to backfire.

One conclusion that the new discipline of macroeconomics drew was that the general price level must never be allowed to fall. We can see the beginnings of the policy of “permanent inflation” in Roosevelt’s New Deal. The Roosevelt administration, quite frankly, sought ways to increase prices.

This was in contrast to the policy of capitalist governments and central banks during the era of the classical international gold standard, which was to maintain the gold value of the currency — the price of gold in terms of the currency — at a fixed level. This was believed to be the key to economic stability. The policies of the gold standard years assumed that the general price level that emerged from market forces was the optimum one.

The macroeconomists claimed — and claim — that this policy was a huge mistake. When prices fall, they point out, the industrial and commercial capitalists tend to put off new purchases of raw materials and machinery and commodity capital, waiting for prices to fall further. Such a situation, the macroeconomists maintain, tends to drag out and deepen depressions.

In addition, falling prices tend to increase the real wages of employed workers. Of course, if falling prices are accompanied by an economic crisis, far less work is available, so the overall standard of living of the workers still falls despite the rise in real hourly wages. Keynes put great emphasis on the tendency of falling prices to raise real hourly wages in his 1936 “General Theory,” considered the foundational work of macroeconomics.

The macroeconomists noticed that prices had generally risen around 1 to 3 percent a year during the prosperity of 1896-1913 that preceded World War I, as well as during other long-term periods of capitalist prosperity, such as the one that followed the gold discoveries of 1848-51. Lacking any understanding of the real nature of value, money, and price, they imagined that correct policies by governments and central banks could ensure an 1896-1913 — or 1848-1873 — type of “creeping inflation” that would generate permanent capitalist prosperity. This policy was later dubbed “inflation targeting.”

Virtually all bourgeois macroeconomists agreed on this. They differed only about the policies that could bring about these — for the capitalists — happy and profitable results. Should the emphasis be on fiscal policy — running deficits during periods of recession or stagnation — or on monetary policy? Regarding monetary policy — the policies of the central banks — should the emphasis be on manipulating interest rates — raising them during economic booms and lowering them during periods of recession — the policy generally supported by the followers of John Maynard Keynes and, for the most part, the central bankers themselves? Or, as later suggested by Milton Friedman’s followers, should monetary policy be based instead on maintaining a steady rate of growth of the quantity of money?

Ironically, the new economic policies urged by the macroeconomists proved, in the long run, to be incompatible with the postwar international monetary system, whose foundations were drawn up at the 1944 Bretton Woods Conference. As Marxists who understand the real nature of value, money, and price, we can easily understand why the new international monetary system was doomed to collapse even before its details were hammered out at Bretton Woods.

The Bretton Woods System doomed from the start

Between about 1870 and 1914, the gold standard dominated the international monetary system. The international gold standard had not come into existence through a formal agreement among the major capitalist countries. Instead, the major capitalist nations of the time found it in their interest to define their currencies in terms of a certain weight of gold and maintain the convertibility of their currencies into gold coins of a given weight. They were more or less obliged to do this because they would risk being excluded from the London capital and money markets if they didn’t.

The gold-exchange standard was a variant of the gold standard that existed in the pre-1914 years. Some countries — especially colonized countries such as India — used pound-denominated British government securities alongside gold to back up their currencies. Owners of the Indian currency would demand British pounds instead of gold itself when they wanted to redeem Indian currency.

Even some of the European central banks used British government securities alongside gold as reserves, though as World War I approached, they began to shift all their reserves back into gold itself. This hoarding of and competition for gold reserves among the European central banks as they prepared for the approaching war may have played a role in the 1913-14 world recession that immediately preceded World War I — a recession that followed the crisis of 1907 by only six years, considerably less than the normal length of the industrial cycle.

After World War I, there was an attempt to restore the pre-war international gold standard. However, it was undermined by the high prices of commodities relative to underlying values. As we saw in earlier chapters, this expressed itself in a shortage of gold.

Therefore, to a much greater extent than before World War I, the gold-exchange standard was used in place of the “pure” gold standard following World War I. Countries short of gold used dollar-denominated U.S. government securities alongside their scarce gold reserves to back up their currencies.

Also, some countries, such as Britain, used a weaker form of the gold standard called the gold bar standard rather than a gold coin standard. Instead of coining gold, the government and the Bank of England were willing to sell gold bars — bullion — only for large sums of pounds — Bank of England banknotes.

Unlike before the war, a person who owned five-pound notes could no longer go to the nearest branch of the Bank of England and redeem them for five gold sovereign coins.

All these modifications of the classic gold standard were attempts to economize on scarce gold. As we now know, it all ended in the super-crisis of 1929-33 — with its unparalleled unemployment — which finally solved the problem of the world gold shortage by increasing the purchasing power of gold and, at the same time, increasing its supply measured in terms of weight. However, the price of these happy results was the super-crisis itself and the Great Depression it created.

As the inevitable U.S. victory approached in 1944, a conference was held in Bretton Woods, New Hampshire, to lay out plans for an international monetary system to underpin the now U.S.-dominated world capitalist system.

Unlike after World War I, gold was plentiful, but its distribution was even more lopsided, with most of the world’s gold held by the U.S. Treasury. This was only one manifestation of U.S. domination. While the European and Japanese economies emerged heavily damaged by the effects of World War II, U.S. industrial might remained completely intact. Not a single bomb had fallen on the 48 states that then made up the North American union, nor had a single shot been fired there. At Bretton Woods, the United States held all the cards.

The Bretton Woods Conference created three new “international institutions” — all dominated by the United States — that survive in the 21st century. They are the International Bank for Reconstruction and Development, better known as the World Bank; the International Monetary Fund; and the General Agreement on Tariffs and Trade, or GATT, since succeeded by the World Trade Organization, or WTO for short. GATT — and its present-day successor, the WTO — are the organizations through which the United States controls access to its huge home market, enforces its access to foreign markets, and imposes “intellectual property” rules under which human knowledge is treated as a commodity. (For an explanation by computer scientist Richard Stallman of what is wrong with the concept of “intellectual property,” see http://www.gnu.org/philosophy.)

The World Bank was initially envisioned as providing “soft” loans for postwar European reconstruction. More important to the operations of the postwar international monetary system was the IMF, which provides loans to meet short-term liquidity crises, often with punitive strings attached.

The new international monetary system was an extension of the gold-exchange standard. The United States promised to exchange dollars held by foreign governments and central banks for gold bullion at the rate of one ounce for every $35 presented to it. As far as governments and central banks were concerned, the dollar would, in the postwar world, be a form of credit money, not the token money it became after 1971. As far as everyone else was concerned, the dollar would be token money, with the dollar price of gold varying on the open market.

However, the promise that the United States made at Bretton Woods to redeem its dollars at the rate of an ounce of gold for every $35 meant that the U.S. could not allow the “free market” price of gold to rise much above $35.

If the dollar price of gold rose significantly above $35 on the open market, foreign governments and central banks would be tempted to redeem their dollars with the United States for gold at $35 an ounce and then sell that gold on the open market at the higher dollar price. With the extra dollars “earned” in this way, they could buy still more Treasury gold at $35 and repeat the operation, round after round, until all the gold in the U.S. Treasury was exhausted.
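A toy calculation — all figures here are hypothetical round numbers chosen for illustration, not data from the text — shows how quickly such round-tripping would drain the Treasury’s gold:

```python
# Illustrative sketch of the arbitrage a free-market gold price above $35
# would have invited under Bretton Woods. All quantities are assumptions.

def arbitrage_round(dollars, official_price=35.0, market_price=40.0):
    """One round: redeem dollars for Treasury gold at the official $35/oz,
    then sell that gold on the open market at the higher price."""
    ounces_bought = dollars / official_price   # gold drained from the Treasury
    proceeds = ounces_bought * market_price    # dollars recovered on the open market
    return ounces_bought, proceeds

dollars = 35_000_000.0        # hypothetical starting stake: $35 million
treasury_gold = 10_000_000.0  # hypothetical Treasury gold stock, troy ounces

rounds = 0
while treasury_gold > 0:
    ounces, dollars = arbitrage_round(dollars)
    drained = min(ounces, treasury_gold)
    treasury_gold -= drained
    rounds += 1

print(f"Treasury emptied after {rounds} rounds of redemption and resale")
```

Because each round returns more dollars than it consumed, the drain on the Treasury grows geometrically — which is why even a modest gap between the official and market prices of gold could not be tolerated.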

Other currencies would be linked to the dollar at fixed exchange rates, and through the dollar would also be defined in terms of weights of gold, as had been the case in the days of the classic gold standard. Indeed, the fixed exchange rates allowed smaller fluctuations around the par value than the old gold points of the pre-World War I international gold standard had allowed. To this extent, the Bretton Woods system was a “better” gold standard than the old pre-1914 system had been.

However, under certain circumstances, currencies could be devalued against the dollar — and gold. Suppose a country faced a balance of payments deficit and lost reserves — essentially U.S. dollars. Under the Bretton Woods System, it could raise interest rates, which could mean recession; it could borrow from the International Monetary Fund and hope the balance of payments deficit would go away without a recession — or perhaps raise interest rates more gradually, resulting in a milder recession — or it could devalue its currency. Indeed, major devaluations of European currencies occurred in 1949, and France devalued its currency several times during the late 1950s economic crisis.

The macroeconomists who designed this system hoped this would allow governments and central banks far more flexibility to fight recessions than they had had under the pre-war international gold standard or during the abortive attempt to revive the gold standard after World War I.

By implication, the one currency that could not be devalued was the U.S. dollar. The dollar was defined as 1/35th of a troy ounce of gold and was the anchor of the entire system. If the dollar went — that is, if it were devalued — the entire Bretton Woods System would collapse.

The hopeless contradiction of Bretton Woods

As macroeconomists saw it then and still see it today, the huge decline in prices in the early 1930s brought the capitalist system to the verge of collapse. The key to avoiding a new Depression with a capital “D” was to ensure that no major price deflation would be allowed. Instead, the job of governments and central banks was to ensure a gradual rise in the general price level — about 1 percent to 3 percent a year — much as had occurred between 1896 and 1913 and between 1848 and 1873.

However, these policies conflicted with the basic economic law that governs the capitalist system, the law of the value of commodities. The old international gold standard had been kept “healthy” by periodic drops in the general price level that offset the periodic inflations of the general price level. In this way, prices fluctuated around an axis ultimately determined by their labor values.

But the new economic doctrines held that governments and central banks must do everything they can to prevent such periodic falls in the general price level.
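The contrast can be illustrated with a toy calculation — the 10 percent boom inflation and the five-cycle horizon below are assumptions for illustration only. Under the old gold standard, each boom inflation was offset by a crisis deflation, so prices returned to the value axis; suppress the deflations, and the price level drifts permanently above values:

```python
# Toy comparison (all rates assumed): the classical gold standard lets prices
# oscillate around the value axis, while "permanent inflation" policy,
# by suppressing every deflation, pushes prices ever further above values.

classical = 100.0  # price index, starting on the value axis
managed = 100.0

for cycle in range(5):
    # Classical gold standard: boom inflation offset by crisis deflation
    classical *= 1.10       # boom: prices inflate 10%
    classical *= 1 / 1.10   # crisis: deflation restores the old level

    # Managed capitalism: the offsetting deflation is prevented
    managed *= 1.10         # boom inflation, as before
    managed *= 1.00         # no deflation allowed

print(f"classical price level: {classical:.1f}")  # back near the value axis
print(f"managed price level:  {managed:.1f}")     # well above values
```

After five cycles the managed index stands roughly 61 percent above the value axis while the classical one has returned to it — the divergence that, as the next paragraphs argue, the frozen $35 gold price could not survive.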

If the Bretton Woods System had survived, this would have meant that over time, prices would have progressively and permanently risen above values. The dollar price of gold would be frozen at $35, but all other prices would gradually drift higher. If this happened, the industrial capitalists who produced gold would experience a gradual but relentless rise in their cost price but would be unable to raise the “price” at which they sold their particular commodity — gold. That would be fixed at $35 a troy ounce.

This implied that the profitability of mining and refining gold would progressively fall and sooner or later disappear altogether. And under capitalism, when it isn’t profitable to produce a commodity — even if the commodity in question is the money commodity itself, in whose use value the value of all other commodities is measured — it isn’t produced.

The combination of the policy of “permanent inflation” and the Bretton Woods System guaranteed that the gold shortage of the inter-war years would not only reemerge sooner or later but would grow progressively worse as gold production fell toward zero. Sooner or later, something had to give. And it did. What has to be explained is not why the Bretton Woods System collapsed but why it lasted as long as it did. We will examine this question in the coming chapters.

In addition to these hopeless contradictions involving the new international monetary system, capitalist governments faced a grave, shorter-term problem. As we saw in the preceding chapters, the decision of the United States to raise tariffs during the 1929-33 super-crisis meant that European countries could not possibly repay their debts to the United States except through bankruptcy. Wall Street moneylenders ended up losing much of the money they had lent to Europe.

As we saw above, right after World War II, Washington was seriously considering the destruction of the German and Japanese economies. Because of this, the men on Wall Street — despite the huge sums of idle money capital burning holes in their very deep pockets — were unwilling to lend money for purposes of European reconstruction. They feared that the Europeans would not be able to repay their debts again.

For its part, Washington was alarmed by the strength of the Communist Parties in Europe — the Italian Communist Party had the majority of the population behind it right after the war. Fearing it was on a headlong collision course not only with the Soviet Union but the workers of Western Europe and Japan as well, Washington was forced to give up any idea of dismantling the economies of Germany and Japan.

The so-called Marshall Plan signaled a definite shift by Washington away from destroying the European and Japanese economies. The Marshall Plan program of “soft” loans and grants to Western Europe was a signal to the Wall Street moneylenders that, unlike after World War I, this time it was safe to lend money to war-torn Europe and Japan. The so-called economic “miracles” of the 1950s and 1960s that followed were crucial to the survival of the capitalist system during these critical years. As happened a century earlier, after the 1848 revolutions, the radicalization of the workers in Western Europe and Japan was drowned in a tide of rising capitalist prosperity.

But the newly established U.S. world empire paid a price. As capitalism boomed in Western Europe and then even more so in Japan, the U.S. faced economic competition — though not military or political — on a scale it had never experienced.