Gerald Davis, Managed by the Markets: How Finance Re-Shaped America. Oxford: Oxford University Press, 2009. 304 pp. ISBN: 978-0-19-921661-1.
Judith Stein, Pivotal Decade: How the United States Traded Factories for Finance in the Seventies. New Haven, CT: Yale University Press, 2010. 365 pp. ISBN: 978-0-300-11818-6.
Daniel T. Rodgers, Age of Fracture. Cambridge, MA: Harvard University Press, 2011. 352 pp. ISBN: 978-0-674-05744-9.
As the second decade of the twenty-first century begins, Americans seem uncertain, anxious, concerned with economic instability, impatient with themselves and their leaders, fearful of where society and civilization are heading. Part of the problem is that expectations are high: people have come to expect too much and things do not change fast enough to suit their tastes or desires. And part of the problem is that things have not only changed but changed so quickly as to be imperceptible except from hindsight, transforming the world but leaving minds where they were in an earlier time. It is difficult for most Americans, for example, to contemplate, much less understand, why their nation is no longer the undisputed hegemonic power that it was immediately after the Second World War. It is likewise difficult for them to see why their grandparents and parents had full-time, well-paid jobs and they do not. Still a third difficulty is that there is so much to be done and not sufficient time to do it in. Without any intimation of apocalypse, Americans are both aware of and confused by the host of problems that plague their own country and the world: climate change; increasing poverty and vast chasms of inequality not only between poor and rich but between the very rich and the middle classes; the scarcity of fossil fuels and the increasing world demand for them, which raise the price of existing forms of energy, whilst new forms are not ready for commercial distribution on a large scale. For some, traditional culture is being systematically destroyed by people seen as lacking respect for American “values”; for others, traditional culture and mores constitute an impediment to both individual freedom and civil rights. And when Americans look to government for a solution to their myriad problems, it appears both intrusive on the one hand and too weak to change things on the other.
Indeed, America’s problems are rooted in its politics. To be quite blunt about it, what is known as liberalism in America today is for the most part really centrist conservatism, a term indicating politicians who are attempting to hold a broken polity together and also to see if this same polity can be tweaked slightly to yield a bit more reform, either to cut the cost of government or to appease a base in great need of help. Every day the Right in America gets more extreme, more regressive and reactionary, enough to give real conservatives like Edmund Burke, Samuel Taylor Coleridge, Benjamin Disraeli, Winston Churchill, and Charles de Gaulle a bad name.
Though the Democratic Party, being less extremist, is not as responsible as the Republicans for the sorry state of American politics, both sides have difficulty in imagining the shape of a new society. In different ways the two sides are irrelevant to substantive change and have become largely vehicles for the expression of impotent anger for ordinary Americans and sources of wealth, power, and status for exploitative elites at the top of the pecking order. The Left is melioristic, instrumental, and overly moderate. The Right affirms the illusion that it is possible to find “the real America,” which is a method of denying reality, turning back the clock to a time when the United States was a white man’s land, when there were no cities in which minorities are the majority, when gays and women were kept in their place (the women at home and the gays in the closet) and when there was a free market that operated without governmental supports and always achieved optimal results. Illusions such as these can be dangerous for the simple reason that living with an unreal picture of the world can easily degenerate into nihilism, once the illusions are dispelled by reality. Without a vision of future possibility, electorates cannot be galvanized and movements for reform are non-existent or remain ineffective expressions of popular discontent that never rise to the level of a new democratic politics.
There is an absence of statecraft, by which I mean both the willingness and the capacity to build and rebuild the state, to make it strong and deep, not merely to protect Americans but so as to care for and ensure the common good. States need sufficient power to restrain interests and to respond affirmatively to the needs of the citizenry as well as to move society forward to meet pressing challenges that result from social and economic transformation. One often hears talk about state building or statecraft as I mean it here when discussing underdeveloped nations, “failed” nations or emerging nations that are not yet “great powers.” But powerful nations such as the United States never apply either the diagnosis or the prescription to themselves.
Americans, of course, have always been suspicious of state power, beginning with their animosity against the British government in the American Revolution. In the United States, compared with Europe, the business community has been much less accepting of restraints imposed by government. Thomas McCraw (1984) has shown, however, that industry profits from regulation. During the Depression the banking industry’s profits and reputation reached a nadir. What McCraw describes as the work of “statecraft” involved in the creation of the Securities and Exchange Commission (SEC) – establishing borders between the respective economic roles of government and business; policy construction; enactment of legislation; pressure and patient persuasion of relevant parties; regulatory implementation – was largely responsible for the industry’s recovery. Yet contemporary American politicians have failed to draw any lessons from historical models of successful statecraft, even though, as I argue here, many of our problems derive from the fact that the American state is, at least with respect to domestic policy and programs, extremely weak.
I am not, however, asserting that America is a “failed” state. The United States is the oldest democracy in the world and though its political institutions at the moment are dysfunctional, there is no reason to doubt that over the long term it can still use its strong tradition of democracy and the vast talents of its citizenry to repair whatever is broken and in need of reconstruction. If not “failed,” however, it seems to me that America at the beginning of the twenty-first century is “stuck” and the reason for this is that the American state is an inadequate instrument for meeting the challenges and needs of current circumstances. The electorate is dissatisfied with politicians for good reason; they play their political games but they do not do “state work.” Ordinary folk intuitively understand this last statement. The popular dismay over the war in Iraq – one of the reasons for Barack Obama’s victory in 2008 – and the present dismay over the endless war in Afghanistan relate to the deeper concern that while America is “state building” in the Near East and central Asia, it is ignoring “state building” at home. All the above suggests a follow-up question, “How can America get unstuck?” I shall endeavor to respond to this question at some length in this essay.
Was there a moment in the late twentieth century when our society might have become different from what it now is? In the last year, several books have appeared which answer this question by exploring the 1970s as the “pivotal decade” in which both America and global society shifted ground. Judith Stein’s history of American politics and economics in the seventies, aptly entitled Pivotal Decade, shows how the United States switched from being a manufacturing society of “factories” to one controlled by financiers. Gerald Davis’ Managed by the Markets discusses the transition from the stable and somewhat socially responsible postwar business world of “managerial capitalism” to the volatility of “shareholder” capitalism since the 1970s. And Daniel Rodgers, in Age of Fracture, provides an extensive intellectual history that records a transformation from “a post-World War II era thick with context, social circumstance, institutions and history” which “gave way” to an individualistic and disaggregated culture that “stressed personal choice, agency, performance and desire” (3). Together, these three books provide some clues as to why, in Trollope’s phrase, “the way we live now” induces in us a strong sense of uncertainty and instability.
In spite of their different subject matters, all three books have much in common: they are all narratives about structural crisis, about the inadequacy or breakdown of economic, social, and political institutions and social and political consciousness. Put another way, each of these books demonstrates how a variety of discourses that once allowed citizens to put their society into historical context and, at the same time, chart its future agenda, have come undone. Both practical forces and perceptions have altered in ways that have made it extremely difficult to understand how American society operates and how it might be improved.
Davis, Stein, and Rodgers mostly discuss American history (economic, political, and intellectual) and consider social, political, economic, and cultural transformation in the late twentieth century largely from an American perspective. Implicit in their arguments are dialectical contradictions relating to American history and to the United States as a nation that are never fully resolved, perhaps because they are not capable of resolution. The first is that America is a nation with its own particular history and development, by which neither I nor the three authors under review mean to suggest that the United States is necessarily an “exceptional” nation. Rather, America is a country with a history that follows its own path: as one speaks of a British model of political economy or culture or a French model or a Scandinavian model, one may also speak intelligibly about an American model.
At the same time, and particularly since 1945 and the end of the Second World War, the role of the United States in the world has been, to say the least, hegemonic. Or, to put this another way, as Gert Schmidt of Erlangen University has said in this journal, America is a “global nation” (GSJ no. 3). What does this mean? I believe it means that the United States is an extremely powerful nation, a nation with great resources, a large and powerful economy, a diverse and often talented population, a nation equipped with a vast, well-trained and technologically sophisticated military apparatus and with enormous political opportunities, yielding freedom of action as well as political constraints, all of which follow from its global hegemonic status. Sometimes Americans use their power with responsibility and discretion, wielding what political scientists call “soft power” and reserving “hard power” for issues of last resort. And sometimes, as in Vietnam and in Iraq, American clout is used unwisely, unilaterally and wastefully, allowing false doctrines like “pre-emption” or the Sino-Soviet conspiracy to justify bad choices and the unnecessary use of “hard power.”
However “global” the United States becomes, it follows the path set forth by the American political model, by the history of the American polity. I emphasize this for a particular reason related to the three books under review. Gerald Davis talks about the transition from managerial capitalism to shareholder capitalism, mainly in the context of what happened in America in the fifties, seventies and then globally in the late twentieth century. Yet once he gets to the seventies, Davis seems to identify global capitalism wholly with American capitalism, an identification which I think is wrong. Rather than identify global capitalism with the United States, it might be more correct to argue that in the late twentieth century successive American governments allowed global capitalism to forsake any connection with American national or local interests. To a certain extent global capitalism, in its restless search for profit, breaks through the borders of national economies and loses its national character. We are not the only capitalist economy in the world and the global corporation not only has multiple or multi-national loci but also promiscuously expands and expropriates territory, market share and labor force without regard to national boundaries or interests, under many auspices and many flags (or simply, in line with Davis’ central argument, under its own “brand” banner).
Automation and robotization displaced many workers, even where American manufacturing was not sent abroad. But deindustrialization has largely to do with globalization and with what I call inter-capitalist competition. For the purposes of efficiency, workers would have been laid off and factories closed both because automation superannuated a labor-intensive workforce and because other places were able to make certain kinds of goods at cheaper cost and higher quality than the same goods made in the United States. In the light of this complex scenario, some elements of American manufacturing were bound to be affected by the advent of new technologies and inter-capitalist competition.
But a different politics and strong statecraft might have somewhat moderated the exodus of good-paying jobs for ordinary folk. The historical path of the American model of government, especially in the immediate postwar era, may simultaneously have blinded Americans to their own interests and failed to provide the institutional means – in the Congress or the presidency – to initiate, much less achieve, the kind of industrial policy and, I would add, workforce training, that would have saved the nation from “deindustrialization” in the seventies and the remainder of the twentieth century. A constitutionally complex and weak state, dependent on growth (production and consumption yielding aggregate demand), could not provide the political wherewithal, once growth diminished, to enforce or sustain an industrial policy or to design an educational system at the national level that would have honed skills required to create and sustain innovative and quality-oriented manufacturing.
Whereas in different ways in Europe and Asia the modern state has been the directing force in the development of society and, at least in Europe, also ultimately a support for democracy, in the United States weak government has resulted in mistrust of government and further constraints on state action. Our Civil War was partly fought over slavery, but it was also fought to ensure that the weak bonds of national and governmental connection were not altogether destroyed by allowing the South to secede. Lincoln was a great statesman, as demonstrated by his having prevented Great Britain from recognizing the South, but it was the Union, not the state, which he wanted to preserve and he had to reinterpret the Declaration of Independence and the Constitution to do so. If J.G.A. Pocock and Quentin Skinner (1975, 1978) are right, the founding fathers understood, better than did most men of their time, democratic statecraft and the importance of a public sector both to limit the power of commerce and, in ways related to the common good, to promote it. But they also built into the Constitution many impediments to a strong state, largely because of the erstwhile independence of the thirteen colonies under British rule. The American state as known today is the result of the work of late nineteenth century and early twentieth century Progressive politicians who built, for present purposes, an inadequate countervailing power against the rapidly developing power and autonomy of big business, and the threat posed by business to both government and its citizens.
The word statecraft is conventionally associated with the word statesman, which usually refers to some kind of diplomacy relevant to foreign policy. Scholars who write about “emerging nations,” however, use the word statecraft to refer to the actual act of building and rebuilding government. In what follows, I use statecraft to underscore the need for the United States to do the same. The Founders created an eighteenth-century state inadequate to the twenty-first century. The statecraft or “state work” of the early twentieth century, the product of the Progressive movement, was, however, never completed, with the result that the United States has big government but not government which can do long-range planning for the public good and not a government which is embedded in the hearts of its citizens as a positive force for democracy and for the interest of the whole of society.
Since the early twentieth century, while there has been much specialized policymaking and the creation of hundreds of “think tanks” devoted to the minutiae of policy, there has been little attention to statecraft. Government has been portrayed as the enemy of freedom and instead of the cooperation and sense of common purpose required in an interdependent global era, a premium has been put, at least by some, on an individualism so extreme as to undermine both the communal institutions of civil society and the state itself. Policymaking without sufficient power may provide ample employment for an intellectual class trained at the best universities, but without a deep state tradition such policymaking scarcely serves those who are its presumed concern – the American people – while permitting the divorce of government from its citizens at the same time that it allows the power of money and status to hold sway over justice.
My motive in choosing to review these three books about the seventies and onward through the late twentieth century is that they roughly discuss the same historiographical period but are nonetheless dissimilar in their disciplinary approach and their narratives. Davis is an economic sociologist; Stein is a nitty-gritty historian of American political economy; and Rodgers is a superb historian of culture and ideas. Yet, for all their differences, the three books complement each other, intersect, and together tell a much larger and more important story than if read or reviewed separately. Davis, Stein and Rodgers discuss momentous developments in the last three decades of the twentieth century – a changeover from the ordered business culture of corporate managerialism to the volatile and competitive culture of shareholder capitalism; a structural crisis experienced in the seventies – deindustrialization or the decline of American manufacturing – which the liberal government of the United States, with its inability to intervene directly in the economy, was incapable of managing. And as Rodgers’ analysis implies, ideas reflected in complex ways the United States’ inability to deal with structural crisis or meet the challenge of the new global order. The irony of his story is that as the already weak social and political institutions of the postwar era began to crumble, Americans turned towards individualism while losing touch with their history.
Yet none of these books, or the stories they tell, deals with the main theme of this essay, which is that America’s fate in the late twentieth century cannot be understood properly without an analysis that focuses on the inadequacies of American government as a tool for managing the myriad problems of American society. One need not accept Hegel’s notion that the state is like a God to agree with his view that government is the pinnacle where the conflicts of civil society and family find mediation. Would we think about the English Reformation, for example, without discussing the development of the English state under Tudor monarchs such as Henry VIII and Elizabeth I? So why, in talking about America in the late twentieth century, do we fail to consider the absence of state development or, worse yet, political will in service of the state’s destruction?
In part the answer to my questions can be found in the grand opus of Karl Polanyi, The Great Transformation: The Political and Economic Origins of Our Time (1944), where he shows that in modernity and industrial society, for the first time in history, thinkers and politicians and the general population privilege the economic sphere over the political sphere, so that focus is on the economy and the essential role of the state is in the shadows. Moderns tend, in other words, to be “economistic.”
Another reason for omitting consideration of the role of government has to do with contemporary American politics. Both Left and Right, in different ways, are anti-statist. The Right thinks (or at least argues) that state bureaucracies are an impediment to the optimal functioning of the market; the Left, at least since the war in Vietnam, fears government as a threat to civil liberties and human rights and also believes that the state is unresponsive to the real sites of democratic activism and possible reform in neighborhoods and municipalities. If one is a libertarian, a member of the Cato Institute, then one might as well do without any kind of state: libertarianism is anarchism with a capitalist flavor. If one is a Leftist, it’s likely that non-governmental organizations (NGOs) are seen as an alternative to government, which helps me understand why the Left is so often ineffective and irrelevant. Scholars like Davis, Stein, and Rodgers are implicated, like all of us, in the mind-sets of our time; they are citizens as well as authors. As Fred Block has shown in State of Innovation (2011), one has to delve deep to identify the indispensable role and activities of the state, which are certainly there but, like God in Racine and Pascal, “hidden” or largely covert.
Gerald Davis’ Managed by the Markets: How Finance Re-Shaped America might well have another and perhaps more accurately descriptive title – “The Death of the Corporation.” The story that Davis tells is a story of how corporations, which were once real entities that made real things – the proverbial widget or steel and automobiles – turned into a mere “nexus of contracts” which might not make anything in the United States but which nonetheless continue to exist, as a kind of Potemkin village, for two related purposes: first, to establish and support a brand name; and, second and most important, to create profit and value for shareholders.
In the course of telling his story, Davis redefines the meaning of what we have come to describe as post-industrial society. For him, post-industrialism is not merely a service economy as opposed to a manufacturing economy. It is rather an entire economic and social system wholly shaped by finance, in which everything and everybody are related to share price and to the stock market. Corporations in this system are thus signified by the price of their stock market shares on any particular day or in quarterly reports on profits or loss. Contemporary corporations have lost all connection with communities, with their workforce and with any kind of social responsibility. Everything of importance is about the bottom line and the language to describe human and social activity has changed accordingly. We are not so much human beings as people who make investments. In educating ourselves, we invest in our own “human capital.” Communities are described as either having or lacking “social capital.” A house is not a home, but a “tax-advantaged option on future price increases.” And if we nurture our children and enable them to grow up into mature men and women, what we are doing is described not as parenting, but rather investing in the “social and human capital” of the future (vii).
This is a radical and to an extent original thesis, because what Davis is describing goes beyond the ordinary process by which capitalism corrodes its own cultural supports, debasing the uniquely human goods of labor and of aesthetic, moral, and religious production that operate essentially outside the market into mere “things” whose value is determined, via the processes of reification and commodification, by the “cash nexus” of the market. Back in the late 1970s, Fred Hirsch, in Social Limits to Growth (1977), feared that capitalism would destroy the traditional values and institutions that provided the binding (erotic and libidinal) energies that impelled it to change and even “revolutionize” the world; the more value itself was conditioned by market circumstances, the more universal value and creativity would diminish and disappear. Marshall Berman, in All That Is Solid Melts into Air (1982), followed Marx in arguing that the meaning of modernity (and not accidentally, capitalism) is that it transforms the world dialectically, at one moment destroying all that stands in its profit-seeking way and at another moment restlessly seeking after what is new and useful and can add to its energies. Berman’s interpretive add-on to Marx is that a capitalist society can also give birth to a democratic politics. For Berman human beings in modernity are endlessly suspended between past and future, between the corrosive character of capitalism and its dialectical opposite, the vast creative energies that are let loose within the ever changing and evolving built environment, namely the city, which he, Berman, sees as the ground (Grund) in which democracy is established and in which its endless struggles both for and against capitalism are “sited.”
Davis makes his argument less dramatically than Berman. Focusing on transformations at the heart of capitalism, he provides the reader with an interesting and relatively detailed historical account of the development of the modern corporation. Following the work of historians like Alfred Chandler and his notion of “the visible hand,” Davis sees the typically large modern corporation as beginning in the late nineteenth century as an organizational and management entity that quite purposively rationalized and unified entire industries – oil, steel, railroad, chemicals, farm and construction equipment, electricity, etc. Most of these giant corporations were created by ruthless entrepreneurs who destroyed or absorbed their competitors – the so-called “robber barons” – along with assistance from great financiers like J.P. Morgan who wedded finance to great manufacturing corporations to make the United States by 1890 the world’s largest and most efficient industrial nation.
Though finance was crucial in the creation of great corporations, its influence waned as the corporations grew, because, as production improved, manufacturing profits became retained earnings that were plowed back into increased production and additional corporate growth financed by corporate equity. To promote further growth, corporations also went “public” and used funds provided by sale of stock to a large number of shareholders who had but a miniscule interest in the corporation, thus creating corporations that were not so much directly owned by the original founders as managed by corporate managers. This was the development advanced most prominently by Berle and Means in The Modern Corporation and Private Property (1932) and which Davis describes as a process whereby industry became highly concentrated, or centripetal, in terms of market share and management but also increasingly dispersed, or centrifugal, with respect to ownership.
In order to render their mega-corporate status legitimate, corporate managers devised a number of strategies to make their corporations publicly acceptable or, as Davis pungently describes it, to give the corporation “a soul.” First, of course, they created enormous advertising and public relations campaigns, both to increase sales of the products they manufactured and to convince people that their corporation served its workers, the larger public, local communities and the nation: AT&T was not merely a mega-corporation that monopolized phone service, but “Ma Bell” with millions of shareholders and employees. In the immediate postwar era, employers strove to chasten labor’s power (something that Davis does not mention) via the Taft-Hartley Act of 1947; they feared a union movement that stressed employee militancy. But once unions were weakened, the same employers provided a corporate welfare state for their workers: pensions; health insurance; internal promotion ladders; employee country clubs replete with restaurants, gyms, pools. And where corporations lacked social responsibility, a business-oriented Republican president, Richard Nixon, stepped in to mandate occupational safety (OSHA, 1971), concern for the environment (EPA, 1970) and affirmative action (the Equal Employment Opportunity Act, 1972).
It is unlikely that Nixon meant to constrain business. Rather, he was less myopic than business itself, understanding its long-term interests as opposed to its short-term gains. And he was a great political strategist, not unlike Bismarck, who attempted to co-opt normally Democratic interest groups and Leftist goals while simultaneously lambasting Democrats and other elements of the Left as unpatriotic, elitist, socialist and irreligious, so as to win over the South (the “Southern strategy”) and working-class voters, his so-called “silent majority,” whose gut-level nationalism clashed with protest against the war in Vietnam and the Cambodian invasion.
The late fifties and sixties were the heyday of managerial capitalism. Davis mentions the oil crisis of 1973 as its terminus, but there were many other factors that caused its downfall. Growth in productivity lowered employment and destroyed the connection between managers and their once large workforces. Service employment, which replaced manufacturing, did not always provide benefits like pensions and health insurance. Conglomerates, in particular, were unwieldy and some of their divisions were often unprofitable. It was difficult to make a case for an auto company owning a baked goods firm.
Academic economists were at the forefront of the attack on managerialism and conglomerates. They argued that large, diversified corporations were often unprofitable or, more precisely, that their share value was low, inasmuch as the whole was less valued than its parts. In defense of shareholders, takeovers were justified. Moreover, managers had made corporations into what they were not: social institutions, whereas, in fact, “contractual relations are the essence of the firm . . . legal fictions which serve as a nexus for a set of contracting relationships among individuals” (83). The Reagan administration seconded the economists by promoting takeovers. As a result of takeovers, between 1980 and 1990 one-third of the largest corporations in the United States had disappeared as independent entities. Shareholder value became the mantra of corporate executives, who were now compensated with shares or stock options, a further incentive for “shareholder capitalism.” And managers and their employees parted company. Managers divested unprofitable segments – usually in manufacturing where global competition was rife – of their corporations and many workers were let go to find new jobs without benefits of any kind. “Shareholder capitalism” meant the beginning of unstable times for ordinary folk.
The disintegration of the corporation meant the transformation of production. Shareholder-oriented firms had no responsibility to workers or communities and since their first priority was profit, they could make goods anywhere. OEM (original equipment manufacturers) corporations that had heretofore made their goods from scratch in the United States now distributed manufacturing on a global basis: one part made in Asia; another in Europe; still another in Latin America. About twenty per cent of American workers were displaced by “outsourcing” and similar practices. These workers were absorbed by service industries (which, with new technologies, would also in time be outsourced), the chief of which was finance.
Traditional banking went the way of the old socially embedded corporation. In a globalizing world, potential borrowers could raise money anywhere. The habit of depositing funds in banks no longer made sense when most other forms of investment, such as mutual funds, paid more. New cybernetic technologies allowed bankers to follow corporations in abandoning local connection and communal responsibility. The globe became an investment market in which all kinds of debt could be bundled and sold as securities. Because these securities were widely dispersed, it was assumed that risk was lessened, that investment in debt was shared. Government encouraged securitization by delocalizing banks and, through the repeal of the Glass-Steagall Act, by allowing banks and investment houses to merge.
Davis goes on to tell a relatively familiar story about how securitization eventually led to the financial crisis of 2008. Computerization allowed banks to analyze debt bundles and rate each part – from the debt of worthy borrowers to that of those likely to default on their loans. Investors around the world who bought these securities were thus able to assume that they were safe investments. With the advent of globalization, financial flows increased and financial services became a key source of profit. With deregulation, banks became “too big to fail” and lacked the kind of responsibility that had bound their predecessors to borrowers. Huge profits and huge annual payouts for banking executives discouraged any sense of responsibility to community or borrowers, and in an unregulated market speculation and profit went together. Banks became casinos in which the high roller was king.
Davis is correct in arguing that finance massively reshaped American corporations and global markets. But his discussion does not really explain why this happened and what it has to do with the larger trajectory of American history in the late twentieth century. And he accepts fully Daniel Bell’s idea of the “inevitable” transition from an industrial economy to a service-oriented or “post-industrial” economy.
In this he is largely mistaken. Post-industrialism is a somewhat appropriate term for the new technologies of contemporary life – cybernetic, digital, and robotic. But there are no “inevitable” transitions in history, and post-industrialism is a misnomer for a world that has become a global factory. Manufacturing still thrives in Japan and in many European nations, which have kept important parts of their manufacturing economies, and in nations throughout the world – in Asia, Eastern Europe, Latin America, even parts of Africa – which make manufactured goods for Americans to consume. Most Americans, it is true, make their living in service industries, particularly financial services and healthcare, which have grown to constitute about 45 per cent of the American economy. Manufacturing has declined sharply in the United States since the late sixties, now representing about 10-15 per cent of employment. And along with the decline of manufacturing has come the decline of unionism, inasmuch as the heart of the union movement was found in manufacturing industries such as automobiles and steel.
Manufacturing employment decreased in the United States for several reasons, some of which Davis mentions, such as automation which increased productivity and superannuated labor, and also mergers and acquisitions, which reduced workforces to avoid duplication of effort. But what he does not mention is that the decline of manufacturing in America was in many ways a conscious choice that both businessmen and politicians made in order to deal with inter-capitalist competition on a global basis. In industry after industry – textiles, electrical appliances, steel, automobiles, high precision instruments, etc. – the United States lost the competitive battle with Europe and Japan and subsequently with the emerging economies of China and the Asian “dragons.” The manufacture of American products was globally outsourced in search of inexpensive labor in emerging economies – a lowball strategy to increase profit even while global market share for American corporations was decreasing.
As manufacturing declined to meet global inter-capitalist competition, Americans turned to service industries for employment and these industries were in turn enhanced by the creation of new cybernetic and digital technologies. Financial services and financial products were reshaped by the need to generate profit and wealth – the need for growth – in spite of the loss of profit and wealth from manufacturing. Growth was required for the maintenance of an American consumer economy based on aggregate demand. To the extent that financial services could generate such growth, finance and associated service industries such as insurance and mortgage banking began to drive the economy; finance took up the slack left by the decline of American manufacturing. With corporate and individual pension plans invested in equities, as Steve Fraser (2005) has argued, all Americans were involved in the stock market; “everyman” became a speculator. As long ago as the early sixties, David Bazelon (1963) argued that the American economy had lost its solidity and become a “paper economy,” an economy which produced little but which relied on wealth gained through “paper” values. From the mid-seventies to the beginning of the twenty-first century, the American economy was in crisis, but in a crisis masked by the glitter of finance.
Since the early eighties, one financial crisis has followed upon another: the Mexican debt crisis of the early 1980s, the savings and loan collapse of the late 1980s, the Japanese asset-bubble crisis of 1990, the Asian financial crisis of 1997-98, the dot-com bubble of 1999-2001, and the subprime financial crisis of 2007-8, along with the current crisis of the Euro and the economic instability of Greece, Portugal, Spain, Iceland, Ireland and Italy. Finance as a substitute for a manufacturing economy does not appear to work. Besides increasing instability and volatility, an economy dominated by financial services produces enormous inequality between the few who preside over speculative fever at the top and the rest of the population, who experience declining wages and living standards and, currently, massive unemployment and underemployment.
At the end of his book Davis says that there were choices other than markets shaped by finance. But he never says what these were. And he never explains why government colluded with business to deregulate industry after industry, particularly financial services, creating, via the repeal of Glass-Steagall, financial institutions that were “too big to fail.” Or, put more simply, he never asks why the United States put finance before factories by failing to maintain its manufacturing base, as other nations did, through government support for manufacturing – that is, through industrial policy or through sophisticated and effective workforce training, meaning education of a level and kind that provides people with technical but flexible skills that yield interesting and well-paid jobs. Not everyone requires higher education, but in a globally competitive world there are all kinds of highly technical “métiers” that are not professional careers but are still necessary and remunerative.
These last questions are the ones around which Judith Stein’s book, Pivotal Decade, is organized. Her principal argument is that the United States encountered a structural crisis in the seventies – the decline of American manufacturing due to competition from European and Asian manufacturing – and that politicians and their economic advisers handled this crisis badly because they wholly misunderstood it. They saw the drop in economic growth and the recession of the mid-seventies as well as the “stagflation” of the late seventies as “counter-cyclical” economic events that could be managed by traditional means – either by fiscal Keynesianism to stimulate aggregate demand or by monetary policy (loosening or restricting credit) and tax reductions. They accepted de-industrialization as inevitable, as part of the global transformation from industrial to post-industrial society, and, in order to fight inflation, government policies even promoted imports as well as the outsourcing of American manufacturing via tax rebates for American corporations that invested elsewhere than in the United States.
Stein’s book is an important one because of its relevance to our current situation. The structural crisis of American de-industrialization, which began in the seventies, and the switch in focus of the American economy from factories to finance, have come to full term in the new millennium with the financial crisis of 2007-8 and the current Great Recession. Problems that were not solved in the seventies – and that were evaded by concentrating on service industries and by stimulating growth and jobs through the paper profits created by a host of new financial instruments such as securitization – have come home to roost. Because Americans failed to maintain and improve their manufacturing industries in the seventies, and because they have consistently failed at the kind of workforce training that gives ordinary folk flexible access to mid-level manufacturing and service repair industries, the United States now runs enormous trade deficits. The American economy has been allowed to rely solely on consumption (of goods produced elsewhere) and, with aggregate demand at a low ebb and the workforce experiencing both unemployment and declining wages, the country faces many years of low investment, anemic job growth and the possibility of a double-dip recession or even another Great Depression.
The manufacturing industries that were not rebuilt or were allowed or encouraged to move abroad are not likely to return, which in part explains why the United States is rightly described as being in relative decline. Now more than ever there is need for an industrial policy or at least a new attempt to refine and heavily fund workforce training that will give ordinary Americans skills that they now lack. But because of deficiencies in the nature of the American state and a crisis in American politics, the likelihood of this happening is slim.
In the midst of the financial crisis, President Obama intervened in the automobile industry, and government loans helped GM and Chrysler get back on their feet and become competitive in the global market. But for the most part, Obama and his advisors seem to be repeating the mistake of his predecessors. Like Carter and Clinton, Obama views the current recession as merely another cyclical event, to be managed through Keynesian stimulation of aggregate demand, rather than the end point of a long term structural crisis that requires a full-fledged industrial policy or special focus on workforce training.
Why, as Stein asks, did American politicians and economists miss the structural crisis of the seventies, and why did they fail to enact an industrial policy or an effective and subtly designed workforce training policy? There were, Stein suggests, many reasons besides mere misunderstanding of what was happening. First, during the seventies America still considered itself responsible for the economies of its allies, which meant in practice that officials in the United States encouraged the development of our allies’ industries and did not take too seriously the fact that their role as exporting nations resulted in high trade deficits for the United States. Second, many things drew attention away from the economy: Watergate; the Cold War, especially the Soviet invasion of Afghanistan; the revolution in Iran; the aftermath of Vietnam; civil rights and identity politics generally. Third, politicians like Nixon, Ford and Carter were less concerned with de-industrialization than with inflation, and inexpensive imports were considered one way to beat inflation. Fourth, the theory of post-industrialism set forth by Daniel Bell was a powerful influence on American politicians, especially liberal ones.
As I can attest from service in the Carter administration, de-industrialization was considered a natural outcome of advanced economic development. Comparative advantage dictated that wealthy nations concentrate on service industries – industries with higher “added value” that employed people with professional skills and new-technology expertise requiring college or university educations – whilst cheaper labor in other countries produced goods that could easily be imported to the United States. Further, there was little tradition in the United States, except in wartime, of government actively intervening in the economic decisions of corporations. If the American steel and auto industries or the electronics or textile industries could not compete with production abroad, that was the outcome of normal competitive processes and not something with which the government should interfere. Moreover, one of the tenets of American international economic policy was free trade. Policymakers were reluctant to impose high tariffs on foreign goods even when “dumping” occurred. And the growth of American corporations, even when they outsourced production abroad, was believed to be good for ordinary Americans in the long run. To be sure, labor was hurt by the decline of manufacturing, but only one of the Presidents of the seventies – oddly, Nixon – was close to labor unions and, ironically, it was Carter who tended to have the least regard for labor and who believed that the wage increases sought by unions were the reason for inflation.
Last but not least, production abroad resulted in goods that Americans could easily afford. Retailing was transformed during the seventies with the growth of huge corporations like Walmart and big-box stores such as Home Depot. These stores stimulated aggregate demand and enlarged the consumer economy, which was what most politicians were concerned with. Many economists were hostile to the idea of economic planning or industrial policy and believed instead that tax reductions, particularly on capital gains, would revive a sluggish economy through increased business investment. Liberal economists were plagued by the inadequacy of Keynesian demand stimulus with respect to the recession of the seventies, but had little else to offer. Conservative economists affirmed supply-side remedies. Very few economists or politicians supported industrial policy or focused on new-technology workforce training for those affected by the loss of manufacturing, even though some Carter administration officials (Stuart Eizenstat, for example) did encourage a modicum of debate on the subject.
The problem with Stein’s book is that she sometimes loses sight of the forest by concentrating on the trees – no small matter, since her argument is dense and often hard to follow, a tendency that will doubtless diminish the attention her excellent book receives. More important, she makes a major mistake at the beginning of the book, in discussing the immediate postwar era before the seventies – the age of American affluence or, as she calls it, “the great compression.” Here compression refers to the equalization of income and wealth among Americans: in the immediate postwar period the middle class grew, the incomes and property of ordinary people came closer to those of the wealthy than ever before, and, at the same time, poverty diminished. Stein says that she was prompted to write her book by a glaring fact: after 1975 American wages and incomes began to decline and never again reached the heights of the immediate postwar era. Stein attributes the good fortunes of this period to two factors: first, that government shaped and stabilized markets, and, second, that labor unions were strong and helped to increase workers’ wages and benefits.
In both cases Stein is on shaky ground. It is certainly true that government’s role in the postwar era was less problematic than later in the twentieth century. There were many programs developed during the New Deal or in the twenty years after the war that helped individuals and families and there was no great push for “deregulation.” Social security was firmly established; the GI Bill allowed returning World War II veterans to get a free college education; labor and capital seemed more or less compatible rather than conflictual; and business, in its corporate managerial mode, accepted at least some limited role for government as a Keynesian stabilizer of the economy. Medicare and Medicaid, added in the sixties, gave seniors much needed healthcare and provided additional security in old age, so that the elderly left the labor market, opening up employment for younger people.
But Stein overestimates the role of government even when Keynesian fiscal stimulation provided apparent stabilization for the economy. She doesn’t say much about what made for the social peace. And while she scolds those in power in the seventies, she doesn’t really look for deeper reasons why Keynesian policies seemed to falter in the seventies. To be sure, she knows about the rise in gasoline prices and about “stagflation,” but these were symptoms of the structural crisis rather than its cause.
Stein is equally wrong in arguing that a powerful labor movement helped workers extract benefits and higher wages from employers. Though labor unions were strong in the thirties and forties, and not as weak in the fifties and sixties as they became in the seventies and eighties, it was precisely in the immediate postwar period that labor changed, and not for the better.
After the Second World War there were two connected developments that severely weakened the American labor movement. The first was Taft-Hartley, which stalled or even aborted labor’s ability to increase membership and unionize industry in the South and elsewhere; the second was that, once capital had won a victory like Taft-Hartley, however partial that victory, it was willing, in its corporate managerial guise, to provide considerable largesse for its employees, so long as the issue of worker control was no longer on the table. Having lost one battle and then been lulled into comfort by what became the postwar “era of good feeling,” labor gave up militant unionism for pensions, high wages, health insurance, and other benefits. Labor leaders turned conservative, protecting what they had achieved rather than envisaging a role for labor militancy as the “social conscience” of American society and politics. The labor union movement, as is well known, also became racist, protecting white workers from African-Americans and other possible new entrants into the unionized labor force. And labor leaders like George Meany and Lane Kirkland became devout Cold Warriors, positioning labor and the American worker among those who were committed to the continuance of American world hegemony and to militaristic aggression in support of hegemony. Hence “hard hats” vs. “peaceniks” in the Vietnam era, as well as something deeper yet: the “silent majority” of white workers supporting the policies of conservative and neo-liberal politicians whose aim was to use political will to weaken and erode the power of the state.
Stein also fails to emphasize or perhaps to see that the immediate postwar era was unique or, more precisely, an anomalous moment in American history. Our affluence, our global influence, derived from one salient circumstance: we had no major economic competitors and it was American manufacturing that supplied the goods for the redevelopment of a devastated Europe and Asia. American affluence in the postwar era, as well as the strength of labor unions and the generally good relations between capital and labor, were all made possible because, with global power in a world of war-torn allies and enemies, the United States reigned supreme. While it is true that the Marshall Plan beneficently rebuilt Western Europe and Japan, American interests were also involved. Europe and Japan were America’s best customers; their growth and American prosperity coincided.
Felix Rohatyn, the savvy banker from Lazard Frères and the chairman of MAC (the Municipal Assistance Corporation, the organization which saved New York City from bankruptcy), seems to have understood better than many policymakers what had happened by the mid-1970s. We had rebuilt allies and former foes, and now they were our competitors; Henry Luce’s vaunted “American century” had lasted all of thirty years, from 1945 to 1975 (though Luce actually proclaimed his doctrine of the American Century in 1941, in Life magazine).
One can cite, as Stein does, statistics that indicate the relative decline of the American economy between these two dates. In the late seventies the American economy was still the largest in the world, three times larger than its closest rival, Japan. But America’s share of global GDP had diminished from 34.3 percent in 1950 to 24.6 percent in 1976. The U.S. produced only 15 percent of the world’s oil, rather than the 50 percent it produced immediately after the war. America’s global share of steel production went from 50 percent down to 20 percent. In 1945 the U.S. shipped 32 percent of global exports; in 1976, 11 percent. Between 1946 and 1968 the United States ran a trade surplus in thirteen years. But from 1947 on, imports increased by 11.4 percent annually, while exports increased by only 7.3 percent annually.
Statistics, however, don’t tell the whole story, nor are they its most important aspect. In spite of these numbers, the U.S. remained in the late seventies the world’s most powerful, richest, and militarily strongest nation. What amazes me is the innocence of American policymakers and their failure to understand that while it was necessary to rebuild wartime allies and former foes – both for humanitarian reasons and because they were the best market for American products – there would come a time when, fully redeveloped, they would become competitors. American international and domestic economic policy was formulated in an era of American triumph that could not persist. But while events change, continuity of policy often remains. The misunderstanding that Stein talks about is better put as follows: while the world had changed by 1975, Americans appeared to take little notice. I remember laughing at a colleague who had just bought an ugly little car – a Datsun. By the eighties and nineties I laughed no more, as the Japanese auto industry – Honda, Toyota and Nissan – outpaced and outsold General Motors, Ford, and Chrysler.
Postwar economic policy and postwar politics framed insufficient responses to what was in fact the structural global economic crisis of the seventies. The economic prosperity of the immediate postwar period planted the seeds of what would later be failed responses to new events. Two principles of American policy define its continuity: first, that economic growth is the rising tide that lifts all boats and, second, that growth is not a function of government agency or intervention but rather that jobs are created by the private sector. European societies, by contrast, were able to pursue industrial policies when faced with structural crises. They accepted some level of either centralized or decentralized government to steer and, in part, design the economy, and many European companies were in fact state companies. In France the government proposed five-year plans for growth; in Germany the soziale Marktwirtschaft (social market economy) encouraged government, labor and capital to work together on long-term planning for the national economy. To be sure, the United States had a welfare state, though one far weaker than the West European social democracies of Scandinavia, Germany, France and the Benelux countries. But after the Second World War, economic planning, of which industrial policy would have been a part, was ruled out, both because this kind of planning went against the American grain – or so government officials argued – and also because, given the vast prosperity of postwar America and the strength of its private sector, it hardly seemed necessary.
Continuity of policy from administration to administration was such that when the structural crisis and the decline of American manufacturing required more than Keynesian demand management, no other response was available. Instead, administrations went in an altogether different and fateful direction. They reduced taxes, especially on capital gains, and they deregulated entire industries, all in the hope that “freeing” business from government burdens would encourage business investment. And business did indeed invest, but abroad rather than in the United States. Production and capital were exported in order to save on labor costs and also to capture emerging markets. Both unions and the American manufacturing workforce declined, as did the wages of most Americans. The illusion of growth was created by inflated equity values that produced cycles of boom and bust, while service industries, especially finance, provided jobs for some but not all Americans. The one clearly innovative and productive aspect of the new global economy was high technology; it alone allowed the American economy to grow in the eighties and the nineties.
But ordinary folk were out of luck; high tech required skills that they did not have and were not likely to get. And the skills gap could never be overcome merely by providing more people with access to higher education. There were not enough jobs in high tech for everyone who needed a job, and those without the skills were either unemployed or underemployed, relegated to low-paying jobs in industries such as retail. Workforce training to provide flexible vocations with mid-level skills, successfully achieved in Germany, might have helped the situation, but Carter’s CETA policy, focusing on low-level and menial government jobs at low wages, was minimal at best and at worst a failed policy. For ordinary folk, retail became the key source of employment. Walmart became the largest employer and the largest corporation in the United States, but what was good for Walmart – a firm that imported most of its goods from China – was not necessarily good for America.
Stein’s book is important because she identifies a structural crisis that dominated the late twentieth century and that even now conditions our economy and our politics. Current problems have their source in the failure to confront and manage the structural crisis of the seventies. Obama’s difficulties are in part caused by a regressive Republican Party that exploits the understandable fears and anxieties of the American electorate. But his problems also derive from the fact that, given the loss of manufacturing in the seventies, his agenda must now focus on rebuilding the nation by creating innovative manufacturing and by repairing a rotting infrastructure. Unfortunately, Obama is caught in a political and policy web not of his own making, one that is extremely difficult for any one president to transcend. And the fear and anxiety that the Republicans exploit – their idea that government is useless and should to a large extent (save for defense expenditures) be abolished – is accepted by a substantial part of the American electorate because that electorate has come to believe, perhaps correctly, that a government whose command of the private sector has historically been weak is incapable of managing the economy.
The structural crisis of the seventies and the failed political response that Stein discusses were reflected in the cultural transformations of the late twentieth century. The United States emerged from the Second World War a highly unified nation with a strong sense of national purpose. By the time the World Trade Center was destroyed on September 11, 2001, it was an entirely different country, a nation divided over many issues – economic, political, racial, religious, social, and sexual. Americans were pitted against other Americans in a whole series of culture wars. And the loss of faith in government’s ability to manage our society and our economy affected both individual and collective consciousness.
Rodgers’ story is about a cultural transformation, about the process by which an America of yore, in which adherence to institutions and social context was paramount, became a society in which deracination and disconnection were rife. He shows how many thinkers and intellectuals grew skeptical about the legitimacy, efficacy, or validity of institutions and began to observe that people trusted only in themselves and their own agency to make the choices allowing for a decent life. Age of Fracture does not speak of this cultural transformation as I would view it: as a change that was in many ways a despairing and possibly politically dangerous response to widespread institutional breakdown in the last thirty years of the twentieth century. Rodgers’ book is, ironically, not about the socio-political context of ideas, but about how thinkers and writers, academics and intellectuals somewhat independently understood the society in which they lived, and how, from the seventies on, their discourse turned away, in most disciplines, from emphasis on social and political institutions to emphasis on individual actors and behavior.
Rodgers talks about postwar economists whose focus shifted to microeconomics. He makes the point that microeconomics concerns itself with individual economic decisions, in contrast to macroeconomics, which accords first importance to social and political institutions that are paramount to economic life. Shifting to politics, he argues that political scientists were much influenced by Robert Dahl’s (1968) or V.O. Key’s (1942) idea of pluralism and “veto groups.” Society was seen as governed by various interest groups that vied for power within the social and political arena, each trying to maximize the benefits it received from the state and the access it had to the state or, more precisely, Congress, the federal bureaucracy and the White House. Or, to provide another example, C. Wright Mills (1956) saw the United States controlled by an integrated power elite composed of politicians, corporate businessmen, and top military personnel.
By the end of the century, all meaning, says Rodgers, “had been drained out of the concept of power,” so that thinkers and writers no longer talked about actual groups who held power or about classes, interest groups, elites, class consciousness or class struggle, but rather about how even the formation of class was the result of individual action and historical contingency. In E.P. Thompson’s (1968) The Making of the English Working Class (and in the work of Eric Hobsbawm as well), protest against the factory system and the hardships of industrial capitalism was seen as arising out of myriad cultural sources integral to English history. For Thompson, individual actors from many indigenous spheres of English society slowly arrived at a consciousness of class that they had themselves shaped: “the English working class made itself.”
Other historians and political scientists focused on the strategy of politics, on internal political processes, and how political behavior was connected to individual strategies and game playing – e.g. Mancur Olson’s (1965) free rider hypothesis, in which savvy political actors abstain from action because they know that someone else will do the work for them. Rodgers charts the conservative swerve from conventional notions of class as well. According to conservatives, the people who held power in a democratic society and who wanted to destroy existing institutions were no recognizable class such as the bourgeoisie, the aristocracy or the demos, but rather a “new class” of intellectuals and professionals, not the owners of property, but the technicians who were essential to the operation of both business and government. In this view, knowledge, not position, endowed people with power.
Political scientists and historians, especially those of Left but sometimes also of Right persuasion, were likewise influenced by the idea, taken from Gramsci, that the real source of power in society had to do with the creation of a dominant ideology. Power was less a function of ownership of the means of production, of authority and rank, of class and status, or even of force – what Weber called “the decisive means of violence” which was the purview of the modern state – and much more a function of an ideology that shaped the thinking of everyone in a particular society. Gramsci’s theory complements Marxism (which lacks a politics) in showing how important political thought or theory is, and how political ideology is not only created by powerful groups but also how it is diffused and internalized by individuals, so that even members of the working class can believe in property ownership and aspire to middle-class consumerism (the American Dream of a house and cars in the suburbs) or, conversely, how wealthy and upper middle-class owners, managers and professionals can accept and promote radical protest, progressive reform, “regulation” of the market by the state and many substantive policies and programs that redistribute societal wealth.
Rodgers has Democrats and liberals using Gramsci, whereas on the other side of the political spectrum, conservatives affirmed the ideology of a new kind of hero and historical actor who made the world anew by himself or herself and who was the opposite of a social activist, corporate manager or state bureaucrat – George Gilder’s vaunted entrepreneur who needed the freedom of tax cuts and government deregulation (in other words, supply side economic policies) to make America into a real new frontier. Last but not least, turning to the academic Left, Rodgers points to Foucault’s notion that power was part of discourse, that it was embedded in language, an idea that made power and the discourse of power so diffuse as to be indefinable.
Rodgers takes the same thesis – that the ideological climate of the late twentieth century was one in which a strong sense of social and historical context was replaced by a discourse of disaggregation and individualism – and applies it to subjects such as race, gender, and the relation between state and civil society (communitarianism). Given the collapse of the Soviet Union and the end of the Cold War, ushering in an era in which liberalism was unchallenged by socialism, Rodgers also discusses Fukuyama’s notion that liberalism represents the “end of history.”
Rodgers shows how the historical memory of race that is strong in books like Alex Haley’s Roots (1976) degenerates into divisions about racial identity and arguments, such as Shelby Steele’s, that racial identity destroys individual identity. Women in the sixties have a strong sense of solidarity, but by the end of the century some women like Judith Butler deny the existence of feminine identity altogether, whereas it is conservatives who insist upon a reified and of course domesticated version of feminine identity which is anti-feminist.
With the discourse of power dissolved, and the protest about race and feminism – a protest about human and civil rights – either defeated or destroyed from within, Rodgers examines the failed attempt of communitarianism to provide some notion of American social life beyond individualism. Communitarianism was a movement beset by a weak sense of history and also by squabbles between left communitarians such as the authors of Habits of the Heart (Bellah, Sullivan, et al., 1985) who emphasize the need for new forms of communal life, an improved society and an expanded liberal or social democratic state, and right communitarians who want to stress the communal import of traditional institutions such as religion, family, neighborhoods, marriage, etc., while rejecting any expansion of the state as well as new feminist or gay “identities.”
Rodgers provides a brilliant analysis of communitarianism by discussing three philosophers – John Rawls, Robert Nozick, and Michael Walzer. Of these three thinkers, only one – Walzer – qualifies even slightly as a communitarian, whilst the other two – Rawls and Nozick – pretty much define, at a high level of philosophical discourse, the broken political climate of the late twentieth century in both theory and practice.
Rodgers shows that Rawls’ celebrated A Theory of Justice (1971) is rooted in individualist and utilitarian premises about people in a market society who try to maximize their interest. They make the rational economic calculus that a more egalitarian society in which the poor have rights and are provided with social assistance is likely to be a more efficient society than one in which market values dominate. If we connect Rawls’ book to actual politics, this is more or less the position that Democrats in America have adopted.
Rawls was refuted by Nozick’s libertarian account of justice in Anarchy, State, and Utopia (1974). Nozick argued that insofar as the decisions that human beings made in the market were free, they were inherently just. In a minimal state, market and community would flourish and individuals would voluntarily choose rules that allowed them the freedom “to do their own thing.” The role of the state was to enable this level of freedom rather than constrain it. “How dare any state or group of individuals do more,” Nozick wrote, “or less.” Again, tying Nozick’s ideas to actual politics, his extremely thin notion of social obligation, his distrust of equality (as harmful to freedom) and what we would now describe as his libertarianism are more or less the ideology of contemporary Republicans.
In Spheres of Justice (1983), Walzer did not argue for equality per se, or at least not for any singular notion of equality, but rather for what he described as “borders.” Social life was organized into different spheres, each of which deserved to be accorded its own particular justice. The borders between the spheres prevented either aristocratic power (in pre-modern societies) or money (in capitalist societies) from overrunning what Walzer called “the complex equality” of modern society. Walzer recognized that most communities made provision for the general welfare of their citizens, but he was less concerned with aspects of welfare and social democracy than with an affirmation of pluralism that was local and communal and that brought the idea of the general welfare into active connection with the true and complex sources of social life. In some ways Walzer was restating Burke’s idea that adherence to state and society entailed more than obedience to rules or procedures: “in order to love one’s country,” Burke wrote, “one’s country ought to be lovely.” Love, loyalty, and patriotism were never general but particular; love of country originated via the experience of place and through participation in a plurality of spheres that comprised the larger society. Walzer’s argument surrendered any notion, so vividly expressed in Hegel’s Philosophy of Right, that civil society was subordinate to the power of the state. In Walzer’s theory the reverse was true: the institutions of civil society were now superior to the state and in this guise politics and power as such dissolved into a complex of communal associations.
Rodgers argues that although Walzer was a social democrat, his pluralism was easily appropriated by conservatives, who exploited the emphasis on communal and local participatory democracy to focus on traditional institutions like school, neighborhood, church and family, or those institutions that the first president Bush described as “a thousand points of light.”
Fukuyama’s (1992) notion of the end of history comes into Rodgers’ mix because in the absence of political discourse and with the collapse of the Soviet Union, liberalism becomes the dominant political philosophy in the world. Liberalism thus represents the ideological “end of history.” But both reality and Fukuyama’s many critics demolished a thesis which is a not-too-obscure attempt to affirm Western ideological centrality in face of a globalizing world in which Europe and America are receding before the power and cultures of Asia, especially China, as well as emerging nations such as India and Brazil.
Though Age of Fracture provides a tightly-woven, clear, and solid narrative, it suffers from several defects. The book is more descriptive than analytic. One would not expect a scholar to be a “hanging judge,” so it’s appropriate that Rodgers does not directly argue against the ideas of the authors whose work he is expounding. But most of the time he presents one or another writer’s view without critique, meaning the kind of analysis that gets at the heart of an argument, revealing the assumptions on which its substance and conclusions are based. I liked the section on Rawls, Nozick, and Walzer because in this part Rodgers does analyze the three thinkers, providing the reader not only with an accurate interpretation of their respective arguments, but also with a sense of how these arguments affect our current political situation.
Rodgers’ story misses what I would describe as the dialectic of modernity, which perpetually shifts from conditioned historical and social circumstances to a heightened and sometimes dangerous as well as adventurous focus on the possibilities of the autonomous, deracinated and “authentic” self – to wit, the central theme of Lionel Trilling’s opus from the celebrated essays of The Liberal Imagination (1947) to the poignant swan song of Sincerity and Authenticity (1972). And though he contrasts the issues and debates of the immediate postwar era with those of the late twentieth century, Rodgers does not grasp the fact that the earlier context relates to an era of good feeling based on successful achievement (victory in World War II) and vast and relatively shared prosperity, whilst the latter period – say from the War in Vietnam to the destruction of the World Trade Center in 2001 – bespeaks an America of enormous political, social and cultural divisions as well as failed policy and soured public opinion.
Another problem is that Rodgers appears to buy into postmodern pessimism. His entire understanding of the cultural transformation of the late twentieth century is a story about how we went from good to bad; his book is a “downer.” Postmodernism has several meanings, but it is often a philosophy of cultural despair: “the times are bad but there is nothing that we can do about it.” There is a Spenglerian tone to much of Rodgers’ argument. He sees a degenerated culture everywhere and ends his book on a tragic note, with a discussion of 9/11 and its aftermath. He records the fact that for a while the Bush administration spoke out for a common culture and for solidarity between Americans united against a common enemy. But then he notes that very soon Bush and his advisors returned to “the market-imbued” and individualist vision (264). This is about as close as Rodgers ever gets to suggesting that the cultural transformation he describes is rooted in politics and not a function of the “decline of civilization.”
Rodgers describes the immediate postwar era as one rich in awareness of history and society, but he fails to notice that, when compared to European culture, American culture was notably lacking in a deep sense of the weight of social and political institutions. When thinkers like Daniel Bell, Erich Fromm, Robert Dahl, and Talcott Parsons, all overly cheery and shallow American optimists, dominate the intellectual scene, as they did in the fifties and sixties, they prepare the way for, rather than contrast sharply with, the illusory affirmation of autonomy and individualism that Rodgers finds in the late twentieth century.
There is a straight line that connects Daniel Bell’s “new class” of post-industrial technicians (1973) and professionals with David Brooks’ (2000) consumerist “bobos” who live off the fat of the land but who have no conception of public service or what American revolutionaries called “public happiness.” Fromm’s psychoanalytic views (1941) discard the tragic sense of life and human history found in Freud’s Civilization and Its Discontents; they turn psychoanalysis into a shallow theory that allows gullible Americans, believers in the “American Dream,” to imagine that life corresponds to a novel by Horatio Alger. Dahl’s pluralism, which looks with favor on the representation of interests, might in the fifties and sixties refer to an America where many interests competed and countervailed against each other, but this interest-based conception of society easily degenerates into an America where competition becomes unfair, weighted in favor of those with money and power. And Parsons’ society of shared values and beliefs is insufficient as the glue holding American society together. It bespeaks a society in which middle class white people are the mainstream. Structural-functional consensus is not liable to hold when many people have lost good jobs and when the old white majority is displaced by a majority-minority society consisting of people of color from all corners of the globe.
The most telling deficiency of Rodgers’ book is that he fails to connect his discussion of ideas with their actual social and historical context which, considering the nature of his argument, is ironic. Put another way, he does not explain why ideas changed so radically between midcentury and century’s end. Social consciousness was strong in the fifties and sixties because social institutions were strong. Corporate managerialism and the old industrial society provided well-paid employment for most people, as well as accompanying benefits such as pensions, which allowed the working classes to become, in lifestyle at least, middle class. People were rooted in work in particular places and tended to remain fixed in their own communities, such as the ethnic neighborhoods of industrial cities like Chicago and Baltimore. Suburbanization dispersed these heretofore fixed populations and the loss of employment due to de-industrialization had an enormous impact. The old industrial society and the New Deal provided people with a sense of security. Character was woven around lifetime careers. Benefits from the New Deal such as Social Security and the G.I. Bill attested to the advantages of a strong government.
Deindustrialization and globalization had the opposite effect: shorn of work and with their communities devastated by the loss of factories, people were left to their own devices. Republicans exploited this situation, casting government as the culprit, and many people, enough to elect Ronald Reagan, agreed with them. Government, which seemed unable to deal with the economy or to aid men, women, or communities in distress, became an imposition, part of the problem rather than part of the solution. The transformation of consciousness that Rodgers discerns – disaggregation and individualism – was in fact conditioned by actual social and political breakdown.
A defect shared by all three authors is their common tendency to view the immediate postwar era as a golden age from which, in the late twentieth century, Americans divagated. Golden ages tend to be mythical, illusory visions of the past that allow us to convince ourselves that things were much better “once upon a time.” More important, in thinking of the past as the Garden of Eden, we preclude the future. To see where we are now and where we might be going we need a usable past, by which I mean one that we regard critically and truthfully. Our current social and political situation as well as our postmodern culture may have begun to emerge during the seventies. But Davis, Stein, and Rodgers all suggest that the period before the seventies was the best of times. This makes no sense. “Stagflation,” OPEC, global competition, and postmodern “deconstruction” were symptoms of transformation. But what was their source?
One side of the late forties and fifties might appear from hindsight like a golden age, but there was another side that the Beatnik poets and novelists – Ginsberg and Kerouac – saw and that thinkers like David Riesman, Herbert Marcuse, Paul Goodman, Irving Howe and Richard Hofstadter were sufficiently prescient to recognize. Ginsberg’s epic poem, Howl (1956), is a very personal work, but it also has deep implications for politics and society. Ginsberg is describing his suffering and alienation not only as a gay man in a monolithically straight society. He is also revealing a depth of feeling and a sense of identity wrought from pain that would erupt in the sixties and beyond in the civil rights movement that “liberated” not only blacks and other minorities but also women and homosexuals.
In Growing Up Absurd (1961) Goodman depicts a society which has very shallow shared values, so that young people lack the kind of patriotism, to be carefully distinguished from nationalism, that would allow them, out of love of country, to care for and act in concert with their fellow citizens for the common good. He also shows that Americans in the nineteen fifties left many things undone: it was a land of “incomplete revolutions.” What Goodman failed to see, but what occurred in the late twentieth century when Americans tried to achieve civil rights for all or create a more equal “Great Society,” was the political divisiveness and culture wars that plague the nation now. Hofstadter (1954) was way ahead of his time in showing that the paranoid style in American politics was a feature of what he called “the pseudo-conservative revolt” led by regressive and paranoid extremists who, rather than defending the status quo, the position of true conservatives, sought to destroy it. American conservatives were, in other words, nihilists. Hofstadter also saw large deficiencies in the nature of American government. “When one considers American history as a whole,” he wrote, “it is hard to think of any long period in which it can be said that the country was well governed.” In his view we had a government that could cope with problems, “but not master them.”
There is a story behind the story that cannot be found in the books discussed above. And that story is about the “graceful degradation” of the American liberal state or what I call the “postwar American model” in the latter years of the twentieth century. Graceful degradation is a metaphor taken from computer science. It refers to a process by means of which “systems” respond to both external and internal challenges with built-in fail-safe mechanisms. However, each time a fail-safe response occurs, the system becomes less effective. Eventually, the system degrades to the point where it is broken and a new system must be created to replace it.
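The mechanics of the metaphor can be sketched in a few lines of code. This is a purely illustrative toy model, not drawn from the books under review: the class name, the numeric parameters, and the list of “challenges” are all hypothetical, chosen only to mirror the process just described – each fail-safe response keeps the system running but permanently lowers its effectiveness, until the system falls below a threshold and must be replaced rather than patched again.

```python
# Toy model of "graceful degradation" (illustrative only; all names and
# numbers are hypothetical, not a real computer-science implementation).

class DegradingSystem:
    """A system that absorbs shocks via fail-safe responses.

    Each fail-safe keeps the system alive but permanently reduces its
    effectiveness; once effectiveness falls below a threshold, the
    system is broken and must be replaced, not patched again.
    """

    def __init__(self, effectiveness=1.0, cost_per_failsafe=0.15,
                 broken_below=0.3):
        self.effectiveness = effectiveness       # 1.0 = fully functional
        self.cost_per_failsafe = cost_per_failsafe
        self.broken_below = broken_below

    def absorb_challenge(self, name):
        """Respond to a challenge; return False once the system is broken."""
        if self.effectiveness < self.broken_below:
            return False                         # broken: replacement needed
        self.effectiveness -= self.cost_per_failsafe  # fail-safe degrades system
        return True

# Hypothetical sequence of challenges, loosely echoing the essay's narrative.
system = DegradingSystem()
challenges = ["civil rights", "Cold War spending", "deindustrialization",
              "global competition", "stagflation", "financialization"]
for challenge in challenges:
    if system.absorb_challenge(challenge):
        print(f"{challenge}: absorbed, effectiveness now "
              f"{system.effectiveness:.2f}")
    else:
        print(f"{challenge}: system broken, a new system is required")
```

With these (arbitrary) parameters the system survives the first five shocks at ever-lower effectiveness and fails on the sixth, which is the shape of the historical argument that follows.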
Though the metaphor is not exact, because history and society do not work like sophisticated machines, it is nonetheless suggestive; it provokes thought. The postwar American model worked well for the first twenty or twenty-five years after the war, but as it attempted to meet numerous challenges – civil rights, costly wars brought on by the struggle against the Soviet Union, and especially the structural crisis of deindustrialization and the challenge of global inter-capitalist competition – it “gracefully degraded” with dire results: reliance on finance as a substitute for manufacturing; reliance on militarism as a way of enforcing an American hegemony no longer rooted in economic growth; the breakdown of civil political discourse and debate; extreme levels of inequality between rich and poor; and “culture wars,” a strategy that fostered evasion of the moral consequences of inequality and tore the country apart.
All of the authors discussed above, notwithstanding the quality of their scholarship and the important substance of their arguments, are what I would call inadequately historical. Though Stein is the worst offender, all of them focus too narrowly on their own stories without sufficiently grasping their historical context. They are correct in seeing that things did turn sour in the seventies, but they fail to provide a satisfactory understanding of why things went bad or what larger problem relating to American politics and to the shape of the American state was responsible for our fall from the postwar Garden of Eden.
Compared to European societies with their monarchical and absolutist traditions of strong government, the United States has a weak state. One reason for this is an historic tradition of anti-statism, which has many sources: colonial fear of strong government in reaction to British rule; the use of the states-rights argument to uphold Southern slavery; deep commitment to local institutions and to regional differences; popular antipathy to taxation; the belief that government impedes the efficient operation of the market. While some of the above are particular to the history of the United States, dislike of taxation, regionalism and market fundamentalism are not absent from European politics. Moreover, the size and shape of American government are not always constrained by anti-statist sentiment. American state development follows an approximate pattern in which the state grows in relation both to challenges and responsibilities.
With the advent of big business in the late nineteenth- and early twentieth-century, American politicians and presidents, notably Theodore Roosevelt and Woodrow Wilson, did pay attention to what I would call statecraft. Accordingly, they devised a new American state which allowed government simultaneously both to regulate and promote big business or industry. The state or government remained subservient to civil society and the economy, but big business was restrained both by the rule of law and by a government that regulated “interests” even while providing them with bureaucratic expertise and enforcing certain restraints. Some people call this work of statecraft corporate liberalism; others call it pluralism (though not in the European sense of what is often known as “corporatism”).
During the nineteen-twenties, as Secretary of Commerce, Herbert Hoover attempted to develop a state that might be described as corporatist. His aim was to wed big business interests, organized into associations, to government. This might have worked had the Great Depression not occurred; Hoover’s idea was in fact later embodied in Roosevelt’s National Recovery Administration (NRA). Hoover also furthered state building by creating the Reconstruction Finance Corporation (RFC) to provide gargantuan amounts of relief and aid to businesses at the beginning of the Depression.
Though the New Deal is often associated with state building, in fact FDR is perhaps best seen as having used with great efficacy the state bequeathed to him by the Progressives. New Deal programs promoted economic development, regulated business, and helped to increase living standards through programs like Social Security. But even during the Great Depression of the 1930s, government policymakers as well as many ordinary Americans eschewed strong government and were especially wary of direct state intervention in the economy. The early New Deal was “a chaos of experimentation” mainly focused on “relief” and also on the creation of infrastructure that would extend technologies like electricity to rural areas in the South and Mountain West (TVA and similar projects under the Works Progress Administration (WPA) and Public Works Administration (PWA)) while providing at least temporary work for the unemployed. Other attempts in the early New Deal to organize business, like the National Recovery Administration (NRA), failed because of the complex mix of big and small business in the United States. Small businesses, through the National Association of Manufacturers (NAM), brought suit against the government and the NRA, and the Supreme Court declared the National Industrial Recovery Act (NIRA), the legislation that established the NRA, unconstitutional. The chaos of experimentation was not, however, without its victories for modern liberalism, namely the Social Security Act and the creation of the National Labor Relations Board (NLRB), both won in 1935, before Roosevelt’s stunning second electoral triumph in 1936.
A year later, in 1937, America experienced the second phase of the Great Depression mainly because FDR and his people (particularly the Secretary of the Treasury, Henry Morgenthau) attempted to make up for expenditures on relief relating to the 1929-35 Depression by balancing the budget; first stimulus, then austerity, but too quickly. The Second Depression of 1937 was worse than the first and less easily dealt with. There was much debate within the Roosevelt administration about how to manage it and what emerged was a two-pronged strategy. One element of this strategy was a regulatory regime that focused on regulation of finance and securities – the Securities and Exchange Commission (SEC) and the Glass-Steagall Act that prevented commercial banks from engaging in either the trading or initiation of securities by setting up a wall between everyday banking, whether checking or savings, and investment banking.
At the same time, in the late thirties and early forties Franklin Roosevelt and his advisors considered but ultimately rejected economic planning and a strong and substantive social welfare state, opting instead for indirect means – fiscal and monetary policy – to deal with counter-cyclical crises such as recessions or depressions. At first they saw a modified Keynesianism or monetarism as a way to palliate what they thought would be industrial stagnation, but the war and immediate postwar economy revitalized their belief in a growth economy. The result was a largely unplanned economy and an extremely weak welfare state (aimed mostly at getting the elderly out of the workforce and with some benefits for the very poor) that depended on growth and on private sector employment, all of which came to be seen as working more or less automatically via seamless corporate managerialism and “the rising tide that lifts all boats.” To make this point more emphatically, the growth economy obviated the need for statecraft.
We now come to the point in the story where our understanding of the American political model becomes rather complex. By the end of the Second World War in 1945 the American state was very large but not necessarily, except in one area, very powerful, and that area was the Defense department and the entire national security apparatus that was developed during the War and then maintained and expanded after the War because of the presumed threat to the United States and to the world from the Soviet Union. This is no place to begin to discuss the Cold War and the Department of Defense and associated intelligence agencies like the CIA, Defense Intelligence, Army and Navy Intelligence, all currently united under the National Security Agency. Suffice it to say that besides all the books on the question of who started the Cold War, the Soviet Union or the United States, two great books tell the whole story of the defense apparatus in terms of American hegemony and in terms of domestic American politics – with respect to the former, Melvyn Leffler, A Preponderance of Power: National Security, the Truman Administration and the Cold War (1992) and with respect to the latter, Julian Zelizer, Arsenal of Democracy: The Politics of National Security From World War II to the War on Terrorism (2010).
Leffler’s brilliant book demonstrates a number of things: that Truman was ill-equipped to deal with postwar foreign policy; that he was paranoiac about the Soviet Union in spite of the fact that America was infinitely more powerful – militarily, economically, politically – than the USSR; that intelligent statesmen such as Acheson and Marshall were not threatened by the Soviet Union (they did not believe that the Cold War would turn “hot”), but that the national security apparatus and our vast armed forces all over the world were principally directed to ensuring that the United States could shape the destiny of what Leffler calls “the Eurasian core” – namely Western Europe, Japan, Korea, the Near East, India and what would become, in time, the Asian “dragons.” For Leffler, then, it is not Kennan’s policy of containment, but rather American political-economic hegemony – by which I do not mean imperialism but rather America’s strategic leadership role in the world – that was the driving force behind the power and size of the national security apparatus, or what I call the “warfare state.”
And here is where one confronts the paradox at issue here. One thinks of the national security apparatus, the Department of Defense (DOD) and the military-industrial complex that President Dwight Eisenhower warned about – all as part and parcel of the enormous state force that guards America from harm. But consider Leffler’s thesis and add to it that of Julian Zelizer: the politics of defense or national security was, contrary to the conventional view, endlessly entangled with partisan politics, in which partisanship could wrap itself in the mantle of the Commander-in-Chief, the flag, and the Hobbesian notion that outside the legal system of the state there was nothing but “a war of all against all” from which the American people had to be protected. In short, one can see how equal parts of paranoia and political advantage militated in favor of an ever-expanding warfare state.
But what if this warfare state was also, dialectically, not only about war and defense but also about its seeming opposite – about economic development and about the growth model of the American polity? What if the defense apparatus marched hand in hand with the process by means of which Americans came to believe in the American Dream, in lieu of a substantive social welfare state (what Theda Skocpol (1985) calls, referring to European or Canadian social democracy, “social Keynesianism”) and with a government expressly designed to eschew industrial policy and planning and an electorate and Congress fearful of state intervention in the economy? What I’m suggesting is that the national security apparatus actually converged with the growth model in such a way as to sanction both overt and covert funding for infrastructure improvement, for scientific and technological innovation, for the development of vast corporations devoted to arms and telecommunications and for large expenditures on weaponry that might be wasted in war (the Seymour Melman thesis). In the guise of arms manufacturing for the globe, the Defense department provided millions of jobs for workers, enormous profits for shareholders, and all kinds of regional economic development, especially in the underdeveloped South of America and in the Far West and Mountain West as well as New England (since the Second World War, 18% of the Massachusetts economy has been and remains devoted to defense-related industry).
The key case of convergence between the growth model of the American polity, the weak welfare state (little affordable housing, an aborted poverty program, benign neglect of cities and city planning, suburban and exurban sprawl, no universal health insurance, inattention to environmental quality, overgrowth and overreach) and the national security apparatus is in fact exemplified by one stupendous piece of legislation in large part responsible for the entire phenomenon of postwar suburbanization – the National Interstate and Defense Highway Act of 1956. The justification for the Defense Highway Act was pure national security politicking of the kind that Zelizer delights in. President Eisenhower and his Republican Administration – and Democrats no less than Republicans – argued that billions of dollars were required to build an intersecting network of interstate highways in order to assure that, if the United States were wantonly attacked with nuclear weapons by the Soviet Union, our armed forces, even after the unthinkable “first strike” of thermonuclear war, would be able to move men and materiel to affected areas by means of the best highway system in the world.
This was the ostensible reason for building the interstates. The real reason was singular – the growth objective. In the name of defense, vast billions of dollars (now trillions) were expended to construct some 41,000 miles of highway – the gift that kept on giving, as it were – with ramps to once-green fields every five or ten miles to encourage suburban development: not merely housing but also shopping malls, the relocation of industry, office buildings, and all kinds of public development, including elementary and high school facilities as well as universities, sports arenas, and convention centers – all of which promoted automobile purchases and, especially, the purchase of durable goods and machinery for home, office, and factory. Without the interstate highways, funded because of defense requirements, there would have been no consumer economy, no aggregate demand, no cornucopia of goods and services that were the envy of the world and, especially, in the postwar era, of the Soviet Union. Think of the irony and hidden envy in Nikita Khrushchev banging his shoe at the UN, arguing with Nixon at a trade fair, and declaring “we will bury you.”
With seasoned officials such as George C. Marshall, Dean Acheson, James B. Conant, John Foster Dulles, and John J. McCloy about, it cannot be said that there was no statecraft in the Truman administrations or in the postwar administration of Dwight Eisenhower. The early twentieth-century regulatory regime, the work of Wilson and TR and to a lesser extent FDR, was maintained, but my point is this: the growth model of the American polity rendered irrelevant any further development of a strong state – one which could intervene directly in the economy, engage in planning, arbitrate authoritatively between capital and labor, and set strong guidelines for local urban planning and public education.
And, to be sure, the growth model of the American polity was a stupendous if short-term success. As such, it provided an image of prosperity and abundance for all Americans, whether rich or poor. Things now described as part of the American Dream – a nicely furnished house in the suburbs, two cars, the possibility of collegiate education for one’s children, some ability to save, perhaps even a vacation house – are products of the prosperity and stability, as well as the relative equality (middle-income wages were rising), of the late forties, fifties, and sixties. The problem is that American politicians and the American people became fixated on this era; it became the standard by which they measured themselves, their individual lives, and their collective national fate. This is unfortunate, because growth of this kind became implausible by the middle seventies.
Growth economics in the immediate postwar era was based on an anomaly: the United States was the only developed nation in the world that had not been devastated by war. Indeed, the War had improved our economy, and in order to redevelop, the world, especially Western Europe and Japan, “bought American”: machinery for their own new factories, durable goods, and foodstuffs. They also welcomed investment by American capital – a point not lost on the French publisher Jean-Jacques Servan-Schreiber, who wrote The American Challenge (Le Défi américain, 1967) – which was the first step in creating a new global economy after two world wars and the Great Depression.
Redeveloped economies like those of Germany and Japan (and all of Western and Northern Europe via the European Union) began to produce the same goods as America and became competitors. In time, other entrants from East and Southeast Asia (Korea, Singapore) and of course China and India would come aboard as well. Robert Brenner argues that in the mid-seventies rates of profit began to fall both in America and globally because of intense competition between manufacturing nations, and that this falling rate of profit produced fiscal crisis and increasing unemployment. With less revenue, governments could not afford generous social welfare programs, and in the United States, with its huge defense budget, the trade-off between social programs and defense – between guns and butter – grew difficult, with at least one political party, the Republicans, willing to sacrifice social services in order to fund national security. The need for trade-offs, or in some cases the refusal of trade-offs, is why contentious debates about annual budgets have taken central place in American politics from the late seventies to the present.
The whole question of growth in the late twentieth century is not one that can be profitably discussed in absolute terms. The decline in growth suffered by the United States in this period is comparative and relative, and it is undoubtedly true that the gross domestic product (GDP) of the United States remains by far the largest of any nation in the world. It is also true that America suffers from enormous trade (current account) deficits, budget deficits, low savings rates, and declining incomes for most people save those in the top 10% income bracket. In terms of ownership of income-producing property, about 5% of the population owns more than 80% of such assets, a statistic that most ordinary Americans find unbelievable. Widely distributed growth and prosperity are no longer, as they were in the immediate postwar era, seemingly automatic. And rates of growth for the United States, as compared with those of China, India, and Brazil, have declined since the 1990s. Compared to the export capacities of Germany and China, the U.S. export economy lags far behind.
Slowly but surely the whole world comes online with precisely the same products, obviating the theory of comparative advantage. The rate of profit tends to fall, and the rate of profit of U.S. corporations, especially those in manufacturing, has been falling since 1973. Growth in the late seventies, eighties, and nineties came almost entirely from high-tech manufacture – Intel chips and cybernetic software – and from large retail businesses, global tourism, and sophisticated agribusiness (processed foodstuffs). Moreover, there was not only a skills gap – a lack of education – separating people with good jobs from those with bad jobs or no jobs at all; there was, and probably will continue to be, a dearth of good jobs relative to the growth of the American population. So, effectively, for a very large number of Americans, the growth model of the political economy and polity no longer works.
More important, inasmuch as both the political economy and the polity were based on growth, both economic policy and politics assumed an ad hoc character when growth slackened. Under the so-called neo-liberal regime that began with Ronald Reagan in 1980 there was no statecraft; neo-liberalism is not “new” liberalism as much as it is anti-liberalism. As such, it eschews any substantial role for the state in the operation of the market and it considers government a burden rather than a support for the economy.
Under Reagan and his successors (including Clinton), political will was used to destroy rather than build government and to negate as much as possible of the regulatory regime that had been put in place by the Progressives (TR and Wilson) and the New Deal. Of course, one paradoxical source of growth was maintained – defense expenditures – and this is perhaps why the neoliberal regime has had so long a life, roughly thirty-one years (1977-2008). No one really wants to cut the defense element in the annual budget, because government spending for defense is merely Keynesian stimulation of the economy by another name; for our purpose, we can call it “defense Keynesianism.” It was expenditure on defense, even while cutting taxes for the rich, that allowed Ronald Reagan to overcome the recession of 1981-82 and win re-election in 1984 by a landslide. Bill Clinton, a better Reaganite than Reagan, used small tax increases and steady defense spending – more defense Keynesianism – to significantly improve the short-term fate of the American economy during his second term in office (1997-2001).
But even though overall growth slowed considerably, and in some spheres stopped, growth ideology maintained its hold. While the state either held steady or diminished, economic policy consisted of catch-as-catch-can and often desperate attempts to maintain growth via a string of asset bubbles that eventually burst and, even worse, redistributed wealth in an inegalitarian fashion – from the middle classes to the very wealthy. First there was the savings and loan crisis, then the dot-com crisis, and finally the immense housing and debt-swap crisis that brought the entire economy to its knees in 2008. The state’s role was to encourage investment through low interest rates, which it did via the Federal Reserve’s monetary policy. Publicists perpetuated the myth that everyone could own stock and their own home in the new “ownership” society, even while a large and growing minority of working men and women, unemployed or underemployed in a de-industrialized economy, could hardly pay their bills or send their children to college. Each bubble in turn was seen as evidence of a “new economy” in which recessions would be either mild or non-existent, and time after time, when one bubble burst, another was called forth to restore an evanescent prosperity.
The growth model no longer worked, but the American polity had nothing else in its bag of tricks. Even though neo-liberalism was the antithesis of Marxism, the neo-liberal regime brought about “the withering away of the state.” Political institutions corroded under the weight of huge amounts of money injected into the electoral system. The work of many agencies of the federal government, and even of the Department of Defense, was privatized and farmed out to private contractors (not unlike tax farming in the French ancien régime). In a nation divided between a small percentage of the very wealthy, a declining middle class, and the poor, and with the sense that there were no remedies for inequality, Social Darwinist values revived, and people were diverted from economic questions to squabbles over modernist transformations of society and culture, such as racial equality, feminism, gay rights, the right to abortion, etc. Cities were either neglected, leaving large segments of the population, particularly minorities and single mothers, in poverty and squalor, or “gentrified” in a manner that reconfigured them as loci for none but the very rich – aptly described by Themis Chronopoulos’ (2011) phrase, “luxury cities.” Cities, founts and centers of civilization and democracy, like New York, San Francisco, Boston, San Jose (Silicon Valley), Los Angeles, Chicago, and Miami became playgrounds for the wealthy and for “young urban professionals” (yuppies) in spheres where big money could be made (finance, real estate, or high technology).
Normal politics at almost every level (local, state, national, but especially national) was reduced to incessant warfare between one political party (the Democrats) which tried in vain to make a broken system work, and another political party (the Republicans) which, notwithstanding the evidence, lived in a mythical world where tax reductions for the rich increased government revenue and where an unencumbered market produced optimal results for all. What was not delusion was something worse: a lie.
The inadequacy of the polity bred extremism: a dangerous right populism that was reactionary and nihilistic, which sought to take America back to an imaginary world of free enterprise and moral purity; and a left populism which was romantic, utopian, anti-statist, and enamored of an implausible vision of participatory democracy.
While the entire world around it underwent rapid transformation, American politics and American political ideology remained fixed in the immediate postwar era, with the United States as the eternal hegemonic power, a land of abundance, prosperity, social peace, and democracy. Ronald Reagan’s success as a “great communicator” derived from his ability to tell either fairy tales or lies to an electorate that, afraid for the future, needed to believe him. Empty phrases like “the city upon a hill” and “it’s morning again in America” allowed people to feel that America was still the providential nation that the founding fathers had described as “the best hope for democracy.” But the reality was a broken polity, rife with inequality, where, in Marvin Harris’ perhaps hyperbolic phrase, “nothing works.”
Fixing the mess described above and putting statecraft at the top of his priorities is now left to President Barack Obama. It is he who has inherited the American political crisis of the late twentieth century. Can Obama transcend the mess he inherited and, in so doing, repair the country? The instrument which he has had for more than three years now, and which he may still have in his hands after the general election in November 2012, is the American state. Leaving aside the founders and Lincoln (who created and then re-created the Union), it was Presidents Theodore Roosevelt and Woodrow Wilson and their advisors who in the early twentieth century created the modern American state. FDR used this state to good effect during the New Deal and the Second World War, and so, to the extent that the war in Vietnam allowed, did Lyndon Johnson (civil rights, Medicare, Medicaid). But LBJ was a representative figure of an era, the postwar period, in which the political realm was subordinated to the economic sphere because of what I have called the growth model of the American polity – a polity that enjoyed no more than three decades of success. Under neo-liberalism, when growth slowed or in some cases ceased, it was sustained only through asset bubbles that produced cycles of booms and busts.
Obama has dealt with this crisis and with all the hardship and instability that it has caused, moving from one side to another, from one polarity to another, in attempting to do good for ordinary folk and, at the same time, keep the existing system of financial institutions and private healthcare from total collapse. But if Obama is wise and fortunate enough to have a second term as president, even while continuing to deal with a very slowly recovering economy, his priority must be political and he must, after an entire century, rebuild and strengthen the American state.
Congress, with very little fanfare and with almost no media coverage, is beginning to consider President Obama’s Reforming and Consolidating Government Act of 2012. Ostensibly, this Act has been put forth to reduce the number of Cabinet departments, to consolidate the number of departmental agencies, and thereby to save government funds and help reduce the ever-present budget deficit. If passed, the Act may actually achieve these ends. But, as with past attempts to reorganize the American government – under Franklin Roosevelt and under Richard Nixon in the twentieth century – there are also other, more important reasons for state consolidation, and these reasons are why most presidential reorganization plans have failed.
The first reason is simply that via reorganization the president gains more power and more ability to take hold of, and bend to his political will, the bureaucracy which is his charge and which empowers him. This alone makes Congress wary. The second reason for state consolidation is to allow the president and his administration to eliminate agencies that serve special interests, defined of course by the administration in charge of reorganization. And here not only Congress but the special interests involved are wary. All cabinet departments and the agencies within them serve interests – this is the way the American state has worked since the late nineteenth century, when the Department of Agriculture was created, followed in time by departments that identify themselves with interests: Commerce, Labor, Health and Human Services, Defense (lest we forget the military-industrial complex), Transportation, Housing and Urban Development, Education, Veterans Affairs. Presidents who have no concern about the government serving “interests” (for which, of course, there are literally thousands of lobbyists in Washington) do not propose state consolidation and reorganization plans, because Congress, which is all about representing interests, does not like them (and reorganization would require a transformation of all congressional committees). The reorganization plans of both FDR and Nixon got absolutely nowhere, in part because the War came along for Roosevelt and Watergate aborted many of Nixon’s more intelligent projects. At the beginning of his Administration, Clinton thought about doing it (with Gore in charge) and then, dealing with the deficit, a healthcare bill, and foreign policy, put any such thought aside.
So why, then, is the presumably cautious and centrist President Obama attempting to reorganize and consolidate the Federal government? I believe because he has his eye on the future that I have sketched out above. That future depends on him and his successors being able to use the governmental apparatus as an efficient, cost-saving, and, most important, as a planning tool. And, of course, because government reorganization and consolidation is at the root of statecraft, no individual, group or “interest” attached to the status quo likes it.
The first part of Obama’s proposal for consolidation and reorganization gives him the authority to destroy the Commerce Department (while keeping its name) and to put within it all the agencies of the Federal government that have to do with trade or business, especially on a global rather than national scale – the Office of the United States Trade Representative, the Export-Import Bank, the Overseas Private Investment Corporation, the Trade and Development Agency, and the Small Business Administration. Agencies like the National Oceanic and Atmospheric Administration (NOAA), which do not belong in the Commerce Department, will go to the Department of the Interior or, perhaps, to some revised cabinet department, which will include the Environmental Protection Agency (EPA), to focus exclusively on the vast yet pressing matter of ecology and climate change.
If Obama can put through the first part of his reorganization plan, other elements should follow, such as bringing together all agencies concerned with transportation, housing, urban development, economic development, and social services. Such a department would allow comprehensive planning of the built environment in the United States and also connect metropolitan development with appropriate social services. We need far more public transportation than we have – at the moment, the New York metropolitan area contains 75% of all the public transportation in the entire country – and something like planning for a national system of bullet trains, entailing rebuilding road beds and securing land for rights of way. The “American Dream” is tied up with the idea that home ownership is the ultimate good. But in a world of falling rates of profit and diminished growth as well as global competition, the waste of resources and the cost of private housing run counter to our real interests. In creating a new built environment, the government could focus on affordable public housing for a large part of the population, in lieu of the panacea of a suburban house with a two-car garage. Americans, like other people in the world, can be well housed in apartments, which provide the added benefit, because they do not use so much land, of eliminating “sprawl.” If the Education and Labor departments were combined, policymakers could begin to create the kind of workforce training programs required by a twenty-first-century manufacturing economy.
Doubtless the executive is not the only part of the Federal government that needs to be rebuilt or altered in a major way. The reorganization of the executive will affect Congress, because if the executive is geared to planning, then Congress will have to reorganize itself in a similar manner. But there are glaring problems with Congress that can be repaired immediately, such as the Senate filibuster, which now effectively requires at least sixty votes to end debate and allow any legislation to pass. This is an impediment to progress and constitutes a recipe for American decline.
Earlier in this essay I cited Richard Hofstadter’s observation that “America has rarely been well governed.” There are many reasons why this is so: fear of state power; the popular distaste for taxation; the historic conservative affirmation of “states’ rights”; the liberal inability to grasp the idea of the common good; the deleterious effect of the South – slavery, segregation, militarism, evangelical religion – on American politics. None of these is easily overcome, and it may be mere wishful thinking to believe that Obama or any president can successfully undertake the task of state-building.
But wishful thinking is, in one sense, what history is about. On May 6, 2012, a watershed was crossed in a civil rights movement that in fifty years – a mere blink of an eye to an historian – has seen not merely the end of African-American segregation in the South and an immense wave of global immigration that has altered the ethnic composition of the United States, but also the recognition by the president of the United States and the new president of France (François Hollande) of the right of gay, lesbian, bisexual, and transgender (GLBT) men and women to marry; what was once taboo has now become commonplace. We speak endlessly in modern society of the revolutionary nature of industrial capitalist social and economic transformation, but we often forget that the continuing democratic evolutions and revolutions that also began in the eighteenth century are what has made capitalism, a system which is by definition without a human face, into a system that allows for the improvement of human life, not in spite of but because of human struggle.
At this juncture in the history of the United States, statecraft is more important than ever for building a better America and a better world. There are already many good things about the United States to celebrate, so it is no small thing, and no implausible thing, to emphasize that the weak link in American history and society is the state. Fixing it, and building it, is what must be done.