The fear of destruction is supposed to be a key driving force in capitalism, the spur that drives competitors to ceaselessly innovate and seek efficiencies to stave off bankruptcy or takeover at the hands of their rivals. Indeed, Joseph Schumpeter held up the frequency of business collapses as a great virtue of capitalism – his “creative destruction” that brings innovation through the downfall of outmoded former leading firms. (Ironically, the term emerged in a book, Capitalism, Socialism, and Democracy, that predicted the peaceful and fruitful downfall of capitalism at the hands of democratic socialism – a fact which rarely comes up when the term is discussed in economics or business classes.)
And indeed, one of Schumpeter’s observations about the real economy – that, rather than adjusting smoothly to competition, many individuals and firms are simply destroyed – seems borne out by the fact that a majority of new businesses fail in their first year. However, the spur of fear comes at a cost: firms lose the ability to look beyond the short term. As Steven Pearlstein notes, the 1980s shift to a “market for corporate control” (i.e., hostile takeovers of companies that don’t deliver 5% earnings growth per quarter), combined with the growing emphasis on quarterly performance in executive compensation (performance bonuses at multiples of base salary, stock options, etc.), locked managers into short-term thinking. As the people at the Aspen Institute note, public policy can change this somewhat, but at a certain point, all companies have to focus on the short term.
But the same is not true for the public sector.
Thinking in the Long Run:
Governments and public institutions, by contrast, are as close to permanent as any human institution can get. The British Exchequer, for example, dates back to the reign of Henry I in the 12th century – more than eight hundred years. While that’s something of an extreme example as continuous bureaucracies go, it remains the case that public institutions routinely have institutional memories going back one or two hundred years, even in relatively young countries like the United States.
What this means, in essence, is that public institutions are not bound by the business cycle. Certainly, a recession will crimp public revenues and most likely bring a change in the party in power, but the institutions survive. And that means that governments can look, think, and act beyond the business cycle – and beyond the secular cycle as well.
All of which means that government can act as the ultimate venture capitalist, making investments that might not pay off for decades to come – a role that only the public sector can play.
Indeed, the more we consider the development of capitalism as we know it, from the 18th century to the present, the more ridiculous the idea seems that capitalism flourishes only in laissez-faire environments. Even granting that the development of American capitalism is not the same as, say, the development of German capitalism – where the state was a major investor, subsidizer, and manager of industrial production, especially in heavy industries such as coal, iron, and steel, railroads and telegraphs – the history of American economic development shows that American governments at all levels have enjoyed huge success as extreme long-term venture capitalists in infrastructure and technology.
The Cumberland Road and the Erie Canal, a Federal and state infrastructure investment respectively, were the ultimate venture capital projects of their time, far above the level of risk that any private investor could have afforded. Indeed, many private railroad and canal corporations went bankrupt in the first fifty years of the American republic, due to the high costs of construction and the scarcity of capital.
Later in the 19th century, Federal and state investments in railroad construction showed the same pattern – because of the extremely high up-front costs of building a railroad (purchasing the land, the engine, and the rails, paying for the labor-intensive construction), railroad corporations were entirely reliant on government subsidization. Both the Federal and state governments gave railroads enormous amounts of public lands that were used as collateral for bank loans and stock issues, as well as cost-plus construction contracts and bonuses paid for from public bonds.
In the 20th century, the same thing happened with air and car transportation. The Works Progress Administration built over 250 airports at a time when commercial aviation was just struggling to get off the ground – indeed, before jet engines and cheap fuel radically brought down the time and cost of air travel, it was the U.S. Postal Service’s Air Mail that kept many airports in business. And if you look at other industries – the links between high-tech firms and military/aerospace contracts, the fact that the government was both a major producer and the major consumer of computers well into the 1970s, the huge amount of basic science and research done at the National Institutes of Health, the Defense Advanced Research Projects Agency (DARPA – the folks who invented the internet), and other agencies – you can’t help but realize that the government, for all its occasional misfires with supersonic air travel or cancer vaccines in the 1970s, has an amazing track record as a financier of research and development.
And the long term really matters when it comes to the kind of initial investment in research and development that we’re talking about. Jimmy Carter’s solar panels on the White House were widely mocked in the late 1970s, since at that time solar-electric technology was incredibly inefficient at capturing and storing power and nowhere near competitive with traditional coal- or gas-powered electrical plants. Thirty years later, however, solar power is a major and growing player in the energy industry, basically competitive with traditional fuels, and solar companies’ stocks are highly prized. But none of that could have happened without early Federal investment.
Thus, for all that conservatives say that government intervention stifles innovation, the principle we should derive from this is that wherever basic research is necessary, wherever investment in a new technology is simply too expensive or too risky, and especially where a new technology is functional but a long way off from profitability, the government should play the dominant role.
For anyone who isn’t a free-market zealot, this is a relatively non-controversial proposition, for all that it suggests that the area that Joseph Schumpeter labeled as the cause of economic progress and the prime mover of the business cycle – namely, technological innovation – is an area where the public sector is superior to the private sector.
What is a more controversial idea is that the government should get both fair value and a voice in the use of technology that it helps to finance. In this country, we’ve become very accustomed to the idea that corporations should get these things almost for free – from the huge public investments in transportation infrastructure, to the essentially free use of the NIH’s basic biomedical research that later becomes hugely profitable drugs, to the oil, water, timber, and mineral resources extracted from public lands for a fraction of their market value. Why shouldn’t the Federal government demand a fair share of the return on what is, after all, the public’s money?
Moreover, despite the fact that the Federal government arguably has a large property right in new technologies it develops, corporations have been free to make use of Federal advances without listening to the Federal government. Take the NIH and the pharmaceutical industry – Pfizer, Merck, and other companies literally couldn’t exist without the Federal government’s contributions to basic research and the use of its labs, so why shouldn’t the Federal government insist, in return, on the right for Medicare and Medicaid to negotiate bulk purchase rates, or even regulate the price of pharmaceuticals outright?
For all that economics is supposed to be a social science, it’s actually quite hard to think about how the economy functions without our beliefs getting in the way. Ever since the Renaissance, people in the West have tended to think of innovation as stemming from the individual genius, the lone creator working in their laboratory or workshop or studio to surprise the world with their superlative discoveries and inventions.
From that vantage point, it’s easy to see why the role of the innovator, the entrepreneur, has become so crucial a concept for capitalism and its theorists. Especially given the tension between the philosophical liberal individualism that has accompanied economics ever since Adam Smith (originally a moral philosopher, mind you) and the reality that the modern capitalist world is dominated by huge, bureaucratic institutions called corporations, the innovative individual has become an ideologically crucial figure – a way of keeping the faith that the individual, even if it’s just the rare genius, is still in the driving seat.
But what if innovation comes from the group, from the collaborative work of dozens and hundreds of people who live and work in institutions? What if government turns out to be not Leviathan but Mercury?