If you had a conversation about the growing gap between rich and poor almost anywhere in today’s world, you would very likely refer to “the top one percent,” a phrase that evokes the skyrocketing wealth of the superrich. A similar conversation in West Germany in the 1970s or 1980s would have revolved around the latest movements in wage earners’ aggregate share of the national income, evoking images of a society divided into employers and employees. In 1980s Britain, you might have talked about income growth among the bottom tenth of the population, as the government tried to steer the discussion away from income relativities and overall inequality.
In recent decades, a diagnosis of democratic crisis or even of a post-democratic condition has emerged in public debate in many Western states. The rise of electoral abstention, particularly since the 1970s and 1980s, often serves as statistical evidence for this assessment.1 Yet what exactly can abstention tell us about the state of democracy? My current research project proposes to historicize abstention, not as a political phenomenon per se but rather as an object of contention that has given rise to multiple interpretations and practices in the political sphere. To that end, I ask how political actors—politicians and officials, but also journalists and political scientists—handled abstention through the postwar decades in France, West Germany, and Switzerland. Knowledge played a significant role in the ways abstention was framed in public debate, a framing accompanied by the development of various forms of expertise on the topic, from the emerging discipline of political science to electoral surveys.2 Public discourse on abstention became a matter for experts, journalists, and established politicians—none of whom were prone to abstaining from voting themselves.
Brainstorming as a way to organize ideation was first practiced in the United States in 1938 at the advertising firm Batten, Barton, Durstine & Osborn (BBDO). One partner, Alex Osborn, later described it as “using the brain to storm a problem,” adding that it should be done “in commando fashion.”1 As a method for thinking freely and wildly, so as to generate “new thoughts and ideas that no individual would have thought of on their own,”2 it was remarkable for its initial combination of conscious effort and play, of tenacious exercise and practices of freedom, and of rationality and irrationality. Brainstorming gained traction in American manufacturing, government, and the military in and after World War Two.3 And while brainstorming developed as a knowledge-generating practice squarely at the heart of military-industrial settings, it was pitted against the predominant utilitarian rationalities of management, the military, and bureaucracy. Practiced in settings that explicitly suspended hierarchical orderings, it was geared toward the democratic expertise of no expertise—where anybody can have ideas. I have hypothesized that in order to overcome the boundaries imposed by modern and emergent rationalities in these settings, brainstorming offered a form of counterknowledge: an understanding that came about by not following the usual rules of thought.4
A specter is haunting current political discourse: the specter of cultural cleavage. More and more observers see the emergence of a socio-cultural gap between a hegemonic, globalist, educated class and an underrepresented, locally anchored underclass. The titles of two studies speak volumes: Cleavage Politics and the Populist Right (2010) by sociologist Simon Bornschier, and “The Class Basis of the Cleavage between the New Left and the Radical Right” (2012) by political scientist Daniel Oesch. Meanwhile, French philosopher Guillaume Paoli observes a cultural confrontation between two societal blocs.1 And in his recent work on the “society of singularities,” German sociologist Andreas Reckwitz postulates a new “cultural class divide”—a polarizing dichotomy between a “new middle class” equipped with high levels of cultural and economic capital and a “new underclass” lacking all of this.2
At the beginning of the history and sociology of knowledge as we know them today, there was a crisis. By the early 1970s, the future of the earth as a natural habitat for prosperity and progress looked so bleak that many observers turned pessimistic. Most famously, the Club of Rome declared The Limits to Growth in its 1972 report. But other institutions and intellectuals took a similar line. To name just one, Nicholas Georgescu-Roegen, an economics professor at Vanderbilt University, probed the depths of history with The Entropy Law and the Economic Process (1971) only to find that Malthus was right all along. In spite of two centuries of industrial frenzy, entropy always was and always would be the reigning earthly principle.
The postwar was, as it often is, a projecting age. Following World War One, political, military, intellectual, and other leaders resolved to prevent such a catastrophe from ever occurring again. The projects proposed at the Paris peace talks were many and varied in origin, scale, and ideology. More significant, though, was an overarching commonality in their conceptualization: the projects were defined by a certain way of thinking.
What do governments know? When and why have they generated knowledge about themselves, sovereign territories, the functioning of bureaucracies, legal systems, and the effectiveness of legislation? In other words, how have officials made legible that capacious concept we call the state?
State knowledge took on heightened importance in Central Europe in the nineteenth century with the transition away from the remaining vestiges of feudalism. This is especially evident in the revolutions of 1848. Over the course of two turbulent years, revolutionaries protested against a great many things. They most famously called for national unification and the introduction of liberal constitutions, but they also demanded the reform of outdated modes of administration. Such demands were unsettling for governments in two ways. First, they required a rethinking of law, as well as of the kinds of bureaucratic structures and activities needed to bring about a more flexible handling of domestic affairs. And second, they prompted an urgent need to generate knowledge to gauge the effectiveness of these initiatives.
Frederick the Great (1712–1786) was not a homosexual. Or so claimed the German physician and amateur medical historian Gaston Vorberg in 1921. Scurrilous rumors about the sexual desires of the legendary Prussian monarch had circulated ever since the eighteenth century. Vorberg sought to debunk them using the tools of critical scholarship and source analysis. In his essay "Gossip about the Sex Life of Frederick II," Vorberg defended the straightness of the king on the basis of his “long and arduous research.”
Canada’s definition and documentation of “Indians” is a project of bureaucratic knowledge production in service of the continued assertion of settler colonial political visions.1 The Indian Act was introduced in 1876 to assert the terms of the political relationship between the Dominion of Canada and certain peoples the Act defines as “Indians.” The Act has been amended many times, but it remains current legislation in Canada and still defines “Indian” as a political and legal category of person.2 Defining and identifying “Indians” served the broader project of managing Canada’s so-called Indian problem. From the perspective of nineteenth-century legislators, the “problem” was one of Indigenous peoples asserting nationhood and insisting on claims to the lands where they have lived since time immemorial, thus creating obstacles to settler claims to sovereignty. But it was also a problem of knowledge, which Indian Affairs administrators sought to address through a practice of classification. To apply and enforce the provisions of the Act designed to undermine Indigenous sovereignty and compel assimilation, “Indians” had to be made visible to state legislators, bureaucrats, and other agents. The definitional work of the Indian Act is both a technique of classification and a way of seeing.
Beginning in the second half of the nineteenth century, as intensified Western aggression hastened the Qing Empire’s decline, Chinese sociocultural elites started to question the value and relevance of their traditional knowledge system. Believing knowledge to be the secret behind the rise of the Western powers, these elites avidly consumed so-called New Learning (xinxue), that is, general, mostly Western knowledge that was new and foreign to China.1 Importing, translating, and reading books containing Western knowledge were deemed urgent tasks, crucial to the survival of China. As the renowned reformer Liang Qichao (1873–1929) put it, “if a nation wants to strengthen itself, it should translate more Western books; if a student wants to stand on his own feet, he should read more Western books.”2