Canada’s definition and documentation of “Indians” is a project of bureaucratic knowledge production in service of the continued assertion of settler colonial political visions.1 The Indian Act was introduced in 1876 to assert the terms of the political relationship between the Dominion of Canada and certain peoples the Act defines as “Indians.” The Act has been amended many times, but it remains a current piece of legislation in Canada and still defines “Indian” as a political and legal category of person.2 Defining and identifying “Indians” served the broader project of managing Canada’s so-called Indian problem. From the perspective of nineteenth-century legislators, the “problem” was one of Indigenous peoples asserting nationhood and insisting on claims to the lands where they have lived since time immemorial, thus creating obstacles for settler claims to sovereignty. But it is also a problem of knowledge, which Indian Affairs administrators sought to address through a practice of classification. To apply and enforce the provisions of the Act designed to undermine Indigenous sovereignty and compel Indigenous peoples’ assimilation, “Indians” had to be made visible to state legislators, bureaucrats, and other agents. The definitional work of the Indian Act is both a technique of classification and a way of seeing.
The Bolshevik Revolution strove to create a “new man,” a morally and psychologically superior human being. This new man required a complete physical and mental renewal, including, among other measures, the hygienic literacy of the masses. A wide range of media were employed for the Revolution’s ends, including not only various forms of print but also mobile cinemas and theatrical productions. A theater movement aimed at instructing the masses gained strength in the early years of the Revolution, and many theatrical performances addressed prevailing problems in public health. The hygienic awareness of the population was especially crucial during World War I and the Russian Civil War that followed, when diseases flourished in conditions of hunger and claimed millions of lives. In the 1920s, the performances came to local clubhouses and reached even the kolkhoz fields to entertain and educate workers and farmers. Beginning in 1925, theatrical hygiene propaganda was centrally managed by the newly founded Moscow Theater for Sanitary Culture (1924–1947).
Russia’s support for right-wing politicians around the world has been in the news a lot in recent years. From Ukraine to France and the United States, Vladimir Putin has aligned Russia with political groups that oppose immigration, LGBT rights, and secularism. But this isn’t the first time a Russian leader has been the figurehead of world conservatism.1 After the Congress of Vienna in 1815, Russia was known as the “gendarme of Europe” for its interventions against revolutionary forces all over the continent. Before that, Russia stood alongside Britain in leading the worldwide reaction against the French Revolution.
Often remembered as a critique of Keynesian economics, Friedrich Hayek’s The Road to Serfdom (1944) contained two other important assertions about the future of liberalism. Buried in the thirteenth chapter—“The Totalitarians in Our Midst”—of Hayek’s bestseller was a discussion of the fundamental relationship between knowledge and liberalism. Hayek posited there that the humanities represented the road to freedom, whereas science represented the road to totalitarianism: “serfdom.” In particular, he singled out the idea, common among socialists at the time, that science could serve as a basis for new moral laws and social betterment. He called this idea “German” and labeled it anti-liberal. Only insights from the humanities, he claimed, could provide an ethical culture for the liberalism of the future. Hayek depicted a progressive science as authoritarian and the traditional humanities as freeing.
Long a matter of academic attention, the very criteria of what makes a fact now circulate as a matter of politics. Indexing the increasingly widespread concern about what makes a fact, Oxford Dictionaries selected “post-truth” as the 2016 word of the year.
In a recent column in Dissent, the historian Daniel T. Rodgers takes issue with how the word “neoliberalism” has become “a linguistic omnivore” in present-day scholarship. Deeming its success “a measure of its substantive hollowness,” he untangles its various meanings (“market fundamentalism,” “commodification of the self,” “finance capitalism,” and so on) and appeals for a return to a descriptive language closer to social reality. I argue the contrary here. Neoliberalism owes its success to its distinct ideological shape, which functions thanks to, not in spite of, its paradoxes and contradictions. Although the original agenda of neoliberalism has been revised many times over, its scope and reach have steadily increased. Its commonly overlooked scientific dynamism, sponsored by private individuals and foundations, relayed by think tanks, and embedded in a growing, yet problematic “marketplace of ideas,” remains at the very heart of the neoliberal project today.
A century ago, World War I brought devastation and violence to Europe and other regions of the world, in many cases upending previously dominant political, social, and cultural orders. For women in large parts of the Western world, the end of the war saw a historic achievement, the right to vote.1 Suffragist activists had fought for this right for the better part of the previous century. They employed a wide range of tools, organizing rallies and marches, founding political organizations, and even conducting hunger strikes and rare acts of violence. Newspapers and magazines such as The Suffragist in the United States were common tools for spreading information and knowledge, building networks, and encouraging and motivating readers. Part of a broader history of the politicization of women and their bodies, the history of the suffragist movement was local and global, national and transnational.2 Female political activism did not end with the right to vote or with standing for and holding office, however. It took up voter mobilization and women’s political and civic education.
We are members of knowledge societies, but we live in “an age of ignorance.” We are swimming in “oceans of ignorance” that have been consciously, unconsciously, and structurally produced “by neglect, forgetfulness, myopia, extinction, secrecy, or suppression.”1 Little wonder, then, that there is also a lot of ignorance about the persistence of racism as a structural phenomenon that orders society in discriminatory ways and racial knowledge as a normalized element of our knowledge societies.
“We are living in a new age,” President Sukarno proclaimed at the First National Science Congress in 1958, “the age of atomic revolution, of nuclear revolution, explorers and sputnik, of interplanetary communications with the moon and the stars, and the content of the sea.”1 And the new age, he reasoned, necessitated new roles. If it were up to him, scientists and other academically trained elites would guide Indonesia’s development into the future. Yet there seem to have been two problems. Although Indonesians had conducted scientific research during the colonial era, their numbers remained insignificant. As a result, Indonesian culture lacked a sense of scientific authorship and ownership.2 At the same time, “science” had overtly Western and imperialist connotations, against which the new Indonesian state postulated its postcolonial identity. Here I discuss three discursive strategies that Sukarno employed during the 1950s and early 1960s to resolve these tensions and Indonesianize the production of academic knowledge.
In the 1970s and 1980s, the concept of the “knowledge society” (Wissensgesellschaft) rapidly gained in popularity among social scientists and politicians in Western countries.1 The concept referred to a socioeconomic system that was no longer organized around the manufacture of material—especially industrial—goods but instead around the production of knowledge, expertise, and highly specialized skills. The prominence of this perspective was strongly influenced by the experience of de-industrialization in Western Europe and North America in the last third of the twentieth century, with former sites of industrial production being dismantled and the so-called service sector rapidly gaining in importance. Closely linked to the emphasis on the relevance of knowledge in the twenty-first century was concern with educational models that seemed to be outdated because they were rooted in the industrial paradigm of the nineteenth and twentieth centuries. It was in this context that school and university curricula were revised and “modernized” so that they would match the technological demands of postindustrial societies. These efforts were driven by the understanding that the international standing of formerly industrial countries and regions depended on their ability to supply and apply the skills and expertise needed to compete in an increasingly global economy.