Publicly available. Published by De Gruyter, April 27, 2021

The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds

  • Mathias Risse


In contrast to China’s efforts to upgrade its system of governance around a stupefying amount of data collection and electronic scoring, countries committed to democracy and human rights did not upgrade their systems. Instead, those countries ended up with surveillance capitalism. Performing such an upgrade is vital for the survival of those ideas about governance. This paper aims to contribute to that goal. I propose a framework of epistemic actorhood in terms of four roles and characterize digital lifeworlds and what matters about them both in terms of how they fit in with Max Tegmark’s distinctions among stages of life and in terms of how they generate their own episteme, the data episteme, with its immense possibilities of infopower (a term inspired by Foucault). Epistemic rights that strengthen existing human rights – as part of a fourth generation of rights – are needed to protect epistemic actorhood in those roles. In the long run, we might well need a new kind of right, the right to the exercise of genuinely human intelligence.

Where is the Life we have lost in living?

Where is the wisdom we have lost in knowledge?

Where is the knowledge we have lost in information?

T. S. Eliot, from the opening stanza of Choruses from “The Rock,” 1934

1 Introduction: 1948, Analog and Digital

On December 4, 1948, George Orwell sent to his publisher the manuscript of 1984, a dystopian novel that would capture some of the great fears of the 20th century.[1] Set in an imagined future in the year 1984, the book explores the consequences of mass surveillance and the repressive regimentation of everything people do (Bowker 2003, chap. 18; Orwell 1961). Less than a week after Orwell’s submission, on December 10, 1948, the UN General Assembly took a historic vote. The idea that there should be a document stating protections and provisions owed to all humans had gained momentum during the Second World War. A growing sense that human affairs had, repeatedly, gotten derailed dramatically in the 20th century made the late 40s a period when the project of institutionalizing human rights briefly flourished before the world encountered its next crisis and plunged into the Cold War. While at its founding in 1945 the UN did not endorse detailed human rights prescriptions, a committee was charged to attend to that task. Under Eleanor Roosevelt’s leadership, the Human Rights Commission drafted the preamble and 30 articles that would become the Universal Declaration of Human Rights (UDHR) through that vote in late 1948.[2]

But while the year 1948 thereby marked progress for the endeavor to see each human life protected, it also witnessed breakthroughs in a very different domain. That year Norbert Wiener (2019) published Cybernetics: Or Control and Communication in the Animal and the Machine, and Claude Shannon (Shannon and Weaver 1971) published “The Mathematical Theory of Communication,” two groundbreaking pieces for the multidisciplinary efforts to come to terms with an increasing abundance of information and rapidly developing capacities for electronic computation, especially in the new field of computer science, or informatics.[3] Alan Turing had earlier developed the Turing machine, an abstract mathematical model of computation. In 1950, he would publish “Computing Machinery and Intelligence,” proposing an experiment that became known as the Turing Test, an attempt to define standards for machines to count as “intelligent” (1950). In Princeton, in the 40s and early 50s, John von Neumann advanced the theoretical design of digital electronic computers and built machines that accelerated the development of hardware. The digital age was on its way (Dyson 2012a, 2012b; Nilsson 2009; see also Buchanan 2005). Eventually it would make the fears articulated by Orwell come alive in ways beyond what even he could imagine, and beyond what a document from 1948 could protect against.[4]

The term “lifeworld” (from German Lebenswelt, which is familiar from phenomenology, especially Husserl) characterizes the immediate impressions, activities and relationships that make up the world as a person experiences it and that people in shared cultural contexts experience together.[5] In 1948 lifeworlds were thoroughly analog, involving interactions and technologies driven by tactile physical experiences and organized around measurements that represented what they measured, as a clock’s moving hands represent time, making clocks “analog” to time. That the Universal Declaration emerged from our analog past does not mean it fails to address the digital lifeworlds we increasingly inhabit, lifeworlds structured around electronic devices and numerically coded information (“digital” information, from the Latin for finger). But it does mean the Declaration was designed to respond to the numerous ways people were mistreated, specifically in the analog lifeworlds of the industrial age with its political and economic possibilities. Only decades after the Declaration would digital lifeworlds connect humans, sophisticated machines and abundant data in the elaborate ways that now shape our reality, as a result of developments that accelerated in the 1940s.[6]

Eventually these lifeworlds might merge into a full-fledged Life 3.0, whose participants not only design their cultural context (as was true for Life 2.0, which developed from the evolutionary, precultural Life 1.0), but also their physical shapes (for three stages of life, see Tegmark 2017). Digital lifeworlds in Life 3.0 might be populated by genetically enhanced humans, cyborgs, uploaded brains, as well as advanced algorithms embedded into any manner of physical device. If there is an intelligence explosion (singularity) from within our digital lifeworlds, genetically or technologically unenhanced humans – who, ironically, created those lifeworlds – would be intellectually inferior to other inhabitants, and might find Life 3.0 unwelcoming.

Only time can tell whether digital lifeworlds will lead from our analog past to a fully digitalized future with such high-tech inhabitants, the last phase of Life 2.0 evolving into Life 3.0. But we clearly inhabit digital lifeworlds now and must adjust the human rights project to protect human life as it unfolds, albeit also with an eye to what the future might bring. In digital lifeworlds as we know them, ever more activities are captured as data and stored, sorted and processed. Digital data can be copied without loss of quality as many times as one likes, and at great speed. More and more of what we do, say, or believe, and more and more of our movements and interactions leave digital marks, some potentially permanent. To mention some symptoms of the digitalization that has engulfed us: as of 2000, one quarter of global information was digital, but as of 2013, it was 98%. More data are gathered since we increasingly shift activities into digital formats, storage is cheaper, computational power to process data has steadily increased, and replication and transmission of digital information is easy. As of 2018, every day 269 billion emails were sent, 350 million photos uploaded on Facebook, 3.5 billion Google searches conducted, and 500 million tweets sent (Susskind 2018, pp. 63–64).

Providing myriads upon myriads of data, digital lifeworlds engage us much more as epistemic actors – as knowers and knowns – than was ever even possible in the analog world with its limited capacities for storing, sorting or processing information. Accordingly, it is regarding their epistemic rights, rights as knowers and knowns, that humans need especially high levels of protection at this late stage of Life 2.0 with its colossal possibilities of epistemic intrusiveness. After all, not even the totalitarian surveillance of pre-digital times comes close to the surveillance possibilities generated by the fact that all electronic devices, ranging from smartphones, tablets and laptops to GPS devices, networked climate control, doorbells and home systems like Alexa, collect data, and that all Internet platforms do so. If we do get to Life 3.0, these rights would have to include one to the exercise of a distinctively human intelligence in lifeworlds shared with entities of our making that nonetheless might surpass us immeasurably.

It is partly because of the relevance they already have and partly because of their relevance in a possible Life 3.0 that we should give the kind of importance to epistemic rights that comes with acknowledging them as components of a fourth generation of human rights. Accordingly, I seek to substantiate the significance of epistemic rights, understood as human rights, for the protection of a distinctively human life. Casting this project in terms of a fourth generation of human rights highlights its philosophical and political urgency. In recent years the Chinese Communist Party has upgraded its rule to a new technological level built around a stupefying amount of data collection and electronic scoring. This startled observers who expected the rise of the Chinese middle class to bring democratization and a broader embrace of human rights. What is also striking is that countries committed to democracy and human rights did not upgrade their systems. Instead of adjusting democracy and human rights to new technological possibilities, those countries ended up with what Shoshana Zuboff calls surveillance capitalism (2019). It is vital for the survival of those ideas (democracy and human rights) to undergo such an upgrade. The present project aims to contribute to that.[7]

Section 2 introduces a framework of “epistemic actorhood” to capture four different roles persons play in information exchanges, with an eye to digital lifeworlds. Epistemic rights protect individuals as knowers and knowns in these four roles. Section 3 explores the (substantial) presence of epistemic rights in the UDHR and beyond, and thus the recognition they have received in the analog world for the protection of a distinctively human life. That section also introduces the background to the discussion about a fourth generation of human rights. Section 4 discusses digital lifeworlds by embedding them into the large-scale historical perspective captured by the Life 1.0/2.0/3.0 distinction. Section 5 looks at digital lifeworlds from another angle, enlisting perspectives from Foucault (who already appears in Section 2). Knowledge itself is a problematic notion that needs to be understood at its nexus with power, which then also holds for epistemic rights.

With these characterizations of digital lifeworlds in place, Section 6 turns to epistemic rights in Life 2.0. Such rights are already exceedingly important because of the epistemic intrusiveness of digital lifeworlds in Life 2.0. They should be stronger and more extensive than what the UDHR provides. But if Life 3.0 does emerge from our digital lifeworlds, we might need another right, one to exercise human intelligence in the first place, as discussed in Section 7. The point of a fourth generation of human rights is to protect human life in light of ongoing technological innovation, but then also in the presence of new kinds of intelligence. Whether there will be such Life 3.0, and just what it will look like, is for now a matter of speculation. But we do not want to start pondering what is needed to protect a distinctively human life only once it becomes clear that there will indeed be such changes. That is especially so since change might occur rather quickly. The required argument for the validity of the right to the exercise of human intelligence can draw on the secular meaning-of-life literature. Arguments that make the case that human life has meaning if there are no deities will also show that super-intelligent non-human inhabitants of Life 3.0 have reason to respect human life enough to accept such a right. Section 8 concludes.

Note two qualifications. To begin with, my proposals for the content of that fourth generation in Sections 6 and 7 are manifesto-style. I hope to establish the appropriateness and necessity of thinking about such a fourth generation, and thereby help establish an agenda that develops these themes in detail.[8] Also, here I do not explore the notions of epistemic rights and epistemic justice themselves in detail, nor do I offer the substantive account of human rights needed to show that epistemic rights can be human rights by the lights of such an account.[9] For present purposes I assume it is clear enough what it means to speak of epistemic rights as rights protecting individuals in various roles of epistemic actorhood, and that it is plausible enough in light of the importance of knowledge for a distinctively human life that epistemic rights, once established, would be human rights. What this piece adds is an account of the special significance of digital lifeworlds in our thinking about protecting a distinctively human life to the extent that it involves practices of acquiring knowledge and becoming known to others.

2 Epistemic Actorhood

Data, I take it, is anything recorded and transmissible in some act of communication. Information is data that is useful in given contexts. Most commonly data is useful by being accurate, the kind of thing captured in truthful statements.[10] Information-gathering with the intention of acting back on the environment is the key activity of all intelligent life. “To live effectively is to live with adequate information,” Norbert Wiener once wrote (1989, p. 18). For humans, inquiry – the systematic gathering of information through language or otherwise – is an essential pursuit. Accordingly, much scrutiny is devoted to what constitutes successful inquiry, involving fields like epistemology or scientific methodology. However, knowledge acquisition is not exhaustively understood as a purely rational matter. Inquiry inevitably occurs in contexts where information is channeled and presented somehow, and where it is more or less difficult for people to acquire knowledge, including self-knowledge. Scrutiny of human inquiry therefore also involves history, ethics, sociology or political science.

Throughout history and across cultures, multifarious standards of inquiry have evolved. Michel Foucault used the term episteme – Greek for understanding – to denote the structure of thought, the worldview(s), of an era, structures that, one way or another, are collectively maintained in ways that reflect power structures. Individual inquirers can evade them only under strains, intellectual or political. The episteme includes a shared set of rules of how to go about inquiry, of who gets to go about what kinds of inquiry, as well as a shared body of what counts as knowledge (see Foucault 1982, 1994, 1980; on Foucault, see Gutting 2001, chap. 9; Watkin 2018).

But as we reflect on inquiry, we must recognize humans not merely as individual knowers and as collectively maintaining epistemes, but also as (wittingly or unwittingly) revealing information (individually or collectively). Much information people seek is about other humans. So, individuals – things about them, personal data – are known to others. We are “knowers,” but also “knowns.” And people are also known in aggregates: individuals gather information about behavioral patterns of neighbors, customers or fellow citizens. Polling and market research have made strides in coming to know people collectively, for which digital lifeworlds offer a rich array of tools. As revealers or bearers of information, individuals are subject to rules that define success in terms of knownness, one’s own and that of others. These rules are a subset of those that apply to successful inquiry (where the target is humans). What is distinctive about this subset is not the rationality that applies to seeking information, but the moral, social or political standards expressing what information should or should not be available about people, and to whom. Moreover, as members of collectives, people maintain rules regulating what we should reveal about ourselves and what is generally known about us (all of which, again, is part of the episteme, since knowers are also knowns).

An “epistemic actor” is a person or entity integrated into some communication network (system of information exchange) as seeker or revealer of information. In philosophical discourse, “actors” often are people with agency (“agents”), a notion associated with choice and rationality. But the term also denotes performers who follow scripts provided by producers. This sense is what I enlist. Talking about epistemic “actors” rather than “agents” de-emphasizes that they do things in ways reflecting genuine choice and a background rationality these individuals themselves could expound. Epistemic actors, with their thoughts, feelings and beliefs, play certain roles within communication networks. As seekers they obtain, and as revealers they generate, information. In both cases this occurs according to prevalent standards, which vary in nature from rational to moral or sociological. These standards can be critically assessed or transgressed. However, individuals – the actors – do not normally even noticeably contribute to these standards. Both in terms of being knowers and in terms of being knowns, actors fill roles by meeting expectations that are not of their making but instead reflect what is required by the episteme.[11]

We can distinguish four roles that constitute epistemic actorhood: individual epistemic subjects, collective epistemic subjects, individual epistemic objects, and collective epistemic objects. Since I am interested in digital lifeworlds, I introduce these roles with an eye to such contexts. First of all, people operate as individual epistemic subjects. They are learners or knowers whose endeavors are expected to respect certain standards of inquiry, ranging from standards of rationality (how best to obtain information) to moral standards or plain societal divisions of labor (who gets to have what kinds of knowledge). To gather and process information, people must grasp established norms within the episteme. This will include finding appropriate use for media, from books or newspapers to photos or videos. In digital lifeworlds much has changed in terms of how this role is fleshed out. Information is stored and processed at astronomic scales. The Internet has begun to approximate what H. G. Wells called a world brain (2016).

Secondly, people are part of a collective epistemic subject. In that capacity they help establish or (more commonly) maintain standards of inquiry, the various types of rules constitutive of the current episteme. Whereas in the first role I myself figure things out, according to certain standards, in this second role I hold others to standards and help create standards. This role is about maintenance of the episteme. For many, how they fill the role of contributor to, or sustainer of, the information environment is rather passive, consisting in compliance. Nonetheless, the role as such has been transformed in the digital age since the way we gather information has been affected considerably through the availability of digital media: we may google things, or have information sent our way from certain platforms.

Thirdly, persons are individual epistemic objects. They get to be known by others as delineated by rules concerning what information about oneself may be shared. This role is that of an information holder, or provider – the role of a known. It is about managing privacy, which entails many complications. Expectations around the role of individual epistemic objects apply to both oneself and others: there are limits to what we are supposed to reveal about ourselves (depending on whom we interact with), and there are expectations around both what kind of information we are supposed to reveal about others and any other ways in which we might facilitate their getting to be known. Increasingly what we feel or believe is data that can be gathered or inferred from things we do (such as clicks). We can be traced in multifarious ways. We are subject to much surveillance. Accordingly, this role has been greatly boosted in digital lifeworlds. Some people (“influencers”) even become famous through the way they share things about themselves.

Finally, individuals are part of a collective epistemic object. They maintain and contribute to the pool of what is known about us collectively and help ascertain what is done with it. This role is that of a contributor to data patterns, parallel to that of a maintainer of the epistemic environment where information is gathered. Digital lifeworlds have brought lasting changes to data-gathering because we can now be known collectively in ways that draw on an immense pool of indirectly inferred information about our inner lives and private acts that nonetheless gives rise to known patterns of human behavior, thought or feeling. This kind of understanding of human patterns would have been unthinkable before.

3 Epistemic Rights in the UDHR and Beyond

With this understanding of epistemic actorhood in place, the vocabulary of epistemic rights captures ways in which such actors can be wronged, and thus ways in which a distinctively human life can be violated to the extent that it involves practices of acquiring knowledge and becoming known to others. Let us see to what extent epistemic rights already appear in the human rights movement.

English writer H. G. Wells, who died in 1946, and whom I briefly mentioned above, is best known for science fiction like The Time Machine or The War of the Worlds. But he was also a clairvoyant social critic who offered progressive visions on a global scale. As a way of safeguarding the future, he prominently advocated a universal declaration of rights.[12] The importance of access to knowledge is present throughout Wells’ work, especially in his efforts to frame such a document (1940, chap. 10). His proposed declaration includes 11 articles, fewer than the UDHR, but on average they are longer. The first includes a “right to live”:

Every man is a joint inheritor of all the natural resources and of the powers, inventions and possibilities accumulated by our forerunners. He is entitled, within the measure of these resources and without distinction of race, color or professed beliefs or opinions, to the nourishment, covering and medical care needed to realize his full possibilities of physical and mental development from birth to death. Notwithstanding the various and unequal qualities of individuals, all men shall be deemed absolutely equal in the eyes of the law, equally important in social life and equally entitled to the respect of their fellow-men.

So, the right to live itself implicitly appeals to the importance of knowledge by insisting each person be entitled to partake of the legacy accumulated by humanity, including presumably the accomplishments of the mind. Wells introduces the “right to knowledge” as Article 4:

It is the duty of the community to equip every man with sufficient education to enable him to be as useful and interested a citizen as his capacity allows. Furthermore, it is the duty of the community to render all knowledge available to him and such special education as will give him equality of opportunity for the development of his distinctive gifts in the service of mankind. He shall have easy and prompt access to all information necessary for him to form a judgment upon current events and issues.

Wells covers freedom of thought and worship separately. But while epistemic rights were on the radar of advocates for a universal declaration, the term “knowledge” does not appear in the UDHR. Nonetheless, epistemic rights are quite present, rights we can understand as protecting both individual and collective knowers, as well as individual knowns. What is missing are rights protecting collective knowns.

To begin with, the individual epistemic object is safeguarded in Article 12 through protection from arbitrary interference with privacy, family, home or correspondence and from attacks upon honor and reputation. But the bulk of epistemic rights in the UDHR is about protecting the knower. We find freedom of thought and conscience in Article 18. Freedom of opinion and expression appear in Article 19, interpreted broadly as including freedom to hold opinions without interference and seek, receive and impart information and ideas through any media regardless of frontiers. Cultural rights indispensable for dignity and free development of personality appear in Article 22, which we can read as rights protecting the collective knower. Article 26 formulates a right to education, a crucial right protecting individual knowers. Finally, Article 27 articulates the right freely to participate in cultural life, to enjoy the arts and to share in scientific advancement, which again register as epistemic rights protecting the collective knower.

From here epistemic rights have found their way into legally binding human rights conventions and other fundamental legal documents. For instance, the Charter of Fundamental Rights of the European Union (2000) states that, “Everyone has the right to the protection of personal data concerning him or her” (Article 8). One has a right to own one’s intellectual property, protected by the World Trade Organization’s TRIPS agreement. To mention one other international example, there is the European Union’s General Data Protection Regulation, which took effect in 2018. For a domestic example, consider the UK’s 2000 Freedom of Information Act, which grants individuals rights to information held by public authorities.

Since the late 1970s, scholars and activists have talked about three generations of human rights, the first comprising civil and political rights, the second economic, social and cultural ones, and the third collective or solidarity rights.[13] The distinction was inspired by the themes of the French Revolution: liberté, égalité, and fraternité. Epistemic rights are subsumed under these categories, “knowledge” making no explicit appearance. This model is hardly meant to capture a linear progression in which one generation gives rise to the next only to disappear. Instead, the generations are interdependent and interpenetrating, much as needs once recognized continue to be needs after more needs are recognized.

For almost as long as talk about generations has been around, there has been sporadic talk of a fourth. What fourth-generation rights are supposed to cover has varied, from future generations or genetic lineage to women, indigenous people or technological change.[14] The notion of a fourth generation advances the human rights project. For two reasons, I submit that human rights as they apply in digital lifeworlds should count as that next generation, and prominently include epistemic rights.

First of all, digital lifeworlds only emerged after the first three generations had been formulated in analog lifeworlds. Given the overwhelming importance of digital lifeworlds for human life, it is fitting to see this fourth generation as connected to them. Again, China has updated its system of party rule in the last decade, reorienting its operations toward digital lifeworlds. In the part of the world shaped by liberalism, democracy and capitalism, the main tendency has been to strengthen capitalism rather than liberalism or democracy. Accordingly, we now find ourselves in surveillance capitalism rather than in democratized digital lifeworlds with strong rights protection.

Secondly, while it remains to be seen if and to what extent digital lifeworlds take humanity beyond Life 2.0, Life 3.0 could plausibly emerge only from these lifeworlds. Therefore, reflection on digital lifeworlds is a suitable starting point for the rights needed in any possible Life 3.0, a life that would put into a new place a species that has become so dominant it could name the present geological era after itself (“Anthropocene”). Accordingly, a fourth theme might be integration (intégration, to stick with the French), that of humans into the rich possibilities of digital lifeworlds that include entities surpassing human intelligence. After the first generation concerned with protecting personhood, the second with relative status, and the third with collective endeavors, the fourth concerns humanity’s relationship with entities of similar or larger general intelligence that would share our lifeworlds.

If this much is plausible, epistemic rights – based on those that already exist, but taking into account current realities and future possibilities – should be core components of that fourth generation, next steps in the development of the human rights project. Epistemic rights are already extraordinarily important because of the epistemic intrusiveness of Life 2.0 but must be stronger and more extensive than what the UDHR and subsequent documents from the analog world provide. In Life 3.0 itself these rights would also secure the distinctiveness of human life in the presence of other intelligence (which in turn would have a substantial moral status all its own). Epistemic rights in that scenario would include a right to exercise human intelligence.

4 Digital Lifeworlds and the Stages of Human Life

Let us say more about digital lifeworlds by way of embedding them into Tegmark’s stages of human life (2017). To begin with, life is a process that can retain complexity by replicating. What is replicated is both matter (“hardware,” consisting of atoms) and information (“software,” consisting of bits). That is, life is a “self-replicating, information-processing system whose information (software) determines both its behavior and the blueprints for its hardware” (Tegmark 2017, p. 25). Some life is intelligent in that it collects information about its environment through sensors, processing it to act back on its environment. Gathering and processing occurs in a broad range of ways and levels of complexity, from bacterial stimulus-response mechanisms to the complex interpretation of our environment the human eye enables our brains to perform.

In the first stage, Life 1.0, both hardware and software evolve through mutation and adaptation across generations. For individuals, everything is fixed at birth. Bacteria cannot learn anything about their environment that is not part of their DNA. In Life 2.0, hardware arises through evolution, but software is to some extent designed by living individuals. The transition to Life 2.0 is gradual, as altogether the distinctions among these stages are untidy (but one could talk about Life 2.1, 2.15, etc.). Still, a major difference between initial and later stages of Life 1.0 is the emergence of consciousness, which drives the transition to Life 2.0.

For humans, neither hardware nor software is fully available at birth. That our bodies grow outside the womb means growth potential is not capped by size. That brains do most learning in ways beyond activating what is transmitted through DNA means the limits of learning are not prescribed by DNA. Individuals acquire much software through learning, first as suggested by our environment, later under our own direction. Information contained in DNA has not evolved dramatically in the last several thousand years. Meanwhile information stored collectively has exploded. Ever since the development of writing, pools of information could be preserved accurately and grow over generations. Historian David Christian calls us “networking creatures,” emphasizing that collective learning characterizes our species (2004, part III). Over time information has also been used to develop sophisticated technology that provides scaffolding for later generations to use and enhance information. The Internet now in principle allows everybody to access all public knowledge through a few clicks.

This informational perspective on life illuminates the importance of epistemic rights. To begin with, let us distinguish the applicability of normative considerations from the presence of normative practices. Normative considerations apply when they illuminate a particular context or the role of certain entities. Environmental ethics applies to ecosystems, but ecosystems cannot participate in practices to articulate normative considerations. Normative considerations apply to Life 1.0, but Life 1.0 cannot sustain normative practices since those necessarily involve a sense that we make choices (thus a self-conscious “we” to begin with). If future space travelers reach a planet where all indigenous life is 1.0, respect for such life should guide them although that life cannot participate in normative practices.

Life 2.0, however, does sustain normative practices, and they concern the design of the current stage of collectively built software and how each person should fare in it. To the extent that they involve coordination of complex cooperation through language, these practices as we know them only involve humans. Within the normative practices of our Life 2.0, human rights play a distinctive role. Their point is to protect each of us from common abuses that arise from human organization: to protect each of us from the rest of us. Epistemic rights specifically protect individuals as beneficiaries of and participants in the designed software of Life 2.0, the use of intergenerationally accumulated information. To the extent that we can understand life from an information standpoint, we also see the relevance of rights that concern knowing and being known.

The limits of the possibilities of the human body – hardware constraints – have been pushed in recent centuries through increasing appreciation of nutrition, hygiene, social parameters of health, and typical causes and courses of diseases. Nonetheless, what we can do with all the accumulated information and resulting technological capacities remains constrained by the fragility of our bodies. That would be different in Life 3.0, where both software and hardware are designed. Much as the evolution of consciousness drove the transition to Life 2.0, digital technology drives the transition to any possible Life 3.0. Digital lifeworlds link humans, powerful machines and abundant data in intricate and complex ways whose potential we are only beginning to comprehend.

For now, these lifeworlds firmly belong to Life 2.0 and concern the digitalization of accumulated information, with all the processing such digitalization makes possible with ever increasing computational capacities and refined software and hardware engineering. But it is also from within the technological possibilities thus created that those who command contemporary technology increasingly push towards forms of life in which its physical containers themselves are part of what we (and then they) design. Those tendencies might eventually generate living arrangements populated by genetically enhanced humans, cyborgs, uploaded brains, as well as advanced algorithms embedded into sundry physical devices. If there is an intelligence explosion (singularity) – which, if it does happen, would happen from within digital lifeworlds – genetically or technologically unenhanced humans (ironically those who created those lifeworlds) would be intellectually inferior to Life 3.0’s other inhabitants. They might find that life unwelcoming, even unbearable.

What is remarkable about current digital lifeworlds is that they make individuals known to governments, companies and each other in ways they never were in analog lifeworlds. Thereby individuals also contribute to behavioral patterns governments and corporations use to shape the future. Since possibilities for self-knowledge too are increasingly fashioned by digital media, in the process governments and corporations can also increasingly determine what kind of possibilities for self-knowledge highly networked humans may have.[15] It is in light of these rather overbearing possibilities, to reconnect to the previous section, that among fourth-generation human rights there must be substantially strengthened epistemic rights to protect personhood in digital lifeworlds.

The emergence of Life 3.0 is compatible with Lives 1.0 and 2.0 still being around. But living arrangements for unenhanced humans could indeed be precarious: they might be tolerated not as actual participants but as fringe figures, perhaps much the way dogs are now. Unlike dogs, however, they would know that things were once different: whereas Life 1.0 was integrated into Life 2.0 without any awareness of the transition, Life 2.0 would be conscious of its integration into Life 3.0. If Life 3.0 gets under way, normative practices will involve humans and the synthetic forms of life that evolved from within our digital lifeworlds. It would no longer just be humans working out among themselves what counts as appropriate treatment. Instead, in Life 3.0, one normative discourse would likely be about humanity as such receiving the right treatment vis-à-vis other entities that could claim an elevated normative status all their own. This would also entail that protecting human epistemic actorhood would not merely mean protecting humans in terms of various facets of access to information; it would also mean protecting them as they exercise a distinctively human form of intelligence to begin with.

5 The Data Episteme: Infopower in Digital Lifeworlds

Let us next approach digital lifeworlds and their relevance for humanity from a different angle: the nexus of knowledge and power analyzed by Foucault, who complicated the notion of knowledge in new ways. A brief contrast with Francis Bacon will be useful.

A statesman and thinker of early Stuart England, Bacon the philosopher was celebrated for establishing an inductive methodology for scientific inquiry. His famous dictum “knowledge is power” is an early but paradigmatic statement of scientific optimism: knowledge is acquaintance with facts and regularities “out there;” how to get so acquainted is teachable; and the right methods empower knowers to do things much better than others ever could.[16] Bacon the politician would have grasped the usefulness of scientific understanding for statecraft. What was not on his radar was the idea of an intimate two-way relationship locking knowledge with power. That thought permeates Foucault’s work.

Foucault insists that “there is no power relation without the correlative constitution of a field of knowledge, nor any knowledge that does not presuppose and constitute at the same time, power relations” (1977, p. 27). While the Baconian tradition thinks of knowledge as initially residing outside of the value-laden domain of politics (where power operates), to which it could subsequently be imported, Foucault contends that what passes for knowledge is always influenced by power relations. As noted in Section 2, every era has its structure of thought, which inquirers can evade only under great strain and which also constrains self-knowledge, an understanding of one’s personhood and place in the world. The term episteme denotes this kind of grounding in conditions of possibility that always already reflect the power relations of the era.[17]

Foucault’s Discipline and Punish observes that in the late 18th century the manner of punishment changed. Instead of corporal punishment, including executions in front of jeering crowds, there was a move towards workhouses and prisons. The point and purpose of punishment was no longer public cruelty but instilling obedience through compulsory discipline and routine. That is, the late 18th century witnessed a move from public, often chaotic practices of punishment towards more private and insidious ones. Based on that observation, Foucault investigates where else in society related tendencies appeared, that is, where society aimed to force regular patterns upon people to make them compliant. He finds that schools, hospitals and the military operated similarly. Society’s increasingly diffuse exercise of power instilled routine in people across this range of seemingly very different institutions. Routine encourages us to conform, and accordingly also limits our ability to construct identities that resist such conformity.

A society of docile routine-followers can be readily controlled, partly through “an explosion of numerous and diverse techniques for achieving the subjugations of bodies and the control of populations” (Foucault 1990, Vol. 1, p. 140). Such measures amount to biopower, a term that, for Foucault, refers to practices of public health, regulation of heredity, and risk regulation, among other mechanisms often linked less directly with physical health. In the process, those under its power became increasingly legible to the government, through intricate administrative systems for tracking identities. In due course there would be standardized passports (now biometric, to verify the holder is the one named), social security numbers, sundry identification numbers, licenses, credit scores, health records and employment contracts. Birth certificates ground our belonging to a state to begin with. Eventually personhood itself comes to be organized around such identifiability.[18]

Colin Koopman transferred Foucault’s approach to digital lifeworlds (2019; see also Cheney-Lippold 2017). The term he uses for our current system of knowledge is data episteme. Over and above biopower, a new type of power permeates society: infopower, which stands and falls with the data episteme. The enormous amount of data in digital lifeworlds makes possible online profiles that must be carefully managed (for purposes from finding mates to building professional networks); approaches to marketing driven by data mining around demographic categorizations; cyberwars among governments; unmatched levels of state-sponsored surveillance (as revealed by Edward Snowden); the high level of data collection that makes our current stage of capitalism surveillance capitalism; unprecedented levels of data-sharing via social media; personalized genetic reporting; and the quantification of our selves through electronic wearables.

As in the case of biopower, in the age of infopower participants make themselves compliant with power structures around the phenomena just listed, a compliance procuring benefits and freedom for some, but deprivation and unfreedom for others. As Koopman says,

information’s formatting is a work that prepares us to be the kinds of persons who not only can suffer these inequalities and unfreedoms, but can also eagerly inflict them, often unwittingly, on others who have also been so formatted. Information thus became political precisely when we became our information.[19]

Let us reconnect to our discussion of 1948 as a breakthrough year in various domains. Koopman submits that the Wiener-Shannon theory of information was not actually a theory of information. Instead, it was, as the titles of the two seminal works reveal, a theory of communication that presupposed information as the material it would transmit. It is a theory of information channels, not information itself. By the time Wiener and Shannon wrote, steps into the data episteme had already been taken, making people comfortable with the idea that communication would not be about wisdom or knowledge (as in the Eliot epigraph), but about something that could be called information.

The history of the term “information” speaks volumes about underlying changes in intellectual outlook. Originally, in a metaphysical sense, for X to be informed by Y meant for X to be shaped (given form, be in-formed) by Y, where basic cosmological principles normally would be at work to do that shaping (putting form into matter). Later on, in a more empiricist world, it came to mean for X to receive a report from Y. The source of form-giving had shifted from cosmology to observations. The X that was being in-formed was a person’s mind, as nothing else could be a knower. But then, as John Durham Peters writes,

between the middle of the 18th and the middle of the 19th century, there arose a new kind of empiricism, no longer bound by the scale of the human body. The state became a knower; bureaucracy its sense; statistics its information. (1988, p. 14)

So, the meaning of “information” changed alongside the emergence of statistics, a discipline not only bound up with the changing nature of the state since the 18th century, but etymologically derived from “state.” Implicit in statistics is a knower not subject to the limits of individuality and mortality that constrain a person’s mind. Statistical data are “gathered by mortals, but the pooling and analysis of them creates an implied-I that is disembodied and all-seeing” (Peters 1988, p. 15. See also Hacking 1990). Once computers are developed, they do what the state has been doing for a while, though they do it more efficiently and elegantly: “they make vast invisible aggregates intelligible and manipulable” (Peters 1988, p. 15).[20]

The software was there long before the hardware. In that sense, Wiener and Shannon’s work was both pathbreaking and a reflection of a path embarked on decades before. Or as philosopher of technology Lewis Mumford put it, the computer existed as a practice long before it existed as a machine.[21] In 1948 people were therefore also receptive to theorizing communication in terms of information, a manner of understanding communication that would amount to “the relentless encouragement of further communications” (Halpern 2015, p. 74).[22]

To anybody who has ever struggled to say anything insightful about what “information” actually is and about how humankind would ever have managed without that notion, it should now be clearer why from within the data episteme it is difficult to reflect on those matters. In response to that very question, a recent textbook introduction to Wiener-Shannon theory (representatively) contains the following statements:

So, what is information? It is what remains after every iota of natural redundancy has been squeezed out of a message, and after every aimless syllable of noise has been removed. It is the unfettered essence that passes from computer to computer, from satellite to Earth, from eye to brain, and (over many generations of natural selection) from the natural world to the collective gene pool of every species. (Stone 2015, p. 20)

In other words, practitioners of information theory are as baffled to explain their basic notion as two young fish in David Foster Wallace’s What is Water? (2009) are to make sense of the term “water” when it comes up in exchange with an elder fish. If you live in water, you can no longer explain what water is. If you live in the data episteme, you have trouble accounting for what information is.

6 Epistemic Rights in the Digital Lifeworlds of Life 2.0

So what kind of protection is needed for epistemic actorhood, first in the digital lifeworlds of Life 2.0 and then (possibly) in Life 3.0? Put differently, what kind of epistemic rights can rein in infopower in our data episteme? In Life 3.0 human rights must be reconsidered. They were meant to protect against threats from other humans when the only other intelligent life around was other animals that had arisen alongside humans in the evolution of organic life. Amazing adaptation to niches notwithstanding, other animals are inferior to humans in general intelligence. If Life 3.0 does arise, human rights would also need to secure a moral status potentially threatened by synthetic life of a possibly enormously larger intelligence. In the domain of epistemic rights this would involve a right to the exercise of human intelligence. But before it comes to that, epistemic rights must be formulated and secured for the last stage of Life 2.0 – which is immensely important for its own sake and puts humans in a position to argue human intelligence is worth protecting.

Let us deal with digital lifeworlds in Life 2.0 first. What kind of protection is needed in the four roles of epistemic actorhood? To formulate a proposal, I work with four values that, jointly with the formulation of those roles, guide us towards protections and entitlements needed in the data episteme of digital lifeworlds. These values are welfare, autonomy (independent decision-making), dignity (respectful, non-infantilizing, non-humiliating treatment), and self-government (control over leadership). I take it that these values are clear enough, and recognizable as core values of the human rights movement.[23] The following list of rights should be understood cumulatively: rights introduced to protect epistemic actors in one role also protect them in others, but I will not list them again. The most important addition to the epistemic rights the human rights framework already contains is a set of rights to protect persons in their role as parts of the collective epistemic object.

  1. Rights to protect individuals as individual epistemic subjects (knowers)

Welfare: What is primarily needed is a substantially boosted right to education, including basic literacy in digital lifeworlds. Future economic and political possibilities in the data episteme increasingly depend on such a capacity.

Autonomy: Freedom of thought, expression and opinion, including the right to seek information, are already established as human rights. What is also needed is an explicit right to have governments and companies take measures to prevent the use of the tools digital lifeworlds provide for the systematic spread of falsehoods that would undermine the capacity for independent decision-making (content moderation).

Dignity, Self-Government: Nothing more to be added with those other rights in place.

  2. Rights to protect individuals in their roles as belonging to the collective epistemic subject

Autonomy: There already are cultural rights indispensable for dignity and free development of personality and the right freely to participate in cultural life, to enjoy the arts and to share in scientific advancement and its benefits. These need to be adjusted to the data episteme (and actually be taken seriously). The way infopower is exercised can only be legitimate if rights are in place that generate possibilities of participation in the design of the data episteme.

Welfare, Dignity, Self-Government: Nothing more to be added with those other rights in place.

  3. Rights to protect individuals as individual epistemic objects (knowns)

Autonomy: Rights to protection of personal data, combined with much education about how important such protection is.

Dignity: There already are rights to be protected from arbitrary interference with privacy, family, home or correspondence and from attacks upon honor and reputation. These rights must be adjusted for digital lifeworlds with their new possibilities of synthetic media (e.g., deepfakes).

Welfare, Self-Government: Nothing more to be added with those other rights in place.

  4. Rights to protect individuals in their roles as belonging to the collective epistemic object

Self-Government: There need to be rights to substantial control over collected data. One hallmark of the data episteme is an enormous amount of data collection. Control over these data must be broadly shared. This is the most important genuine addition to the body of existing human rights. When the UDHR was passed in 1948, nothing like this data deluge and its possible uses by governments and companies was on the radar.

Welfare, Autonomy, Dignity: Nothing more to be added with those other rights in place.

I paint with a broad brush here, offering rights in manifesto style. These rights require refinement and specification, and we would need to spell out how such refined and more closely specified rights give moral guidance, but then also how to conceptualize them legally. Doing so comes with agendas of its own, as the demand just stated (that control over collected data must be broadly shared) illustrates. Next we would need to consider proposals for what broadly shared control amounts to and explore what kinds of obligations all this would entail for the different actors in the human rights domain (most importantly states and companies).[24]

7 Epistemic Rights in the Digital Lifeworlds of Life 3.0

If a full-fledged Life 3.0 emerges, it will do so from within digital lifeworlds. It might be populated by genetically enhanced humans, cyborgs, uploaded brains, as well as advanced algorithms embedded into any manner of physical device. Genetically or technologically unenhanced humans would be intellectually inferior to other inhabitants. As creatures from Life 2.0, they would be unable to design their shapes and thus also be inferior in terms of longevity and abilities to entities that can do so. Unenhanced humans might find Life 3.0 inhospitable. The likely response is for humanity to enhance itself.

Normative practices would change. The new entities human ingenuity made possible must be accorded a moral status all their own. New moral and legal rights and standards would need to delineate the complex relationships among these various entities.[25] Human rights must expand beyond protecting “each of us from the rest of us” to protecting “us from them,” much as such protection would also have to run in the opposite direction. As far as epistemic rights are concerned, we need a right to the exercise of genuinely human intelligence: a right to use the human mind, with powers and limitations that reflect millions of years of evolution of organic life, a right that would need to hold even if we were surrounded by intelligence vastly greater than ours. Such a right, in Life 3.0, would have to hold against the various kinds of intelligence participating in its normative practices. Again, I propose this right in manifesto style, aiming to establish its basic appropriateness and necessity with the goal of helping to create an agenda.

These new intelligences might have little patience for us. They might extinguish us, as Stephen Hawking, for one, warned.[26] So how to argue for such a right? To begin with, these new entities would be designed by us, or anyway spring from technologies that emerged from digital lifeworlds. Human intelligence and the larger context of organic life, with all its obvious shortfalls, made synthetic intelligence possible. This would presumably not merely be a matter of nostalgia, but the foundation of a profound respect for human intelligence. Support for such an argument from respect could come from the secular meaning-of-life literature. The reasons philosophers have offered for why human life would not be pointless in a godless universe could explain why non-human life would have reason to endorse a right to the exercise of a genuinely human intelligence. Let me make this point through brief references to Bertrand Russell and Ronald Dworkin.

Russell is a seminal figure in multiple areas of mathematics and philosophy. One of his best-known pieces, a classic contribution to the secular meaning-of-life literature, is his 1903 article “A Free Man’s Worship” (1976). Russell takes account of the intrinsic meaninglessness of the physical universe to explore where that leaves us by way of understanding the point of human existence. His knowledge of the sciences as of around 1900 delivered a rude awakening to any thinking that placed us high up in a metaphysically conceived “great chain of being.” Nothing in or about the world could answer questions about the point or purpose of human life. We could only provide these answers from within ourselves, from an internal human standpoint.

But we can indeed do that much because we have the kind of mind that allows us to do so. As Russell (1976) writes, in the heavy prose he used in those days:

Man is yet free, during his brief years, to examine, to criticize, to know, and in imagination to create. To him alone, in the world with which he is acquainted, this freedom belongs; and in this lies his superiority to the resistless forces that control his outward life.

And a bit later:

In this lies Man’s true freedom: in determination to worship only the God created by our own love of the good, to respect only the heaven which inspires the insight of our best moments. In action, in desire, we must submit perpetually to the tyranny of outside forces; but in thought, in aspiration, we are free, free from our fellow-men, free from the petty planet on which our bodies impotently crawl, free even, while we live, from the tyranny of death. Let us learn, then, that energy of faith which enables us to live constantly in the vision of the good; and let us descend, in action, into the world of fact, with that vision always before us.

And yet a bit later:

The life of Man, viewed outwardly, is but a small thing in comparison with the forces of Nature. The slave is doomed to worship Time and Fate and Death, because they are greater than anything he finds in himself, and because all his thoughts are of things which they devour. But, great as they are, to think of them greatly, to feel their passionless splendor, is greater still. And such thought makes us free men … To abandon the struggle for private happiness, to expel all eagerness of temporary desire, to burn with passion for eternal things—this is emancipation, and this is the free man’s worship.

Humans can put their brains to work vis-à-vis each other in such a way that most things we have long cared about (everything associated with human accomplishment) are grounded in lifeworlds of shared experience. We literally live the life of the mind: that the human brain enables that life makes it an awesome thing, worthy of respect from all manner of intelligence.

More recently Dworkin echoed that thought in a discussion of a secular understanding of sacredness (1993, chap. 3). He sees human life as the highest product of evolution, in the secular sense that it features enormous complexity, mental abilities and self-awareness. In addition, each life reflects the efforts of civilization, parental care, and so on, to make it flourish; that is enough to generate intrinsic, objective value. Human life rightly generates awe in us, admiration, inspiration. That value should also suffice to generate a right to the exercise of genuinely human intelligence in the presence of more intelligent creatures. To fully put in place the epistemic rights that apply in digital lifeworlds of Life 2.0 would also make us worthy of such a right to the exercise of human intelligence in Life 3.0.

8 Conclusion

In contrast to China’s efforts to upgrade its system of governance to new technological heights built around a stupefying amount of data collection and electronic scoring, countries committed to democracy and human rights did not upgrade their systems. It is vital for the ongoing relevance of those ideas about governance to perform such an upgrade. Protecting epistemic actorhood, in turn, is crucial to that project. After all, a distinctively human life (which human rights protect) now unfolds in digital lifeworlds. A set of epistemic rights that strengthen existing human rights – as part of a fourth generation of human rights – is needed to protect epistemic actorhood in such lifeworlds. Democracy too occurs in digital lifeworlds and can only flourish there if citizens are protected as knowers and knowns, both individually and collectively. Otherwise infopower will be wielded by only a few, and such a regime will hardly be to the collective benefit.

In the long run, if indeed we progress into Life 3.0, we need a new kind of human right, a right to the exercise of genuinely human intelligence. To the extent that we can substantiate the meaning of human life in the godless world that most of contemporary science describes, we can also substantiate such a right vis-à-vis artificial intelligences. If it comes to that, we must hope that such arguments can persuade a superior intelligence, and that such intelligence will participate in shared normative practices. But such intelligence, by definition, would be vastly beyond ours, and thus would be impossible for us to anticipate.

Corresponding author: Mathias Risse, Harvard Kennedy School, Harvard University, 79 JFK St, Cambridge, 02138, USA, E-mail:


Bacon, F. 2008. In Francis Bacon: The Major Works, edited by B. Vickers. New York: Oxford University Press.Search in Google Scholar

Benjamin, R. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.10.1093/sf/soz162Search in Google Scholar

Bobbio, N. 1996. The Age of Rights. Cambridge, UK: Polity.Search in Google Scholar

Bostrom, N. 2016. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.Search in Google Scholar

Bowker, G. 2003. Inside George Orwell: A Biography. New York: Palgrave Macmillan.Search in Google Scholar

Buchanan, B. G. 2005. “A (Very) Brief History of Artificial Intelligence.” AI Magazine 26 (4): 53–60.Search in Google Scholar

Chang, H. -J. 2009. Bad Samaritans: The Myth of Free Trade and the Secret History of Capitalism. New York, NY: Bloomsbury Press.Search in Google Scholar

Cheney-Lippold, J. 2017. We Are Data: Algorithms and the Making of Our Digital Selves. New York City: NYU Press.10.2307/j.ctt1gk0941Search in Google Scholar

Christian, D. 2004. Maps of Time: An Introduction to Big History. Berkeley: University of California Press.10.1525/9780520931923Search in Google Scholar

Conway, F. 2006. Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics. New York: Basic Books.Search in Google Scholar

Coomaraswamy, R. 1997. “Reinventing International Law: Women’s Rights as Human Rights in the International Community.” Commonwealth Law Bulletin 23 (3–4): 1249–62, in Google Scholar

Dilloway, J. 1998. Human Rights and World Order: Two Discourses to the H.G. Wells Society. Nottingham: H. G. Wells Society.Search in Google Scholar

Dretske, F. I. 2008. “The Metaphysics of Information.” In Wittgenstein and the Philosophy of Information, edited by A. Pichler, and H. Hravochec, 273–83. Frankfurt am Main: Ontos.10.1515/9783110328462.273Search in Google Scholar

Dworkin, R. 1993. Life’s Dominion: An Argument about Abortion, Euthanasia, and Individual Freedom. New York: Knopf.Search in Google Scholar

Dyson, G. 2012. Turing’s Cathedral: The Origins of the Digital Universe. New York City: Vintage.Search in Google Scholar

Dyson, G. B. 2012. Darwin Among the Machines: The Evolution of Global Intelligence. New York City: Basic Books.Search in Google Scholar

Eubanks, V. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.Search in Google Scholar

Falcon y Tella, M. J. 2007. Challenges for Human Rights. Leiden; Boston: Martinus Nijhoff Publishers.10.1163/ej.9789004160224.i-138Search in Google Scholar

Floridi, L. 2013. The Ethics of Information. Oxford: Oxford University Press.10.1093/acprof:oso/9780199641321.001.0001Search in Google Scholar

Floridi, L. 2013. The Philosophy of Information. Oxford: Oxford University Press.Search in Google Scholar

Foucault, M. 1977. Discipline and Punishment: The Birth of the Prison. London: Travistock.Search in Google Scholar

Foucault, M. 1980. In Power/Knowledge: Selected Interviews and Other Writings, 1972-1977, edited by C. Gordon. New York: Vintage.Search in Google Scholar

Foucault, M. 1982. The Archaeology of Knowledge: And the Discourse on Language. New York: Vintage.Search in Google Scholar

Foucault, M. 1990. The History of Sexuality, Vol. 1: An Introduction. New York: Vintage.Search in Google Scholar

Foucault, M. 1994. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage.Search in Google Scholar

Fricker, M. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford; New York: Oxford University Press.10.1093/acprof:oso/9780198237907.001.0001Search in Google Scholar

Gaukroger, S. 2001. Francis Bacon and the Transformation of Early-Modern Philosophy. Cambridge; New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511612688.

Giere, R. N. 2010. Scientific Perspectivism. Chicago: University of Chicago Press.

Gleick, J. 2012. The Information: A History, A Theory, A Flood. New York: Vintage.

Gooding, D. C. 1994. Experiment and the Making of Meaning – Human Agency in Scientific Observation and Experiment. Dordrecht: Springer.

Groebner, V. 2007. Who Are You? Identification, Deception, and Surveillance in Early Modern Europe, translated by M. Kyburz, and J. Peck. Brooklyn, NY: Zone Books.

Gutting, G. 2001. French Philosophy in the Twentieth Century. Cambridge, UK; New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511806902.

Hacking, I. 1990. The Taming of Chance. Cambridge, UK; New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511819766.

Halpern, O. 2015. Beautiful Data: A History of Vision and Reason since 1945. Durham: Duke University Press. https://doi.org/10.1215/9780822376323.

Hamano, T. 1998. “H. G. Wells, President Roosevelt, and the Universal Declaration of Human Rights.” Life & Human Rights 9 (Autumn): 6–16.

Haugeland, J. 1981. “Analog and Analog.” Philosophical Topics 12 (1): 213–25.

Henry, J. 2017. Knowledge Is Power: How Magic, the Government and an Apocalyptic Vision Helped Francis Bacon to Create Modern Science. Cambridge, UK: Icon Books.

Illich, I. 1971. Deschooling Society. New York: Harper & Row.

Illich, I. 2001. Tools for Conviviality. London: Marion Boyars.

Innes, D. C. 2019. Francis Bacon. Phillipsburg, NJ: P & R Publishing.

Kerner, C., and M. Risse. 2021. “Beyond Porn and Discreditation: Promises and Perils of Deepfake Technology in Digital Lifeworlds.” Moral Philosophy and Politics 8 (1): 81–108. https://doi.org/10.1515/mopp-2020-0024.

Klein, N. 2007. The Shock Doctrine: The Rise of Disaster Capitalism. New York: Metropolitan Books.

Kline, R. R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore: Johns Hopkins University Press.

Koopman, C. 2019. How We Became Our Data: A Genealogy of the Informational Person. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226626611.001.0001.

Krücken, G., and G. S. Drori, eds. 2010. World Society: The Writings of John W. Meyer, 1st ed. Oxford: Oxford University Press.

Lash, S. M. 2002. Critique of Information. London: Sage Publications. https://doi.org/10.4135/9781446217283.

Lauren, P. G. 2003. The Evolution of International Human Rights. Philadelphia: University of Pennsylvania Press.

Leonelli, S. 2016. Data-Centric Biology: A Philosophical Study. Chicago; London: University of Chicago Press. https://doi.org/10.7208/chicago/9780226416502.001.0001.

Lesne, A. 2007. “The Discrete vs. Continuous Controversy in Physics.” Mathematical Structures in Computer Science 17 (2): 1–39.

Lewis, D. 1971. “Analogue and Digital.” Nous 5 (3): 321–7.

Morsink, J. 1999. The Universal Declaration of Human Rights. Philadelphia: University of Pennsylvania Press. https://doi.org/10.9783/9780812200416.

Mumford, L. 1974. Pentagon of Power: The Myth of the Machine, Vol. II. New York: Harcourt Brace Jovanovich.

Nilsson, N. J. 2009. The Quest for Artificial Intelligence. Cambridge; New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511819346.

Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press. https://doi.org/10.2307/j.ctt1pwt9w5.

O’Neil, C. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Reprint ed. New York: Broadway Books.

Orwell, G. 1961. 1984. New York: Signet Classic.

Partington, J. S. 2007. “Human Rights and Public Accountability in H. G. Wells’ Functional World State.” In Cosmopolitics and the Emergence of a Future, edited by D. Morgan, and G. Banham, 163–90. Basingstoke; New York: Palgrave Macmillan. https://doi.org/10.1057/9780230210684_9.

Partington, J. S. 2016. Building Cosmopolis: The Political Thought of H. G. Wells. Aldershot; Burlington: Routledge. https://doi.org/10.4324/9781315261171.

Peters, J. D. 1988. “Information: Notes toward a Critical History.” Journal of Communication Inquiry 12 (2): 9–23.

Piketty, T. 2014. Capital in the Twenty-First Century, translated by A. Goldhammer. Cambridge, MA: Belknap Press of Harvard University Press. https://doi.org/10.4159/9780674369542.

Pinker, S. 2019. “Tech Prophecy and the Underappreciated Causal Power of Ideas.” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by J. Brockman, 100–12. New York: Penguin Press.

Radder, H., ed. 2003. The Philosophy of Scientific Experimentation. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt5hjsnf.

Risse, M. 2012. On Global Justice. Princeton: Princeton University Press.

Ritchie-Calder, P. 1967. On Human Rights. London: H. G. Wells Society.

Rosenberg, D. 2013. “Data before the Fact.” In “Raw Data” Is an Oxymoron, edited by L. Gitelman, 15–40. Cambridge, MA; London: The MIT Press.

Russell, B. 1976. “A Free Man’s Worship.” In Mysticism and Logic Including a Free Man’s Worship, 25–30. London: George Allen & Unwin. https://doi.org/10.4324/9780203450963-2.

Scott, J. C. 1998. Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press.

Shannon, C. E., and W. Weaver. 1971. The Mathematical Theory of Communication, 16th Printing ed. Urbana: The University of Illinois Press.

Smith, D. C., and W. F. Stone. 1989. “Peace and Human Rights: H. G. Wells and the Universal Declaration.” Canadian Journal of Peace Research 21 (1): 21–6, 75–8.

Smith, D. W. 2013. Husserl. London; New York: Routledge. https://doi.org/10.4324/9780203742952.

Stiglitz, J. E. 2013. The Price of Inequality: How Today’s Divided Society Endangers Our Future. New York: W. W. Norton & Company.

Stone, J. V. 2015. Information Theory: A Tutorial Introduction. Sheffield: Sebtel Press.

Sunstein, C. R. 2016. The Ethics of Influence: Government in the Age of Behavioral Science. New York: Cambridge University Press. https://doi.org/10.1017/CBO9781316493021.

Susskind, J. 2018. Future Politics: Living Together in a World Transformed by Tech. Oxford; New York: Oxford University Press.

Tegmark, M. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf.

Thorp, T. M. 2014. Climate Justice: A Voice for the Future. Houndmills: Palgrave Macmillan. https://doi.org/10.1057/9781137394644.

Turing, A. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433–60.

Vasak, K. 1977. “Human Rights – A Thirty-Year Struggle: The Sustained Efforts to Give Force of Law to the Universal Declaration of Human Rights.” The UNESCO Courier 30 (11): 29–32.

Wagar, W. W. 1961. H. G. Wells and the World State. New Haven: Yale University Press.

Wallace, D. F. 2009. This Is Water: Some Thoughts, Delivered on a Significant Occasion, about Living a Compassionate Life. New York: Little, Brown and Company.

Watkin, C. 2018. Michel Foucault. Phillipsburg, NJ: P & R Publishing.

Watson, L. 2018. “Systematic Epistemic Rights Violations in the Media: A Brexit Case Study.” Social Epistemology 32 (2): 88–102.

Wells, H. G. 1940. The Common Sense of War and Peace. London: Penguin.

Wells, H. G. 2016. World Brain. Redditch, Worcestershire: Read Books Ltd.

Wiener, N. 1989. The Human Use of Human Beings: Cybernetics and Society. London: Free Association Books.

Wiener, N. 2019. Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/11810.001.0001.

Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.

Published Online: 2021-04-27
Published in Print: 2021-10-26

© 2021 Walter de Gruyter GmbH, Berlin/Boston
