Democracy Naturalized: In Search of the Individual in the Post-truth Condition

This article takes a ‘naturalistic’ look at the historically changing nature of the individual and its implications for the terms on which democracy might be realized, starting from classical Athens, moving through early debates in evolutionary theory, to contemporary moral and political thought. Generally speaking, liberal democracy sees individuality as the mark of an evolutionarily mature species, whereas socialist democracy sees it as the mark of an evolutionarily immature species. Overall, the individual has been ‘de-naturalized’ over time, resulting in the indeterminate figure who thrives in the post-truth condition.

surveillance across the world. More striking in this context is the counterinsurgency of 'hackers', who come from the most unexpected quarters, reflecting the widespread availability of the skills needed not only to process information but to produce it-more to the point, to reverse engineer it (Wark 2004). Indeed, I have encapsulated the ontological stance of the post-truth condition as 'To be is to be redeployable' (Fuller 2020a, Introduction).
To be sure, this state of affairs results in a kind of 'agonistic democracy', which is arguably democracy's default condition (Laclau and Mouffe 1985). After all, long before the internet, in that cradle of democracy, classical Athens, matters were often settled by voice votes and drawing lots. By the standards of today's 'epistemic democrats', such methods would count as 'post-truth', since they presume that every citizen is in principle equal to any task and the overall strength of feeling on a topic should carry the day. The sense of justice in these procedures supposedly lay in the reversibility of any decisions so taken, though as Thucydides documented in The Peloponnesian War, that policy brought its own perils when practiced too often. Indeed, on his telling, whatever errors of judgement were committed by the Athenian demos, they were compounded by a lack of resolve in following through on them.
More generally, the philosophical discussion of democracy has been historically ambiguous on the exact role of the demos in the determination of actionable truths: Is the truth ultimately what a majority-or even a supermajority-decide under normal cognitive conditions-or perhaps under special ('ideal') cognitive conditions? The very fact that these questions are central to the discussion suggests that self-appointed 'democratic' philosophers are not fully signed up to democracy. Of course, they are in good company, not least John Stuart Mill and later defenders of 'deliberative democracy', who seem to want to ensure that however many people, of whatever background, are involved in democratic processes, they should end up taking decisions that are 'correct' by some independently approved normative standard, typically defined as closeness to one's own true interests-if not closeness to some more comprehensive understanding of 'objective' truth, for which fidelity to the 'facts' serves as a proxy. Yet, such lofty metaphysics masks a technocratic 'rule by experts', an epistemocratic oligarchy, in more precise terms (Fuller 2020c). For this reason, those who have studied deliberative democracy in practice have concluded that this 'epistemic' approach renders 'deliberation' a Potemkin village of human decision-making, which the critics use as leverage for an argument against democracy altogether (Brennan 2016, ch. 3). I more simply conclude that the hypocrisy behind deliberative democracy reveals that democracy has never been what many of its theoretical proponents have thought it is; hence my earlier characterization of the censoriousness surrounding the post-truth condition is itself a bit post-truth.
Against philosophy's historically passive-aggressive attitude towards democracy, I have sympathetically presented the post-truth condition as the moment that democracy comes into its own-that is, without the ministrations of 'democratic theorists' (Fuller 2018, 2020a). However, this moment has been widely misunderstood. An especially 'fake' reading of post-truth equates it with 'relativism' in all its normatively deplorable senses. Yet, relativism presupposes a sense of boundedness-of individuals, cultures, etc.-that post-truth specifically denies. Indeed, the 'rule by experts' admired by those who oppose post-truth more closely approximates the relativist standpoint. After all, expertise presupposes well-defined domains of reality to which only those who have undergone certain discipline-based forms of training have proper access. The association of a fixed enculturation with a fixed reality is what makes expertise inherently 'relativist'. Someone who understood the implications of this point was Werner Heisenberg, who corresponded with Thomas Kuhn about 'closed sciences' that may be very effective, but only because they treat a domain of reality as a system bounded by a set of fixed principles, over which those trained in the science have absolute jurisdiction (Bokulich 2006). Interestingly, Heisenberg had in mind Newtonian mechanics, especially after the early twentieth century revolutions in physics. Unlike Kuhn, who regarded Newtonian mechanics as the gold standard of 'paradigms' in science because it channeled physics research for two centuries, Heisenberg held that relativity and quantum theory were more 'scientific', not simply because they could predict phenomena that Newton could not but because they operate with a more open-minded conception of the reality underwriting the predictions. Somewhat in the spirit of Karl Popper, Heisenberg believed that true scientists (as opposed to 'mere experts') approached reality more as prospectors than owners.
I have defended the fruitfulness of this semantic and metaphysical open-mindedness as 'deviant interdisciplinarity' (Fuller 2013).
Of course, none of this denies that post-truth stands in a peculiar relationship to 'realism', as philosophers normally understand that term. Indeed, post-truth is 'antirealist' in Michael Dummett's rather exact sense, which is about the indeterminacy-that is, neither the denial nor the pluralization-of reality, in the absence of an agreed decision procedure that resolves the indeterminacy. It is this deep insight, grounded in Wittgenstein's philosophy of mathematics, to which we must turn to understand the post-truth condition. A striking feature of the post-truth condition is its highly conflictual nature. This can be attributed to a lack of kriterion in the original Stoic sense: namely, where to draw the line between what is 'in' and 'out', 'yours' (or 'ours') and 'mine'-and 'true' and 'false'. We might regard this lack of kriterion as the antirealist's 'state of nature', and the post-truth condition returns us to that. From that standpoint, Hobbes might be the patron saint of post-truth, with the 'social contract' corresponding to the decision procedure that makes all the right binary 'cuts', to recall the Greek root of kriterion.
Nowadays, the default 'rules of the game' in various spheres are being challenged, in ways that typically implicate some form of double standard on the part of those in charge of the rules. What passes for 'populism' in politics today largely begins with this complaint, often cast as accusations of 'corruption'. However, it would be a mistake to reduce this complaint to an expression of 'cynicism' or 'skepticism', as these terms are normally understood. It has a deeper moral dimension, which explains the pervasive sense of outrage in our times. The risk theorist Taleb (2018) has captured it as the need to have 'skin in the game'. More prosaically, the gamemasters are accused of not playing by their own rules. Indeed, the post-truth condition is normatively animated by a quest for fairness in an admittedly unpredictable world, a point to which I shall return in the Conclusion. While in the past, this quest was largely the focus of those on the receiving end of asymmetrical power relations (aka 'victims'), today everyone seems to think they occupy that position. This has especially blindsided 'social justice' campaigners, who now find their arguments and rhetoric easily appropriated and redeployed by their opponents. A sign of our post-truth times is that two ideologically opposed US Senators-Republican Josh Hawley and Democrat Amy Klobuchar-have recently published well-considered books that decry the unfair practices of social media giants, who coax people into giving up their data, which the giants then sell to turn a profit, while not paying taxes and even claiming to provide a public good.
In what follows, I consider a central feature of this complex of issues that define the post-truth condition: the status of the individual. It too is indeterminate, especially in the absence of generally agreed rules of the game. For example, the technocratic policy of mass vaccination meets resistance because people who regard themselves as 'individuals' don't appreciate being treated as if they were well-bounded atoms, whose potential motions can be preempted by state fiat. In other words, they reject the implied Epicurean metaphysics of the individual. (This is also what Mill rejected in Bentham.) The alternative view, which envisages the individual as more like a geometric point that projects a field, or a dynamic vector space, is closer to the post-truth individual. In classical terms, it accords with the Stoic's more protean self-understanding that associates one's freedom with the capacity to think, act and even identify well beyond one's variable material circumstances; hence Stoicism is often presented as the source of 'universalism' as an attitude toward the world, notably in Kant (Fuller 2021a).
Nevertheless, it is easy to imagine competing universalisms, given the finitude of any starting point-however abstractly defined. Without some agreed kriterion to settle the conflicts that are bound to arise from these alternative projections, chaos will ensue-which brings us to the post-truth condition. We can understand this problem historically in terms of the evaporation of the nation-state as the locus of political action since the end of the Cold War. For the past quarter millennium, the great ideological struggles of the Westernized world-defined as 'Left versus Right' since the French Revolution-have focused on taking control of the apparatus of the state by increasingly procedural means, notably elections and other formalized modes of succession. In retrospect, 1648's Peace of Westphalia may be compared in the history of politics to 1687's Principia Mathematica in the history of science. But even as far back as Aquinas, the 'state' had been portrayed as the ultimate stabilizer, the entity that defines the 'state of play' (status quo), based on a domain with 'rules of the game', which a monarch, parliament or judiciary might then referee in the spirit of kriterion (Kantorowicz 1957).
All of that is disappearing. The competing universalisms replacing it lack the agreed terms of engagement previously associated with the state. I have discussed this as part of a 90-degree axial rotation in politics, resulting in 'Up versus Down' as the relevant ideological poles, where 'Up' means that science and technology might (and should) easily change how and where we live and 'Down' means that our fate is (and should be) tied to the limits of our Homo sapiens bodies, in which case science and technology potentially threaten our ability to live 'sustainably' (Fuller and Lipinska 2014, ch. 1). Both poles resonate much more strongly among the younger generation who see themselves as 'political' than the old 'Left versus Right' dichotomy does. Moreover, this new politics is marked by both low voter turnout and more heterodox forms of political expression, ranging from cyber-hacking to civil (and not so civil) disobedience.
This article proceeds by taking what I call a 'naturalistic' look at democracy, one that turns on the changing nature of the individual over time and its implications for the terms on which democracy might be realized. From this standpoint, the individual has been gradually 'de-naturalized', eventuating in the indeterminate figure who thrives in the post-truth condition. In the first instance, these changes have been to do with extended life expectancy, which provides for a greater sense of intergenerational difference, resulting in a long-term shift from judging individuals based on 'character' to 'track record': from intuition to induction, epistemologically speaking. Evolutionary biology adds another dimension, which is exemplified by the late nineteenth century debate over the role of the individual in the deep history of life: Is individuality the mark of an evolutionarily mature or immature species? Democracy is implicated in both answers-the former liberal, the latter socialist. This disagreement had been foreshadowed in the early history of utilitarianism, in which Mill stood for the liberal and Bentham the socialist. Behind their difference lay Kant's divided settlement between Stoic and Epicurean conceptions of the individual. In the conclusion, I focus on the Stoic conception, which is common to Popper's stress on the 'reversibility' of democratic decisions, Rawls' 'veil of ignorance' as the basis for deriving the principles of justice-as well as the post-truth understanding of the individual as both radically self-determining and radically open to persuasion.

The Life Cycle: The Naturalistic Nub of Democracy
The interwar years of the twentieth century-that is, between the First and Second World Wars-witnessed a very active discussion about what Max Weber's understudy, Karl Mannheim, called the 'problem of generations', which became a cornerstone of his sociology of knowledge. In the wake of Germany's humiliating defeat, the question arose as to how the past should inform the future, given that many younger people had given up their lives in a conflict that had been caused by the incompetence of older people. The idea of 'learning from history' loomed large, albeit inflected in various, often contradictory ways, signified by the rise of 'nostalgia' as both a popular and analytic concept. Moreover, the concerns raised so vividly during these years have periodically recurred to the present day, typically after some major war or catastrophe, once one starts thinking counterfactually about it-that is, it need not have happened this way. However, I wish to focus here on their implications for democracy in the post-truth condition. The problem of generations is a clearly naturalistic one. It relates directly to extended longevity, which allows a longer period for each individual to pass through each phase of the life process. This extension, in turn, allows for the development of clearer lifestyle differences between the phases, which can itself become a source of intergenerational friction. For example, as people have remained children longer, children have come to be treated as having their own interests and issues that are not reducible to their being immature adults, or even adults in waiting. Thus, 'adolescence' undergoes a telling metamorphosis in the twentieth century: from simply the process of turning into an adult (as the etymology would suggest) to, in the hands of Erik Erikson, a relatively autonomous state of being in the world.
But perhaps more importantly for the post-truth condition, the extended longevity of humans enables the emergence of a clearer sense of track record for those who occupy the adult roles. What makes the post-truth condition distinctive in this regard is the capacity of rhetoric to turn what might otherwise be seen as a reasonable track record into one that demands a break with the past. After all, Brexit and Trump were not the products of imminent war or catastrophe-indeed, not even of one in living memory, as in the rise of Hitler: They were the outcomes of normal democratic processes in normal times.
Contrast Plato, who notoriously claimed that democracies enabled the young to revolt against the old. He seems to have been responding to some recent changes in Athenian law which allowed both fathers to disown sons and sons to declare fathers incompetent (Reinhold 1970). These moves did indeed democratize-as well as destabilize-the social and political order. No doubt, the relative closeness in age between generational members combined with relatively short life expectancy contributed to the instability. In any case, the ancient Greeks did not countenance 'track record' in the sense that has become a political football between generations since the end of the First World War. Judgements of competence were typically based on relatively short track records that leaned heavily on issues of a person's perceived 'character', which in turn justified either sustaining the person in post or removing them from office. Such an epistemic ecology gave pride of place to Aristotle's epagoge, a mental capacity that allegedly enables people to intuit a universal from only a few particulars. Nowadays we would associate such a mental orientation, which requires that one make 'good enough' decisions on the spot based on scant evidence, with what Kahneman (2011) calls 'fast' thinking. He regards this orientation as our default evolutionary cognitive position, which gets disciplined-perhaps too disciplined-over time, as we linger over decisions, collecting more evidence and pooling it into a common knowledge base. It is from such 'slow' thinking that modern ideas of induction and track records come into play, in both science and politics. Of course, Plato believed that those truly fit to rule-the philosopher-kings-should be subject to protracted training before being set loose on the polis. But his curriculum was designed to immunize the mind of the aspiring ruler from the sway of popular passions.
In short, it too was about building character-not 'competence' or 'expertise' in the modern sense, which implies facility with one or more empirical knowledge bases.
An early formal recognition of extended human longevity came with Bismarck's establishment of 70 as the state retirement pension age, which was later reduced to the talismanic 65, copied by welfare states throughout the world. The political calculation was that enough-but not too many-people would reach that age to make the prospect attractive to the electorate as a basis for taxing their income during their working lives. (The so-called 'fiscal crisis of the welfare state' of the past half-century reflects still greater human longevity, but now combined with dropping birth rates.) But long before Bismarck, students of animal development had observed the relatively slow maturation of H. sapiens vis-a-vis other species ('neoteny'). Much normative significance has been attached to this fact since the Enlightenment (Bjorklund 2007, ch. 8). On the one hand, parents came to be held responsible for the education of their children (Locke); on the other hand, established authorities were accused of protracted paternalism (Kant). Common to these seemingly polar attitudes is that each new generation needs to cultivate its own experience, establish its own track record and not simply assume that its experience will be like that of those who lived before, however wise they might have been.
Friedrich Hayek, hardly known for his defense of political democracy, provocatively tried to resolve the problem that arises from the personal character of the acquisition and obsolescence of experience by proposing that a person should vote only once in life-namely, at the midpoint of life expectancy, when one's experience and stake in society are optimally balanced. In this way, the future would be informed yet not overburdened by the past (Hayek 1978, chs. 7-8). This concern with the acquisition and obsolescence of experience was the crucible in which the modern concept of the individual was forged. It wasn't the rigidly self-interested hominid body that emerges with Jeremy Bentham's Neo-Epicurean version of utilitarianism-the 'rational egoist' that John Stuart Mill critiqued early in his career as Homo oeconomicus. Rather, it was an individual whose sense of physical boundaries and self-worth emerged against the backdrop of changing social relations, which is what both Hegel and Hayek found attractive in that other great Enlightenment figure, Adam Smith. In this context, the 'market' is an extended metaphor for the fluidity of this environment, in which prices simply capture snapshots of an ongoing process (Fuller 2020a, ch. 4). I shall return to this idea in the Conclusion when considering the conception of the individual that is appropriate to the post-truth condition.

The Individual in Evolutionary Perspective and Its Implications for Democracy
The final quarter of the nineteenth century witnessed a very lively debate over the criteria for being an organism that attracted interest across the nascent biological and social sciences. This was roughly the period between the widespread public acceptance of Darwin's and Spencer's theories of evolution and the scientific rediscovery of Mendel's work on the particulate nature of genetic transmission. By that time, it had become commonplace to think of an organism as a physically distinct individual, a normal version of which would count as a specimen of a species. However, many of the leading evolutionists of the day-including Thomas Henry Huxley and August Friedrich Weismann-had doubts. They suggested that perhaps humans were among the very few-if not the only-species whose specimen organism is an individual. In effect, to study an individual ant as one would an individual human would be to commit the fallacy of division, since all of an ant's species-based properties are realized only in concert with other ants in one continuous act of collective self-reproduction. The 'colony' that emerges from this collective effort is the true organism that is comparable to the individual human. In that sense, the ant colony is a physically distributed specimen. It would follow that each ant is akin to a cell or an organ in the individual human, which means that stricto sensu individual ants do not 'die' but are simply replaced, just as the human body regenerates its cells and organs throughout the course of its life.
To be sure, Huxley, Weismann and others who participated in this debate remained open-minded about the extent to which human exceptionalism in some normatively deep sense was truly implicated (Minot 1884). In what follows, my presentation of Huxley and Weismann will strategically highlight their differences in both substance and sensibility. Both detected a direction of travel in the evolutionary process (aka 'progress'), but they characterized it very differently, with interesting political overtones that bear on how we think about democracy in the post-truth condition. As we shall see, their different visions of human evolution turn on the ontological standing of the individual.
For his part, Huxley concluded that a more highly evolved species is one whose members are more highly developed as individuals. A mark of such individuality is that the liminal moments in the passage of life-what humans signify as 'birth' and 'death'-come to acquire a greater significance than simple reproduction and replacement. Behind this interpretation was Huxley's long-standing interest in finding a naturalistic basis for the seemingly unique-and in his day, increasingly vexed-role played by Christianity in the history of science. The individuality of Jesus as the human incarnation of the divine loomed large in Huxley's mind, since a secularized version of that image had come to be applied to 'genius' scientists (especially Newton) and other heroic figures who effectively replaced the Christian saints and martyrs. (Auguste Comte's Positivist calendar comes to mind.) But of course, we should not forget the larger cultural association of political freedom and individual self-expression, the cornerstone of what Huxley's contemporary, John Stuart Mill, was promoting as 'liberalism'. Indeed, Huxley's naturalistic spin propped up the high imperialist strategy of 'developing' the more 'backward' peoples to be fit for eventual self-rule. Rudyard Kipling poeticized it as 'The White Man's Burden', but Huxley saw it more prosaically as the facilitation of human evolution. This helps to explain Mill's various frustrations with South Asians in his career as an East India Company administrator: They came from someone who was not deeply racist but rather believed that the sooner the East India Company concluded its operations, the better for all concerned. A somewhat more charitable perspective would say that from this standpoint, references to both working classes at home and indigenous peoples abroad as 'masses' underscored this lack of individuality as the main outward sign of their 'backwardness'. 
For his part, Weismann came at the individual from the opposite direction. Unlike Huxley, he did not start from the religious and scientific prehistory to evolutionary theory. Rather, he confined his thinking to evolutionary theory itself, to which he then gave a distinctive metaphysical spin. In this respect, Weismann was probably a more kindred spirit to Darwin than Huxley was, in that he abandoned anthropocentrism altogether. Weismann introduced the distinction between the 'germ' and 'soma' levels of life, the former being immortal and the latter mortal. For him it marked a categorical difference between 'life as such' and 'lives'. Nowadays biologists understand Weismann as having presaged the genotype/phenotype distinction, but perhaps closer to his original metaphysical intent is Richard Dawkins' notoriously counter-intuitive idea that individual organisms (cf. soma) are simply vehicles for recycling and reproducing 'selfish genes' (cf. germ) that persist in perpetuity. Against this backdrop, an evolutionarily more mature species is a more functionally differentiated one. The parts of such a species are more easily reproduced and replaced without disturbing the overall performance of the whole. From that standpoint, the high plasticity and even apparent uniqueness of at least some of its individual members mark H. sapiens as a relatively immature species. It seemed to follow for Weismann that over evolutionary time, humans will resolve their individuality by becoming integral parts of a larger whole. In short, like it or not, ants are our species' future. Lurking here may be the idea of the anti-liberal 'corporate state' that in the twentieth century came to be associated with Communism and Fascism.
I have argued that Max Weber appeared to be attuned to this disagreement, which helps to explain the several unresolved issues in his conception of society and its scientific study (Fuller 2020b). On the one hand, Weber's default liberal sensibility made him a natural ally of Huxley, as reflected in his methodological interest in developing 'ideal types' of a kind of complex individual who serves as a paradigm for a style of behavior that can be found to varying degrees across members of a given society. Thus, the ideal type of, say, the Protestant capitalist is at once a specimen and a role model of H. sapiens over a certain expanse of space and time. It is studied largely by understanding its modus operandi, which is not reducible to whatever success or failure the ideal type might endure in a larger changing environment. This is not to deny that, say, the Protestant capitalist has flourished under many conditions but to think of such a person as 'functional' is to shortchange the ways in which that ideal type's exemplars have transformed those conditions to their advantage. On the other hand, what Weber identified as the 'rationalization' of modern societies-typified by the rise of bureaucracy-comes closer to Weismann's sense of evolutionary progress. In effect, the unique sort of complexity evidenced in the behavior of distinctive individuals is a temporary phase that is eventually channeled and routinized, as when a religion's founding prophetic insight yields to a clerical mode of reproduction and expansion. More generally, if a society is to extend its membership and reach, the horizons of each member must correspondingly narrow. The outward sign of this process is explicit rule-following and role-playing, the performance standards of which become increasingly detached from the specific individuals, who in turn come to be seen as 'functionally equivalent'.
This sense of 'evolutionary progress' is what Weber dubbed the 'iron cage', which his early US champion Talcott Parsons-and Parsons' postwar German student Niklas Luhmann-received more enthusiastically than Weber probably intended.

The Individual as the Site of Contestation in Utilitarianism
The difference between Huxley and Weismann over the biological individual, which Weber then internalized in his sociology, harks back to a founding disagreement in utilitarianism, which has had major repercussions for what counts as a 'democratic' approach to politics-a major bone of contention in the post-truth condition. All students of utilitarianism know that John Stuart Mill objected to Jeremy Bentham's apparent endorsement of majoritarian rule as a consequence of the 'one person, one vote' principle, which Bentham took to be the cornerstone of democracy. But behind this disagreement is a deeper one over the nature of the individual in a democracy. For Mill, the individual was ultimately about self-expression, whereas for Bentham it was about self-interest. Mill wanted a 'democracy' in which people represented themselves publicly, not in secret ballots. Only people capable of doing so were worthy of the franchise. (The fact that Mill believed that men and women were equally up to the task sometimes obscures this more fundamental point.) To be sure, Mill believed that his society had only begun to implement the requisite reforms to enable this level of individual self-expression. On Liberty provides a philosophical justification for this particular brand of 'liberalism', which was criticized in its day for being 'antidemocratic' because of the high-performance standard it set. In effect, Mill's liberalism decomposed the concept of 'elite', which normally means both 'small' and 'superior'. He wanted to spread the superior sensibility to a wider portion of the population, resulting in an 'aristocracy of everyone', to paraphrase Thomas Jefferson (Barber 1992). A good way to characterize the difference between Mill's and Bentham's zeal for legislative reform is that Bentham was driven less by enabling individuals to speak than by enabling them to be heard. This distinction is familiar in the law as alternative grounds for the assignment of rights.
Mill treats a right as a matter of self-empowerment and Bentham as a matter of self-protection. Behind this difference lay a deeper metaphysical difference: Where Mill sees hidden potential, Bentham sees revealed strengths and weaknesses. The former operates with a more open sense of the boundaries of the individual than the latter, which perhaps helps explain why Mill preferred open and Bentham secret ballots: Mill was focused on the capacity of individuals to change their minds, while Bentham was concerned with an accurate capture of their current self-representation. Bentham believed that people possessed fixed interests that had to be ascertained without distortion from their elected representatives, whereas Mill believed that people's interests were fluid and hence open to persuasion through deliberation. (This is the context for understanding Bentham's notorious characterization of rights as 'nonsense on stilts'. He distrusted politicians much more than Mill, who even briefly became one.) While neither accepted the brand of rigid class-based politics that would come to be associated with Marxism, Bentham could accept 'class' as an aggregate social classification for analytical purposes, whereas Mill abhorred the category altogether. In metaphorically quantum terms, Bentham was about the 'position' and Mill the 'momentum' of individuals vis-à-vis the 'measurement problem' of establishing social order.
I have cast the above distinction in terms of Kant's divided settlement on the future of metaphysics between the legacies of the Epicurean well-bounded 'atomic individual' (Bentham) and the more indeterminate Stoic 'universal individual' (Mill) in his first and second Critiques, respectively (Fuller 2021a). In the modern philosophy curriculum, the former belongs to epistemology, the latter to ethics. However, in terms of democratic politics, the difference may even be more profound. Mill regarded what he dubbed the 'moral sciences' as the scientific successor to moral philosophy. It should be part of the training of any aspiring legislator, who ideally should be a self-legislator-that is, not a mouthpiece of his or her constituency. It was a humanistic discipline that added statistics to its evidence base, alongside the study of history and customs. The curriculum still pursued at Oxford by future UK parliamentarians-'Philosophy, Politics and Economics'-captures the spirit of this largely normative approach to social science, whereby 'welfare' stands as the scientific correlate to the 'common good'. In contrast, Bentham's self-styled 'science of legislation' was much closer to 'social science' as understood today. It strikes a more explicitly detached pose toward the society it studies and is ideally administered by experts who are not directly accountable to the people but nevertheless are entrusted with serving their interests: in short, 'civil servants'. Whereas Mill's legislators would be most at home deliberating on the floor of the House of Commons, Bentham's legislators would be more comfortable behind the scenes in polling organizations and 'nudge units', greasing the wheels of government.

Conclusion: Revisiting Rawls to Recognize the Post-truth Individual
According to Karl Popper, the 'rational' character of democracy lies not in the representativeness of legislators but in the reversibility of legislative outcomes: It is less about who makes the laws or even what the laws say than the reliability with which both can be replaced, should they prove wanting. One way to think about this idea is that all policies and politicians carry a 'sunset clause', which shifts the burden of proof of worth from the opponent to the incumbent (Kouroutakis 2017). In effect, democracy contains an automatic 'reset' mechanism that invites those outside of government to challenge those inside it on a level playing field. Thus, someone seen to have had a brilliant track record over many years may be booted out of office because the field of play at election time places the incumbent at a strategic disadvantage. In this respect, the periodic timing of elections, combined with events that are often beyond the control of politicians, can conspire to produce a 'perfect storm'. Consider the sorts of events, seemingly 'minor' in retrospect, that have triggered major political change: a sudden economic downturn, diplomatic row, act of violence or rhetorical misstep-to which might now be added climate catastrophe and health emergency. Nevertheless, the outcome of such oblique situations may effectively change the rules of the game, resulting in a new party in government, with a new legislative agenda that casts its predecessor in a different, often negative light. All of this is comparable to Popper's 'falsifiability' ethic of scientific theory choice, whereby the election functions as the 'crucial experiment'. Note that in both the political and scientific cases, there is a preoccupation with 'fairness' to ensure that the right balance of design and accident is at play.
This modus operandi helps to explain Popper's use of the Trotskyite phrase 'permanent revolution' to characterize his difference from Kuhn's theory of scientific change (Fuller 2021b). Whereas Kuhn took a more classically dynastic view of scientific change, which presumes the dominant paradigm's entitlement to determine its own successor, Popper associated the dynamism of science with the more proactive attitude of electoral democracies, which invites periodic changes in the ruling order. Thus, depending on the election's exact timing, an incumbent's track record may appear well or poorly suited to the task ahead. Such is the 'grue'-like character of electoral politics, to recall Goodman's (1955) 'new riddle of induction'. Moreover, this indeterminacy extends to the identities of the voters themselves.
In this regard, Rawls' (1971) 'veil of ignorance', under which one is invited to select the ideal constitution, amounts to a crucial experiment for testing one's sense of 'morphological freedom', to use the phrase favored by transhumanists today (Fuller 2019, chap. 2). Understood this way, Rawls is suggesting that the just society is one in which we could live as any of its members, since the 'veil' in question is cast specifically over our personal identity.
From that standpoint, Rawls' original critics somewhat missed the point. They tended to argue either that the veil of ignorance was psychologically unrealistic (i.e., people could not abstract from their actual self-identity in the way he imagined) or that the conclusions that Rawls drew from it-in effect, a justification for the social democratic welfare state-were needlessly risk-averse. Of the two, the latter critics were closer to the mark. Rawls' veil of ignorance is perfectly fine as the frame of mind from which to derive the principles of a just social order, but not because it allows Rawls to elicit exactly the principles he wants. Indeed, Hare (1973) was right that it doesn't. Rather, the veil of ignorance simulates the changes in fortune that the average person is likely to undergo in a modern liberal capitalist society. It is unrealistic for a member of such a society to imagine that once the veil of ignorance is lifted, they will discover some fixed position that they will occupy for the rest of their lives. Assuming that the veil is meant to be an updated version of the social contract from early modern philosophy, its lifting simply reveals the respective starting points of those who have agreed to play the game, the rules of which they have just decided. Under those circumstances, the rules should prescribe maximum latitude, or Spielraum. Of course, one might wish to describe this field of play as 'insured' or as a 'safety net'. Nevertheless, it is separate from any larger concerns about 'social welfare' or 'social justice'. It is simply about ensuring that people are not condemned to a lifetime of failure on the basis of one mistake, including the mistake of their birth. But it does not mean that they are guaranteed success, only that they may continue until they are deemed 'unredeemable', the definition of which would then be the focus of serious policy discussions. Bankruptcy law provides a benchmark for these discussions.
The post-truth condition makes sense in terms of this understanding of Rawls' veil of ignorance (Fuller 2020a, ch. 11). The veil upholds people's sense of 'morphological freedom', at least to a limited extent. Interestingly, the classical root of this sensibility-the Stoics' indeterminate sense of individual identity-also lay behind John Locke's theory of personhood (Hill and Nidumolu 2021). Locke's impact on the emerging 'modern European' (i.e., post-Reformation) legal framework was to define people's 'entitlements' in terms of their divine rather than their natural birthright, which, even stripped of the theology, amounted to stipulating that a 'person' is not a fixed social position but a dynamic locus of action.
Thus, we arrive at the classical liberal conundrum of how to prescribe 'rights' in a society of such persons to enable 'jointly realizable maximum liberty'. In the absence of prior binding legislation, this indeterminately expansive individual calls for a conception of rights based on liability rather than property: It makes freedom the default position, subject to the payment of damages to any harmed parties, however a court defines them in a case (Calabresi and Melamed 1972).
Recent events put this point in perspective. The reason that firms such as Cambridge Analytica have been so successful in predicting electoral outcomes is that voters 'float' more than ever before. By gathering and processing personal data at such unprecedented levels, these firms can discover the means to 'swing' voters towards a desired outcome (Fuller 2018, chap. 1). But it does not follow that the voters somehow succumbed to diabolically deceptive marketing campaigns. On the contrary, it is precisely due to their advanced education and access to information that voters increasingly confound 'expert' expectations. All that follows is that marketing campaigns need to do more research to make more sophisticated appeals to target audiences who have themselves become more informed, not least about the means being used to persuade them. As Bernays (1928), the father of public relations, observed nearly a century ago, this is what democracy looks like once the populace is both smartened and empowered-though he underplayed its potentially volatile consequences. Put philosophically, to be free is to be sufficiently open-minded to become, through a series of choices, someone else. It is this prospect, consonant with Locke's Stoic-inspired conception of personal identity, for which Rawls' veil of ignorance underwrites the insurance policy. More prosaically, it characterizes the game of 'cat and mouse' unleashed by Bernays and now exploited by data analytics firms as post-truth in action. In any case, the players are best placed to determine whether they are winning or losing, which perhaps explains why-like the original heretics and Protestants-they are increasingly inclined to live and die by their beliefs.