Abstract
The evidence that we get from peer disagreement is especially problematic from a Bayesian point of view since the belief revision caused by a piece of such evidence cannot be modelled along the lines of Bayesian conditionalisation. This paper explains how exactly this problem arises, what features of peer disagreements are responsible for it, and what lessons should be drawn for both the analysis of peer disagreements and Bayesian conditionalisation as a model of evidence acquisition. In particular, it is pointed out that the same characteristic of evidence from disagreement that explains the problems with Bayesian conditionalisation also suggests an interpretation of suspension of belief in terms of imprecise probabilities.
1 Introduction
Let us assume that Richard’s credence towards the proposition that Pete will drink more than three beers tonight is
Richard and Siena are epistemic peers concerning Pete’s drinking behaviour: they are equally competent and knowledgeable with regard to predicting it.[1] So-called conciliatory views hold that they should revise their original credences at least a bit as soon as they become aware of their peer disagreement. An important special case of a conciliatory view is the Equal Weight View (henceforth EW), which holds that they should give the credences of their respective epistemic peer the same weight that they give their own.
Conciliatory views are often said to apply, not only when it comes to predicting specific aspects of human behaviour, but also in those peer disagreement cases in which we seem unable to find out who is right. Thus, if Richard and Siena are epistemic peers concerning 19th-century Russian literature and Richard thinks that Gogol is a greater writer than Dostoevsky, while Siena does not, they should revise their respective credences; if they are epistemic peers concerning political matters and Siena thinks that their country should impose sanctions against Iran, while Richard does not, they should revise their respective credences; and if they are epistemic peers concerning ethics and Richard thinks that average utilitarianism is the best moral theory, while Siena does not, they should revise their respective credences as well. Fortunately, we need not care here whether conciliatory views cover those kinds of peer disagreements, too; my concern in this paper is their reconciliation with Bayesianism.
In more detail: In Section 2, I will point out that the standard interpretation of conciliatory views is incompatible with Bayesian conditionalisation because the order in which one acquires new evidence matters for the former but not for the latter. In Section 3, I will argue that a specific feature of evidence from disagreement, its so-called retrospective aspect, suggests that a particular order of evidence acquisition is preferable in many cases, and will indicate which cases are exceptions. Finally, in Section 4, I will present an alternative interpretation of EW that is more in line with the retrospective aspect, and will explore this interpretation’s consequences for updating beliefs in a broadly Bayesian way.
In order to tackle these issues, using credence talk is helpful but ultimately inessential. It is helpful insofar as it simplifies the presentation a lot; it is inessential insofar as all that follows could be reformulated in terms of just three doxastic attitudes – belief, disbelief, and suspension of belief – instead of continuum many. We could, for example, say that conciliatory views require epistemic peers to give at least some weight to the others’ beliefs, thereby leaving it open whether a specific disagreement would call for a change of doxastic attitude. I will take up this point towards the end of the paper.
It also simplifies matters if we focus not on conciliatory views in general but on EW. The results in Sections 2 and 3, at least, could be reformulated so that they apply to all conciliatory views.
2 Splitting the Difference and Bayesian Conditionalisation
According to the standard interpretation of EW, ‘give the credences of your epistemic peers the same weight that you give your own’ just means that you should adopt a credence that equals the arithmetic mean of your own and your epistemic peers’ original credences. This is called splitting the difference. Let us assume in this and the following section that splitting the difference is the correct interpretation of EW. So, if
If we apply splitting the difference to the example with which we began, both Richard and Siena should adopt credence
More generally, conciliatory views require Richard to adopt
and Siena to adopt
Now, assume that Richard and Siena receive new evidence e for assessing h. Then, according to Bayesian conditionalisation:
where the subscript
Example 1.
Assume that
In (5) and (6), an instance of the conditionalisation rule, namely (4), has been applied, and in (7), we have used splitting the difference, as in (1).[2] If we proceed the other way around, that is, if we first split the difference twice and apply (4) only afterwards, we get
Since
Example 1 shows that, given splitting the difference, it matters whether one first gets higher-order evidence from disagreement (D) or a piece of non-disagreement evidence (e). It does make a difference whether two peers first acquire, separately from each other, some additional piece of first-order evidence and then come to know what the other one thinks of the totality of the evidence, or whether they first come to know what the other one thinks of the totality of the evidence minus that additional piece and then acquire, separately from each other, the additional piece of evidence. In other words, given splitting the difference, the order of evidence acquisition matters for what we should believe.
This contradicts the Bayesian model of belief revision, since within this model, it does not matter which piece of evidence one gets first – and, some would add, this is an intuitively correct understanding of how we process evidence. Take, as an example, a criminal case. It appears to be irrelevant whether the detectives first interview witness A and then witness B, or vice versa.
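To make the contrast concrete, here is a minimal Python sketch with purely hypothetical numbers (not those of Example 1). It assumes one natural way of modelling ‘exchanging views first’, namely averaging the relevant unconditional credences before conditionalising; on that assumption, pure conditionalisation is insensitive to the order in which two pieces of evidence arrive, while combining conditionalisation with splitting the difference is not.

```python
from itertools import product

# A toy probability space: worlds are truth-value assignments to (h, e1, e2).
# All numbers below are hypothetical and chosen purely for illustration.
worlds = list(product([True, False], repeat=3))
prob = dict(zip(worlds, [0.20, 0.10, 0.05, 0.15, 0.10, 0.05, 0.15, 0.20]))

def condition(pr, event):
    """Bayesian conditionalisation: renormalise on the worlds where `event` holds."""
    z = sum(p for w, p in pr.items() if event(w))
    return {w: (p / z if event(w) else 0.0) for w, p in pr.items()}

def credence(pr, event):
    return sum(p for w, p in pr.items() if event(w))

h, e1, e2 = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])

# Pure conditionalisation is order-invariant: e1 then e2 equals e2 then e1.
a = credence(condition(condition(prob, e1), e2), h)
b = credence(condition(condition(prob, e2), e1), h)
assert abs(a - b) < 1e-9

# Splitting the difference, by contrast, is order-sensitive.  Hypothetical
# peer credences; 'exchanging views first' is modelled here as averaging the
# unconditional credences before conditionalising.
richard = {"h_and_e": 0.4, "e": 0.5}            # so C_R(h | e) = 0.8
siena   = {"h_and_e": 0.1, "e": 0.2}            # so C_S(h | e) = 0.5

views_last  = (richard["h_and_e"] / richard["e"] + siena["h_and_e"] / siena["e"]) / 2
views_first = ((richard["h_and_e"] + siena["h_and_e"]) / 2) / ((richard["e"] + siena["e"]) / 2)

print(views_last, views_first)   # 0.65 vs. roughly 0.714: the order matters
```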
Our problem is twofold. First, since splitting the difference and the Bayesian framework are not compatible, the question is which has to go.[4] (Surprisingly, my answer will be: both.)
Second, since we get different results for different orders of evidence acquisition, the question arises which order of evidence acquisition is better, epistemically speaking. Assume, for example, that Richard and Siena are detectives who investigate a murder case independently of each other. Let h be the proposition that the butler is the murderer, and e the proposition that the gardener saw the butler in the billiard room just before the crime happened. Should Richard and Siena exchange views first, that is, first discuss matters and afterwards interview the gardener separately in order to find out whether e holds, or should they exchange views last, that is, first incorporate the evidence that they get from talking to the gardener, and then discuss matters?[5] Put differently: If we assume again that
3 The Optimal Order of Evidence Acquisition
The information one gets from becoming aware of a peer disagreement is not simply an additional piece of (first- or higher-order) evidence; rather, it has a retrospective aspect.[7] This means that, unlike other first- or higher-order evidence, the evidence from disagreement calls into question the way we have evaluated our original evidence.
Take, as an example, a history professor who seems to recall that Julius Caesar died in 42 BC. Before mentioning the date in his lecture, however, he checks it in an encyclopedia and reads, to his utmost astonishment, that Caesar died in 44 BC. As the new piece of information is presumably more reliable than his memory evidence, he should revise his original belief. This does not imply, however, that it was irrational for him to believe that Julius Caesar died in 42 BC before he looked up the date; on the contrary, we can assume that our history professor is usually quite good at remembering historical dates and thus justified in trusting his memory, as long as there are no defeaters.
We can make an analogous observation for acquiring new pieces of higher-order evidence: If the history professor learns that in the recent past he has been far less reliable at remembering historical dates than he once was, he should stop trusting his original belief that Julius Caesar died in 42 BC. But again, this does not imply that it was irrational for him to have this belief before he learned about the recent failures of his memory; on the contrary, his then-undefeated evidence supported the view that his memory was to be trusted.
Things are different with evidence from peer disagreement. This evidence raises the question whether one has evaluated the original evidence correctly. The fact that, on the basis of the same first-order evidence, one’s epistemic peer comes to a different conclusion indicates, according to conciliatory views, that it has never been rational to evaluate this evidence as one did. If our history professor arrives, after careful examination of all relevant data, at the view that Julius Caesar, had he continued to live for another 10 or 20 years, would have ruined the Roman Empire, while a colleague, after carefully analysing the same data, defends the opposite conclusion, both should learn from their disagreement, according to conciliatory views, that they overestimated the conclusiveness of their data and should not have drawn their original conclusions in the first place. Unlike first-order evidence and ordinary higher-order evidence, evidence from peer disagreement calls into question the reasoning on whose basis one formed one’s relevant beliefs before one acquired the evidence from peer disagreement. This is the retrospective aspect.
Note that I use the term ‘retrospective aspect’ in an internalist way.[8] As the history professor who has recently been unreliable at remembering historical dates had, before learning about these failures, no evidence for distrusting his memory, he was, before learning it, internalistically justified in believing that Julius Caesar died in 42 BC. But he had at no time an externalist justification for this belief because his cognitive capacities had not been working well all along.[9] Understood externalistically, a retrospective aspect is a common feature of higher-order evidence, insofar as higher-order evidence often suggests (perhaps misleadingly) that an external justification was missing all the time. Understood internalistically, a retrospective aspect is a specific feature of evidence from disagreement, insofar as this evidence tells one that one has misevaluated one’s first-order evidence, or has overestimated its conclusiveness, and hence was not even internalistically justified in believing what one originally believed (as in the case of the history professor who thinks that Julius Caesar, had he lived longer, would have ruined the Roman Empire). In this internalist understanding, the retrospective aspect marks a categorical difference between non-disagreement evidence and evidence from disagreement.
This categorical difference explains why Bayesian conditionalisation does not work for the latter: the newly gained piece of evidence from disagreement cannot just be added to the old evidence; rather, it calls for a reassessment of that evidence. Bayesians can deal with such reassessments, not by conditionalisation, but by calling into question the agents’ prior probabilities (see Rosenkranz and Schulz 2015, Section 7). This means, however, that they have to treat evidence from disagreement quite unlike other kinds of evidence. Moreover, a revision of prior probabilities raises the questions of how exactly the revision should be conducted and whether it is, like conditionalisation, a deterministic process, the alternative being that there are several rationally permissible ways of revising the prior probabilities (see Rosenkranz and Schulz 2015, Section 8).
Remember that h is the proposition that the butler is the murderer, and e the proposition that the gardener saw the butler in the billiard room just before the crime happened. If Richard and Siena form different credences towards h, or towards h given e, they are already making a mistake: the fact that a peer, evaluating the same evidence, intuitively favours a credence other than the one one intuitively favours oneself implies that the available evidence is not sufficiently conclusive, and that it is not rational to adopt the credence one finds most plausible. One should rather become agnostic about what the right credence is (see Section 4).
Exchanging views may help them to avoid the mistake of overestimating the conclusiveness of their evidence. For if, say, Richard has a specific credence towards h and then notices that Siena has a different credence towards h, he learns that it has never been rational for him to evaluate the evidence as he did, and that he should rather have been agnostic.
But when should Richard and Siena exchange views? If they exchange views before they interview the gardener and come to know whether e, the information they get from a potential disagreement is that at least one of them has misevaluated the original evidence concerning h (not including e). If, on the other hand, they first learn that e and then discuss their views, the information they get from a potential disagreement is that at least one of them has misevaluated the total evidence concerning h (including e). Hence, they do not get exactly the same information from a potential disagreement. This explains the different result that we get from different orders of evidence acquisition.
What is more, this shows that the information we get from exchanging views last is more encompassing: it entails not only how an epistemic peer interprets the original evidence without the new piece that we are about to get, but also how the peer assesses this new piece and how she incorporates it into her view.
Does that mean that Richard and Siena should proceed as in (5)–(7)? Not quite. What complicates matters here is that in (8) and (9), Richard and Siena do not discuss and revise their respective beliefs concerning h, the hypothesis they really care about, but those concerning e and
Example 2.
Let p and q be two probabilistically independent propositions and
If they first calculate their resulting credences and then exchange views, we have
If they first exchange views and then calculate, we have
The problem is that it is certainly wrong for Richard and Siena to have credences
Arguably, results of calculations or inferences do not qualify as new evidence. But even if they do, they are certainly unlike other pieces of evidence insofar as they are entailed by the original evidence. As a consequence, we only get more information from exchanging views after doing the calculations or inferences if there is disagreement about how to carry them out. We can, however, plausibly assume that Richard and Siena are not split over such things. Therefore, my argument in favour of exchanging views last does not apply to Example 2.
Quite to the contrary: Example 2 shows that Richard and Siena should first exchange views about p and q and then calculate their credences for
In general, if we apply conciliatory views to results of calculations or inferences, and not only to initial values or premises, we take the risk of increasing whatever mistakes have so far been made in forming beliefs that are in accordance with the available evidence. The reason is that we may add some internal incoherence to the initial errors: If the parties to a disagreement draw inferences from their original controversial credences, they usually commit consequential errors of different extents. Splitting the difference after drawing such inferences then leads, for each party, to an incongruity between their revised credences in the premises and their revised credences in the conclusion. This is also what goes wrong in (11), according to splitting the difference: Richard’s and Siena’s revised credences in p and q do not match their revised credences in
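The mismatch can be illustrated with a small sketch, using hypothetical credences and assuming, purely for illustration, that the calculated proposition is the conjunction of p and q:

```python
# Hypothetical credences in two probabilistically independent propositions p and q.
richard = {"p": 0.9, "q": 0.9}
siena   = {"p": 0.5, "q": 0.5}

avg = lambda x, y: (x + y) / 2

# Exchange views last: each peer first calculates C(p and q) = C(p) * C(q),
# then the peers split the difference on the calculated results.
conj_views_last = avg(richard["p"] * richard["q"], siena["p"] * siena["q"])   # 0.53

# Exchange views first: split the difference on p and on q, then calculate.
p_split = avg(richard["p"], siena["p"])    # 0.7
q_split = avg(richard["q"], siena["q"])    # 0.7
conj_views_first = p_split * q_split       # 0.49

print(conj_views_last, conj_views_first)
# Splitting after calculating leaves each peer internally incoherent: the revised
# credence in the conjunction (0.53) no longer equals the product of the revised
# credences in p and q (0.49), despite the assumed independence.
```

Exchanging views first avoids the mismatch, because the (uncontroversial) calculation is then carried out on credences that the peers already share.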
Let us go back to Example 1 and to the question of whether Richard and Siena should proceed as in (5)–(7). Given that Richard and Siena agree on how the conditionalisation in (10) has to be carried out, they do not gain anything from exchanging views after learning that e and updating their beliefs by conditionalisation. If, on the other hand, they exchange views first, they avoid the risk of amplifying potential errors to different degrees and thereby becoming internally incoherent. Hence, if we stay within the limits of splitting the difference, Richard and Siena should revise their beliefs as suggested by (8)–(10).
This holds under the assumption that they discuss their beliefs concerning e and
Example 3.
Assume that Richard thinks that the butler probably is the murderer, and that the probability would be even higher if he was seen in the billiard room at an inopportune moment, while Siena thinks it unlikely that the butler is the murderer and holds that it makes no difference whether or not he was seen in the billiard room at any time. In numbers:
If they first exchange views, however, they both form, according to splitting the difference, the following credence:
This would obviously change their credences towards h given e. For example, Siena, who thinks that e does not have any effect on h, should now revise her original credence
Again, we get different results for exchanging views first and exchanging views last:
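The following sketch illustrates this with hypothetical credences that fit the verbal description of Example 3; the way Richard adjusts his conditional credence after exchanging views first is likewise a hypothetical assumption, introduced only to make the two orders comparable.

```python
# Hypothetical credences fitting the verbal description above.
C_R_h, C_R_h_given_e = 0.7, 0.9    # Richard: butler probably guilty; e would raise this
C_S_h, C_S_h_given_e = 0.2, 0.2    # Siena: unlikely, and e makes no difference

avg = lambda x, y: (x + y) / 2

# Exchange views last: each conditionalises on e, then they split the difference.
views_last = avg(C_R_h_given_e, C_S_h_given_e)            # 0.55

# Exchange views first: they split the difference on h and must then adjust their
# conditional credences.  Siena, for whom e is irrelevant, moves C(h | e) to the
# new value of C(h); for Richard we ASSUME, purely for illustration, that he keeps
# the additive boost that e originally gave him.
h_split = avg(C_R_h, C_S_h)                               # 0.45
siena_after_e   = h_split                                 # 0.45
richard_after_e = h_split + (C_R_h_given_e - C_R_h)       # 0.65

print(views_last, richard_after_e, siena_after_e)
# 0.55 versus 0.65 / 0.45: exchanging views first and last yield different results.
```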
To sum up: If (i) epistemic peers get two additional pieces of evidence for or against some hypothesis h, (ii) one of those pieces consists in learning that they have different credences concerning h, while the other is just a normal piece of evidence, and (iii) they are able to choose the order by which they take in the evidence, then the epistemic peers are well advised to exchange views last, that is, to consider the normal piece of evidence first and the evidence from disagreement afterwards. If (i) and (iii) hold and if (ii′) the other piece is not an ordinary piece of evidence but the result of a calculation or inference whose execution is uncontroversial among the peers (and known to be so), they are well advised to exchange views first.
Throughout this and the foregoing section, we presupposed that splitting the difference is the correct interpretation of conciliatory views. In the next section, we question this presupposition.
4 Disagreement-Induced Suspension of Belief and Imprecise Probabilities
As pointed out at the beginning of Section 3, the information we get from peer disagreement is that at least one peer has misinterpreted the evidence. This does not mean that all credences that deviate from the mean value are wrong; it only means that we do not know which, if any, of the credences is correct. Hence, a more natural interpretation of EW than splitting the difference is what I will call spreading the difference: we should become agnostic about which credence is correct, rather than adopt a specific credence that is supposed to reflect our agnosticism.
In a bit more detail, spreading the difference requires the epistemic peers to entirely withhold judgment regarding propositions that are controversial between them; all of the peers’ original credences – and, for plausibility’s sake, all intermediate values as well (but see note 16 below) – should be considered equally good options, between which one should not choose. Hence, the kind of suspension of belief that spreading the difference suggests is of higher-order insofar as it does not identify suspension of belief with a specific point or area in the credence interval
Take our question from the beginning, whether Pete will drink more than three beers tonight. Richard’s credence here is
We can model spreading the difference by using imprecise probabilities (cf. Elkin and Wheeler 2018). Instead of representing an epistemic subject’s belief state by a single probability function C, which maps propositions to real numbers, we can represent it by a (non-empty) set of probability functions, the so-called credal set. In the case of peer disagreement, we use credal sets to model the higher-order kind of suspension of belief that results from spreading the difference; whether the peers’ original credences are given by ordinary probability functions or by credal sets does not matter. For instance, after discussing the matter, Richard and Siena should not form a specific credence about Pete’s thirst for beer tonight; as the totality of their evidence is inconclusive, they should be agnostic, and their correct belief states are then best represented by the closed interval
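A minimal sketch of the contrast between the two interpretations, with hypothetical credences and with credal sets simplified to closed intervals of credence values:

```python
# Splitting vs. spreading the difference, with credal sets represented as
# closed intervals [min, max] of credence values (hypothetical numbers).

def split(*credences):
    """Splitting the difference: adopt the single arithmetic mean."""
    return sum(credences) / len(credences)

def spread(*credences):
    """Spreading the difference: remain agnostic across the whole interval
    spanned by the peers' original credences (including all intermediate values)."""
    return (min(credences), max(credences))

richard, siena = 0.8, 0.3        # hypothetical credences that Pete drinks more than three beers
print(split(richard, siena))     # 0.55       -- one precise compromise credence
print(spread(richard, siena))    # (0.3, 0.8) -- agnosticism about which credence is right
```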
Let us apply this model to the examples that we discussed in Sections 2 and 3. In those sections, my aim was to present counterexamples; for this purpose, it was helpful to use concrete numbers. Now my aim is to disclose deeper interrelations; it is thus best to use variables instead of numbers to increase generality. Let us start with the observation that (5)–(7) lead to a different result than (8)–(10).
Example 1*.
Assume that
For exchanging views first, we get
and from this[17]
In the last expression, ‘1’ is included because it might happen that, for example,
It follows that
Since we can without loss of generality assume that
As argued above, Richard and Siena should first exchange views. So (14), not (17), states the correct credal set. The reason is that the calculation carried out in (15)–(17), which is by itself uncontroversial between Richard and Siena, may increase errors inherent in the initial values. Example 1* shows that this happens iff
Example 2*.
Let p and q be two probabilistically independent propositions and
If they first exchange views and then calculate, we have
It follows that (20) and (21) agree if and only if
Since we can without loss of generality assume that
Again, Richard and Siena should first exchange views. So (21), not (20), states the correct credal set, because the (by itself uncontroversial) calculation that leads to (20) may falsely ignore possibilities that would not have been overlooked if Richard and Siena had disclosed their credences to each other in advance. Example 2* shows that this kind of ignorance happens iff
Example 3*.
To complete our overview of the different cases, assume
If Richard and Siena exchange views first, they both form, according to spreading the difference, the following credence:
As a consequence of (25), they would probably have to change their credences towards h given e in order to keep them plausible. Since we cannot state in general how these credences will be modified, there are no general results for exchanging views first, and thus no general results about the conditions that a, b, c, and d must satisfy in order to make it irrelevant whether e or D is first taken in.
As argued in Section 3, Richard and Siena should exchange their views after each of them has considered the significance of the new piece of evidence e; they should proceed as in (24). Only then do they also learn how the other one assesses this significance. Whether we split or spread the difference does not affect this reasoning.
We need not presuppose credence talk to make spreading the difference work. If belief, disbelief, and (first-order) suspension of belief are the only doxastic attitudes, epistemic peers should take their disagreements as evidence that they should not adopt a specific doxastic attitude, but rather remain undecided between those doxastic attitudes that have originally been held by one of them. By identifying belief with 1, disbelief with 0, and (first-order) suspension of belief with
5 Conclusion
The retrospective aspect of evidence from disagreement explains why it matters whether epistemic peers exchange views first or last, before or after taking in a piece of non-disagreement evidence. It thereby also explains why we cannot use Bayesian conditionalisation for modelling the acquisition of evidence from disagreement, since Bayesian conditionalisation presumes that the order of evidence acquisition is irrelevant.
Moreover, the retrospective aspect indicates two further points. First, epistemic peers should usually discuss their views after incorporating non-disagreement evidence, since they would otherwise lack information about how the others interpret this evidence. Exceptions are cases like Examples 1, 1*, 2, and 2*, in which we have to incorporate the results of calculations or inferences rather than original pieces of evidence, and in which it is uncontroversial among the peers (and known to be so) how to carry out the calculations or inferences. In such cases, epistemic peers avoid the risk of increasing mistakes by exchanging views first.
Second, we should abandon the splitting the difference interpretation of conciliatory views in favour of the spreading the difference interpretation. Given suitable original credences (as in Example 1 and Example 2), spreading the difference lets the problem of the order of evidence acquisition vanish. However, the problem is not dissolved in general, as Example 1* and Example 2* show. In cases in which spreading the difference yields diverging results for distinct orders of evidence acquisition, epistemic peers should proceed as explained earlier and exchange views last in normal cases and first in cases of uncontroversial calculations or inferences.
References
Christensen, D. 2007. “Epistemology of Disagreement: The Good News.” Philosophical Review 116 (2): 187–217. https://doi.org/10.1215/00318108-2006-035.
Elga, A. 2008. Lucky to Be Rational. Unpublished manuscript. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.404.9537&rep=rep1&type=pdf (accessed August 29, 2021).
Elkin, L., and G. Wheeler. 2018. “Resolving Peer Disagreements Through Imprecise Probabilities.” Noûs 52 (2): 260–78. https://doi.org/10.1111/nous.12143.
Jehle, D., and B. Fitelson. 2009. “What is the ‘Equal Weight View’?” Episteme 6 (3): 280–93. https://doi.org/10.3366/e1742360009000719.
Lasonen-Aarnio, M. 2014. “Higher-Order Evidence and the Limits of Defeat.” Philosophy and Phenomenological Research 88 (2): 314–45. https://doi.org/10.1111/phpr.12090.
Moss, S. 2012. “Scoring Rules and Epistemic Compromise.” Mind 120 (480): 1053–69. https://doi.org/10.1093/mind/fzs007.
Rosenkranz, S., and M. Schulz. 2015. “Peer Disagreement: A Call for the Revision of Prior Probabilities.” Dialectica 69 (4): 551–86. https://doi.org/10.1111/1746-8361.12103.
Shogenji, T. 2007. A Conundrum in Bayesian Epistemology of Disagreement. Unpublished manuscript. http://www.fitelson.org/few/few07/shogenji.pdf (accessed August 29, 2021).
Staffel, J. 2015. “Disagreement and Epistemic Utility-Based Compromise.” Journal of Philosophical Logic 44 (3): 273–86. https://doi.org/10.1007/s10992-014-9318-6.
Wagner, C. 1985. “On the Formal Properties of Weighted Averaging as a Method of Aggregation.” Synthese 62 (1): 97–108. https://doi.org/10.1007/bf00485389.
Weatherson, B. 2019. Normative Externalism. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780199696536.001.0001.
Weber, M. A. 2017. “Epistemic Peerhood, Likelihood, and Equal Weight.” Logos & Episteme 8 (3): 307–44. https://doi.org/10.5840/logos-episteme20178325.
Weber, M. A. 2019. Meinungsverschiedenheiten. Frankfurt/Main: Klostermann. https://doi.org/10.5771/9783465143956.
Wilson, A. 2010. “Disagreement, Equal Weight and Commutativity.” Philosophical Studies 149 (3): 321–6. https://doi.org/10.1007/s11098-009-9362-1.
© 2021 Marc Andree Weber, published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.