Open Access. Published by De Gruyter, December 7, 2021. Licensed under CC BY 4.0.

Coherence and Reduction

  • Andrea Giuseppe Ragno

Abstract

Synchronic intertheoretic reductions are an important field of research in science. Arguably, the best model able to represent the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction, a model further developed by Schaffner and nowadays known as the generalized version of the Nagel–Schaffner model (GNS). In their article (2010), Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. For this reason, in the first section, I will be looking at the reduction of thermodynamics to statistical mechanics and use it as an example to describe the GNS. In the second section, I will introduce three coherence measures which will then be employed in the comparison. Finally, in the last two sections, I will compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.

1 Introduction

Synchronic intertheoretic reductions, namely those reductions between pairs of theories whose domains of applicability largely overlap, are an important field of research in science. Arguably, the best model offered in the philosophical literature which is able to track the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction (Nagel 1961, 1974), a model which was further developed by Schaffner (1967, 1969, 1974, 1977, 1993) and is nowadays known as the GNS, an acronym for the generalized version of the Nagel–Schaffner model. In short, according to the GNS, reducing a theory $T_A$ to another theory $T_B$ is possible if and only if the laws of $T_A$ are derivable from $T_B$ with the help of bridge laws. Seven classes of criticisms were put forward against this model of reduction. In their article, Dizadji-Bahmani, Frigg, and Hartmann (henceforth ‘DFH’) respond to these attacks by offering some needed modifications to the GNS (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011). Amongst these clarifications, DFH specify the two main desiderata one might want from the GNS: confirmation and coherence. So, DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. More precisely, evidence confirming one theory also confirms the other theory and vice versa, given that after the reduction of one to the other they become connected and share their evidence. The purpose of this article is, instead, to compare the different degrees of coherence between the reducing and the reduced theory before and after the reduction through the Bayesian analysis first sketched by DFH and then corrected by Tešic.

For this reason, I will prepare the ground by looking at the classic putative example of a reduction à la GNS: the reduction of thermodynamics (henceforth ‘TD’) to statistical mechanics (henceforth ‘SM’) (Section 2.1). Then, I briefly present the Bayesian analysis of the relation between the reducing and the reduced theory given by DFH (Section 2.2) and corrected by Tešic (Section 2.3): this analysis is crucial to probabilistically represent the GNS. Such a Bayesian representation will indeed be used in the comparison of the two different degrees of coherence between the two theories in question: in fact, the coherence measures which I will take into account for comparing the degrees of coherence are probabilistic. In particular, three coherence measures will be discussed (Section 3); these measures might present counter-intuitive results in certain contexts. The issue here is whether or not they will report different outcomes in the case of the GNS, that is, whether one coherence measure might report that the set of theories after the reduction coheres more than the set of theories prior to the reduction, while another reports the opposite. The goal would be to have all the coherence measures report the same result. In the fourth section, I will show that they actually outline similar results under an assumption formulated by Bovens and Hartmann for their coherence measure (Bovens and Hartmann 2003): when one reasons about coherence, the several sources reporting evidence should be understood as equally and partially reliable, and as independent, because what is at stake when analysing coherence is the content of the observations being reported. I will therefore compare the degrees of coherence between the reducing and the reduced theory pre- and post-reduction (Section 4). Under the assumptions of the Bayesian analysis provided by DFH and Tešic, and under the assumption of the coherence measure of Bovens and Hartmann, the coherence measures will provide two conditions which the two theories of the GNS likely meet in light of their conditional dependency, which is in turn due to the reduction of one theory to the other. Finally, in the last section, I present some numerical examples aimed at analysing the relation between coherence measures and confirmation measures in the context of intertheoretic reduction as designed by DFH and Tešic (Section 5).

2 The Generalized Version of the Nagel–Schaffner Model

In this section, I will present the GNS by looking at a putative example of GNS reduction. Then, I will outline the discussion between DFH's and Tešic's Bayesian analyses. The latter does not present knockdown arguments against the former. Rather, it corrects a couple of flaws in DFH's analysis by reviewing the conceptual character of bridge laws.

2.1 An Example of a GNS Reduction

TD is a branch of physics which describes those phenomena observable in macroscopic systems (e.g. solids, liquids, gases, plasma) and their relation to energy, radiation, and properties of matter. The behaviour of these entities can be expressed by the four laws of thermodynamics, which make use of macroscopic properties (e.g. pressure, temperature). Yet, such behaviour can also be explained in terms of their microscopic constituents by SM. In fact, based on statistical methods, probability theory and microscopic physical laws, SM explains the behaviour of macroscopic systems in terms of the dynamical laws governing their microscopic quantities (e.g. molecules, particles). The laws of TD can therefore be expressed in terms of the laws of SM. This hints at our first definition of Nagelian reduction, according to which reducing a theory $T_A$ to another theory $T_B$ is possible if and only if the laws of $T_A$ are derivable from $T_B$ with the help of bridge laws, which are empirical facts linking concepts of the reduced theory to terms of the reducing theory. This, however, is not sufficient to properly describe a successful reduction. Consider an example of reduction between TD and SM, namely the Boyle–Charles law (Dizadji-Bahmani, Frigg, and Hartmann 2010, pp. 395–396),[1] to better understand the role of bridge laws. The Boyle–Charles law states that the temperature T of a gas is directly proportional to the product of the values of its pressure p and the volume V over which it is evenly distributed:

(1) $pV = kT$,

where k is a constant. This law together with some specific conditions (i.e. gas in thermodynamic equilibrium with the surrounding environment and relatively low pressure) forms the core of the thermal theory of the ideal gas. In SM, there is a corresponding theory for ideal gases: the kinetic theory of the ideal gas. This theory describes the motion of n particles with mass m of a gas spread over the volume of, for example, a vessel according to Newtonian mechanics. The theory includes two assumptions:

  1. the gas should be ideal to the extent that its molecules, which collide elastically, are point particles;

  2. the three components of the velocity v (v x , v y , v z ) should be evenly distributed (i.e. there is no favoured direction).

Following the definition of pressure in Newtonian physics and the first assumption, the gas hitting a wall of the vessel exerts a pressure:

(2) $p = \frac{mn}{V}\,\overline{v_z^2}$,

where $\overline{v_z^2}$ is the average of the square of $v_z$, a particle's velocity in the z-direction with respect to the x–y plane of the wall. After a few other calculations[2] and following the second assumption, the left-hand term of the equation in the Boyle–Charles law can be expressed as:

(3) $pV = \frac{2n}{3}\,\overline{E}_{\mathrm{kin}}$,

where $n\overline{E}_{\mathrm{kin}}$ is the average kinetic energy of the gas. $T$ can therefore be seen as:

(4) $T = \frac{2n}{3k}\,\overline{E}_{\mathrm{kin}}$.

This process shows how to derive the Boyle–Charles law from the laws of Newtonian physics. First, a restricted version of the kinetic theory of the ideal gas (here, eq. (3)) was derived by combining Newtonian physics with the theory's two assumptions. Second, eq. (4), which would stand as a bridge law in the GNS, connects the relevant terms, such as $T$ and $\overline{E}_{\mathrm{kin}}$, and yields a version of the Boyle–Charles law bound to some conditions. Finally, it has been shown that this particular version of the Boyle–Charles law, bound by particular conditions, is strongly analogous (or even identical) to the standard version of the Boyle–Charles law. Nowadays, scientists consider the reduction of TD to SM successful.[3] The reduction of TD to SM is considered to be a synchronic intertheoretic reduction, namely a reductive relation between two coexisting theories which deal with different levels of a largely overlapping domain. In this reduction, the concepts of one theory can be expressed in terms of the concepts of the more fundamental theory, and its laws can be derived from the laws of the latter. Accordingly, a correct reduction of TD to SM involves the derivation of the laws of TD from the laws governing the microconstituents of macroscopic systems together with probabilistic assumptions. For this reason, DFH suggest that this reductive relation resembles the GNS, which applies to synchronic intertheoretic reductions.[4]
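As a rough illustration of how eqs. (2)–(4) hang together, the following sketch (my own, not part of DFH's presentation; all numerical values are arbitrary) simulates an ideal gas of point particles with isotropic velocities and checks that the pressure of eq. (2), the relation of eq. (3), and the temperature read off via the bridge law (4) are mutually consistent.

```python
# Minimal numerical check of eqs. (2)-(4); all values are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000          # number of point particles
m = 6.6e-26          # particle mass in kg (roughly an argon atom)
V = 1.0e-3           # vessel volume in m^3
k = n * 1.38e-23     # the constant k of eq. (1), here taken as n * k_B

# Isotropic velocities: no favoured direction (assumption 2).
v = rng.normal(0.0, 400.0, size=(n, 3))   # components in m/s

mean_vz2 = np.mean(v[:, 2] ** 2)
p = m * n / V * mean_vz2                  # eq. (2): p = m n <v_z^2> / V

# Average kinetic energy per particle.
E_kin = np.mean(0.5 * m * np.sum(v ** 2, axis=1))

# eq. (3): p V should equal (2 n / 3) <E_kin>, up to sampling noise.
print(p * V, 2 * n / 3 * E_kin)

# eq. (4), the bridge law: T = (2 n / 3 k) <E_kin>; then k T recovers p V of eq. (1).
T = 2 * n / (3 * k) * E_kin
print(p * V, k * T)
```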

In the reductive relation, TD is the reduced theory $T_P$ and SM is the reducing one $T_F$.[5] According to the GNS, $\mathcal{T}_P$ corresponds to the set of empirical propositions $\{T_P^{(1)}, \ldots, T_P^{(n_P)}\}$ associated with $T_P$. Similarly, $\mathcal{T}_F$ is the set of empirical propositions $\{T_F^{(1)}, \ldots, T_F^{(n_F)}\}$ associated with $T_F$. Here, the empirical propositions of $T_P$ and $T_F$ are the various laws of the theories.[6] As shown in the case of the Boyle–Charles law, the reduction of $T_P$ to $T_F$ would then follow three steps.

  1. Use auxiliary assumptions to help derive a restricted version of each element $T_F^{(i)}$ of $\mathcal{T}_F$.[7] Let $\mathcal{T}_F^* = \{T_F^{*(1)}, \ldots, T_F^{*(n_F)}\}$ be the set of the restricted versions.

  2. Adopt bridge laws in order to connect the relevant terms which are not shared by the vocabularies of the theories involved.[8] Substituting the terms in $\mathcal{T}_F^*$ according to the bridge laws yields the set $\mathcal{T}_P^* = \{T_P^{*(1)}, \ldots, T_P^{*(n_P)}\}$.

  3. Show that each element of $\mathcal{T}_P^*$ is strongly analogous to the corresponding element of $\mathcal{T}_P$.

If these conditions are satisfied, it is believed that T P is reduced to T F with respect to the GNS.

2.2 The Bayesian Analysis of DFH

Amongst the desiderata of reductions in science, coherence and confirmation are the main ones (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011).[9] Nagel (1961, p. 341)[10] himself sensed that reduction should reconcile two self-consistent and well-confirmed theories whose domains of application (largely) overlap whenever the two sketch a contradictory view of the world. The example of the reduction of TD to SM perfectly fits into this picture. In fact, TD (here, $T_P$) and SM (here, $T_F$) should be consistent with each other, and evidence confirming TD should support SM, and vice versa. Obviously, both criteria must be met after the reduction occurs. DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011) use Bayesian networks to represent the relation between reduction and confirmation.[11] Bayesian networks exploit (un)conditional independences and dependencies in order to represent large instances in little space by graphical structures and to perform probabilistic inferences in little time. The type of statistical calculation involved in Bayesian networks is called Bayesian inference, that is, an inference in which Bayes' theorem is one of the main rules used to update the probability of a hypothesis as more evidence and information become available.[12] Bayesian networks are a type of probabilistic graphical model that uses Bayesian inference for probability computations. Confirming a hypothesis H with a piece of evidence E in Bayesian terms means having a conditional probability $P(H|E)$ larger than the prior probability $P(H)$. In other words, a hypothesis is confirmed by E if:

(6) $P(H|E) > P(H)$.

More precisely, a Bayesian network is a directed acyclic graph (DAG)[13] which satisfies the Markov condition,[14] whose nodes represent discrete propositional variables, and whose edges capture their conditional independences and dependencies.[15] To frame the GNS in Bayesian terms, DFH introduce a few simplifications which I will use throughout the article.

  1. To simplify the calculations, DFH assume that $\mathcal{T}_F$ and $\mathcal{T}_P$ have only one element each, namely $T_F$ and $T_P$ respectively. Their corresponding propositional variables will be $T_F$ and $T_P$ respectively.

  2. The propositional variables represented by the nodes of a Bayesian network can take two values, i.e. T F and ¬ T F . While the latter means that the proposition T F is false, the former asserts that it is the case that T F is true.

  3. The probability of every node can lie in the open interval (0, 1). I set $\bar{j} = 1 - j$ for all parameters $j$, unless a parameter is a logical consequence of another variable: in this case, its conditional probability on such a variable is 1.

Furthermore, three different pieces of evidence supporting the theories in the reduction relation are gained from experimental tests. They are defined with the propositional variables E, E F , and E P by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 324). The same simplifications introduced for T F and T P are applied to E , E F , and E P . These three respectively support both theories, only the fundamental theory, and only the reduced theory.

  1. Evidence supporting only TD, e.g. the Joule–Thomson process.[16]

  2. Evidence supporting only SM, e.g. the dependence of a metal’s electrical conductivity on temperature.[17]

  3. Evidence supporting TD and SM simultaneously, e.g. the second law in TD.[18]

According to DFH, the situation before the reduction would then look like the network in Figure 1. Let P 1 be the probability distribution over the variables in such network. The relevant probabilities specifying the network are:

(7) $P_1(T_F) = t_F$, $P_1(T_P) = t_P$, $P_1(E_F|T_F) = p_F$, $P_1(E_F|\neg T_F) = q_F$, $P_1(E_P|T_P) = p_P$, $P_1(E_P|\neg T_P) = q_P$, $P_1(E|T_F, T_P) = \alpha$, $P_1(E|T_F, \neg T_P) = \beta$, $P_1(E|\neg T_F, T_P) = \gamma$, $P_1(E|\neg T_F, \neg T_P) = \delta$.

Figure 1: The Bayesian Network representing the situation before $T_P$ is reduced to $T_F$.

Before the reduction occurs, T F and T P are probabilistically independent because they do not share the same vocabulary and they are not supported by the same evidence. In fact, E F is independent of T P given T F and, vice versa, E P is independent of T F given T P . Formally:

(8) $E_F \perp T_P \mid T_F$, $E_P \perp T_F \mid T_P$.

The independences in (8) hold because, in the aforementioned Bayesian network, the paths $E_F \leftarrow T_F \rightarrow E \leftarrow T_P$ and $E_P \leftarrow T_P \rightarrow E \leftarrow T_F$ are respectively blocked at $T_F$ by $\{T_F\}$ and at $T_P$ by $\{T_P\}$. So, $E_F$ and $T_P$ are d-separated,[19] and so are $E_P$ and $T_F$. Therefore, the prior probability of the conjunction of the root nodes $T_F$ and $T_P$ looks like the following:

(9) $P_1(T_F \wedge T_P) = P_1(T_F)\,P_1(T_P) = t_F\,t_P$.
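The pre-reduction network can be made concrete with a small sketch (mine; the parameter values are arbitrary illustrations) that builds the joint distribution $P_1$ from the assignments in (7) and checks eq. (9) as well as one of the screening-off claims in (8).

```python
# Joint distribution P_1 for the pre-reduction network (Figure 1); illustrative values only.
from itertools import product

t_F, t_P = 0.6, 0.5
p_F, q_F = 0.8, 0.2          # P_1(E_F | T_F), P_1(E_F | ~T_F)
p_P, q_P = 0.7, 0.3          # P_1(E_P | T_P), P_1(E_P | ~T_P)
alpha, beta, gamma, delta = 0.9, 0.4, 0.5, 0.1   # P_1(E | T_F, T_P), ...

def P1(tf, tp, ef, ep, e):
    """Joint probability of one instantiation (True/False per variable)."""
    pr = (t_F if tf else 1 - t_F) * (t_P if tp else 1 - t_P)
    pr *= (p_F if tf else q_F) if ef else (1 - (p_F if tf else q_F))
    pr *= (p_P if tp else q_P) if ep else (1 - (p_P if tp else q_P))
    pe = {(True, True): alpha, (True, False): beta,
          (False, True): gamma, (False, False): delta}[(tf, tp)]
    pr *= pe if e else 1 - pe
    return pr

def prob(event):
    """Probability of an event given as a predicate over (tf, tp, ef, ep, e)."""
    return sum(P1(*w) for w in product([True, False], repeat=5) if event(*w))

# Eq. (9): the root nodes T_F and T_P are independent.
print(prob(lambda tf, tp, ef, ep, e: tf and tp), t_F * t_P)

# Eq. (8): E_F is independent of T_P given T_F, i.e. P_1(E_F | T_P, T_F) = P_1(E_F | T_F).
lhs = prob(lambda tf, tp, ef, ep, e: ef and tp and tf) / prob(lambda tf, tp, ef, ep, e: tp and tf)
rhs = prob(lambda tf, tp, ef, ep, e: ef and tf) / prob(lambda tf, tp, ef, ep, e: tf)
print(lhs, rhs)
```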

Already before the reduction, one notices that in the Bayesian network there is a connection between $T_P$ and $T_F$, namely the evidence E. Such a link has led scientists to investigate the intimate relation between those two theories. DFH then present the network for the situation after the reduction (see Figure 2). To reduce one theory to the other, DFH complete three steps: derive $T_F^*$ from $T_F$ together with some auxiliary assumptions; introduce bridge laws which, together with $T_F^*$, yield $T_P^*$; show that $T_P^*$ is strongly analogous to $T_P$. They make important remarks which help defining the values of the conditional probabilities of the three nodes which follow from the only remaining root node $T_F$. The derivation of $T_F^*$ from $T_F$ and the interpretation of the strong analogy between $T_P^*$ and $T_P$ depend on the judgment of the scientists and on the specific context in which the reduction occurs (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 328). Regarding bridge laws, which are not factual claims in a rigorous sense, $T_P^*$ is a logical consequence of $T_F^*$, according to DFH. Let then $P_2$ be the probability distribution over the propositions one has after reducing $T_P$ to $T_F$. The same simplifications introduced for $T_F$ and $T_P$ are applied to $T_F^*$ and $T_P^*$. The relevant probabilities specifying the second network in Figure 2 are:

(10) $P_2(T_F^*|T_F) = p_F^*$, $P_2(T_F^*|\neg T_F) = q_F^*$, $P_2(T_P^*|T_F^*) = 1$, $P_2(T_P^*|\neg T_F^*) = 0$, $P_2(T_P|T_P^*) = p_P^*$, $P_2(T_P|\neg T_P^*) = q_P^*$.

Figure 2: The Bayesian Network representing the situation after the reduction of $T_P$ to $T_F$, according to DFH.

Following such a network, DFH show that, after the reduction, evidence confirming one theory confirms the other, and vice versa. In fact:

Theorem 2.1

$E_F$ confirms $T_P$ iff i) $(p_F - q_F) > 0$ and ii) $(p_F^* - q_F^*)(p_P^* - q_P^*) > 0$.

Theorem 2.2

$E_P$ confirms $T_F$ iff i) $(p_P - q_P) > 0$ and ii) $(p_F^* - q_F^*)(p_P^* - q_P^*) > 0$.

The two theorems maintain that, in order to have a confirmation flow from $E_F$ to $T_P$ and from $E_P$ to $T_F$: i) $E_F$ should confirm $T_F$ and $E_P$ should confirm $T_P$; ii) $T_F$ should confirm $T_F^*$ and $T_P^*$ should confirm $T_P$. The two conditions are satisfied because:

  1. one of the original assumptions of the network is the fact that the two pieces of evidence support their own respective theories;

  2. $T_F$ likely confirms $T_F^*$ and $T_P^*$ likely confirms $T_P$, because $T_F^*$ was derived from $T_F$, and $T_P^*$ is strongly analogous to $T_P$. Again, such a construction of the GNS model is justified by the example in Section 2.1.

Once one constructs a Bayesian network like Figure 2 and assumes the presence of a confirmatory flow from $T_F$ to $T_P$ via $T_F^*$ and $T_P^*$, one can prove that, after reducing $T_P$ to $T_F$, $E_F$ confirms $T_P$, and $E_P$ confirms $T_F$.
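As a heuristic check of this confirmatory flow, the sketch below (my own; the parameter values are arbitrary but satisfy $p_F > q_F$, $p_F^* > q_F^*$, and $p_P^* > q_P^*$) encodes the chain of Figure 2 with the assignments in (10), attaches $E_F$ to $T_F$, and verifies that $P_2(T_P|E_F) > P_2(T_P)$, as Theorem 2.1 states.

```python
# Post-reduction network of Figure 2 (DFH); parameter values are illustrative only.
from itertools import product

t_F = 0.6
p_F, q_F = 0.8, 0.2            # P_2(E_F | T_F), P_2(E_F | ~T_F)
pFs, qFs = 0.9, 0.3            # P_2(T_F* | T_F), P_2(T_F* | ~T_F)
pPs, qPs = 0.85, 0.25          # P_2(T_P | T_P*), P_2(T_P | ~T_P*)

def P2(tf, tfs, tps, tp, ef):
    pr = t_F if tf else 1 - t_F
    pr *= (pFs if tf else qFs) if tfs else (1 - (pFs if tf else qFs))
    # Bridge law according to DFH: T_P* is a logical consequence of T_F*.
    pr *= (1.0 if tfs else 0.0) if tps else (0.0 if tfs else 1.0)
    pr *= (pPs if tps else qPs) if tp else (1 - (pPs if tps else qPs))
    pr *= (p_F if tf else q_F) if ef else (1 - (p_F if tf else q_F))
    return pr

def prob(event):
    return sum(P2(*w) for w in product([True, False], repeat=5) if event(*w))

prior = prob(lambda tf, tfs, tps, tp, ef: tp)
posterior = prob(lambda tf, tfs, tps, tp, ef: tp and ef) / prob(lambda tf, tfs, tps, tp, ef: ef)
print(posterior > prior)   # True: E_F confirms T_P, as in Theorem 2.1
```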

2.3 The Bayesian Analysis of Tešic

The Bayesian analysis offered by DFH helps representing the reductive relation between $T_P$ and $T_F$. However, Tešic (2019) points out two difficulties faced by their analysis. I omit the details of the less relevant difficulty,[20] because it does not undercut the main project laid out by DFH.[21] I focus, instead, on the main critique Tešic presents, which regards the value of the probabilities $P_2(T_P^*|T_F^*)$ and $P_2(T_P^*|\neg T_F^*)$. As shown in (10), while $P_2(T_P^*|T_F^*) = 1$, $P_2(T_P^*|\neg T_F^*) = 0$. Tešic claims that treating the propositional variables $T_P^*$ and $T_F^*$ as interchangeable with each other (Dizadji-Bahmani, Frigg, and Hartmann 2011, pp. 329–330) without explicitly stating that bridge laws are assumed is misleading. In fact, DFH's representation of the bridge law in their Bayesian network suffers from three problems (Tešic 2019, pp. 1108–1111):

  1. Recall eq. (4), that is, the bridge law in the example of the Boyle–Charles law. According to the Bayesian network in Figure 2, one should then have not only $P_2(T_P^*|T_F^*) = 1$ but also the entailment $T_F^* \models T_P^*$, that is, $pV = \frac{2n}{3}\overline{E}_{\mathrm{kin}} \models pV = kT$. Tešic notices that this entailment is incorrect as it stands and is only possible by supposing the bridge law B:

(11) $pV = \frac{2n}{3}\overline{E}_{\mathrm{kin}} \models_{B} pV = kT$.

The problem arising in Figure 2 is that it does not incorporate B in the background, according to Tešic. B therefore needs to be included in the probability function P 2 for the entailment to hold.

  2. The fact that from eq. (4) $P_2(T_P^*|T_F^*) = P_2(T_F^*|T_P^*) = 1$ and $P_2(T_P^*|\neg T_F^*) = P_2(T_F^*|\neg T_P^*) = 0$ follow (Tešic 2019, p. 1118) makes the reduction symmetric. The Bayesian network in Figure 2 seems to imply that the Boyle–Charles law is reduced to the kinetic theory of gases, and vice versa. This clearly goes against the main idea behind every kind of scientific reduction: reduction is anti-symmetric. Furthermore, the interchangeability between $T_P^*$ and $T_F^*$ would prevent partial reductions, which are still important in science, according to DFH (Dizadji-Bahmani, Frigg, and Hartmann 2010, p. 399). In fact, scientists are not always able to connect every term of $T_P$ to $T_F$ and deduce every law of $T_P$ from $T_F$ plus bridge laws.

  3. From eq. (4) it also follows that the marginal probabilities $P_2(T_P^*)$ and $P_2(T_F^*)$ are equal (Tešic 2019, p. 1118). It is hard to conceive of them as equal given that the equation $pV = \frac{2n}{3}\overline{E}_{\mathrm{kin}}$ is deduced from the kinetic theory of gases together with some auxiliary assumptions, and that the equation $pV = kT$ is then deduced from the equation $pV = \frac{2n}{3}\overline{E}_{\mathrm{kin}}$ and the bridge laws. This suggests that the values of $P_2(T_P^*)$ and $P_2(T_F^*)$ should be left open.

Because of these three problems, Tešic presents an alternative Bayesian network to Figure 2. The main idea behind his network is to explicitly include the propositional variable B representing the bridge law as a root node. Let $P_3$ be a probability distribution over the variables in Figure 3. The same simplification applied to $\mathcal{T}_F$ and $\mathcal{T}_P$ applies to $\mathcal{B}$. Thus, assume that the only element of $\mathcal{B}$ is B, and that the two values assignable to the propositional variable B are B and $\neg B$. Then:

(12) P 3 ( B ) = b .

Figure 3: The Bayesian Network representing the situation after reducing $T_P$ to $T_F$, according to Tešic.

Two reasons motivate this explicit specification of the bridge laws (Tešic 2019, p. 1121): a) different scientists (often) have different credences about a particular bridge law (e.g. scientists in fact have different degrees of belief in eq. (4)); b) the flow of confirmation depends on the value assigned to the probability of the bridge law. Thus, Tešic assigns the following values to the probabilities $P_3(T_P^*|T_F^*, B)$, $P_3(T_P^*|T_F^*, \neg B)$, $P_3(T_P^*|\neg T_F^*, B)$, and $P_3(T_P^*|\neg T_F^*, \neg B)$:

(13) $P_3(T_P^*|T_F^*, B) = 1$, $P_3(T_P^*|T_F^*, \neg B) = P_3(T_P^*|\neg T_F^*, B) = P_3(T_P^*|\neg T_F^*, \neg B) = a$,

where $a \in (0, 1)$. Accordingly, the new probability assignments do not face the three problems noticed by Tešic. In fact, the first problem is avoided because now one has $P_3(T_P^*|T_F^*, B) = 1$ and, thus, $T_F^*, B \models T_P^*$, instead of simply having $T_F^* \models T_P^*$. The second problem is evaded by showing that the values in (13) entail $0 < P_3(T_F^*|T_P^*, B) < 1$: this means that the reduction represented by Tešic's network is not symmetric. Finally, the third problem is successfully addressed because it is proved that the prior probabilities $P_3(T_P^*)$ and $P_3(T_F^*)$ can be either different or equal: in fact, this depends on the particular values one assigns to the relevant probabilities. According to this analysis, Theorems 2.1 and 2.2, which have already appeared in DFH's network, follow.
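The following sketch (mine, with arbitrary parameter values) encodes the network of Figure 3 with the assignments in (12) and (13) and illustrates the points just made: $P_3(T_P^*|T_F^*, B) = 1$ while $P_3(T_F^*|T_P^*, B)$ lies strictly between 0 and 1, so the reduction is not symmetric, and the marginals $P_3(T_P^*)$ and $P_3(T_F^*)$ need not coincide.

```python
# Tešic's post-reduction network (Figure 3); parameter values are illustrative only.
from itertools import product

t_F, b, a = 0.6, 0.7, 0.4
pFs, qFs = 0.9, 0.3            # P_3(T_F* | T_F), P_3(T_F* | ~T_F)
pPs, qPs = 0.85, 0.25          # P_3(T_P | T_P*), P_3(T_P | ~T_P*)

def P3(tf, bb, tfs, tps, tp):
    pr = (t_F if tf else 1 - t_F) * (b if bb else 1 - b)
    pr *= (pFs if tf else qFs) if tfs else (1 - (pFs if tf else qFs))
    # Eq. (13): P_3(T_P* | T_F*, B) = 1, and a otherwise.
    ptps = 1.0 if (tfs and bb) else a
    pr *= ptps if tps else 1 - ptps
    pr *= (pPs if tps else qPs) if tp else (1 - (pPs if tps else qPs))
    return pr

def prob(event):
    return sum(P3(*w) for w in product([True, False], repeat=5) if event(*w))

# Asymmetry: P_3(T_P* | T_F*, B) = 1, but 0 < P_3(T_F* | T_P*, B) < 1.
print(prob(lambda tf, bb, tfs, tps, tp: tps and tfs and bb) /
      prob(lambda tf, bb, tfs, tps, tp: tfs and bb))
print(prob(lambda tf, bb, tfs, tps, tp: tfs and tps and bb) /
      prob(lambda tf, bb, tfs, tps, tp: tps and bb))

# The marginals P_3(T_P*) and P_3(T_F*) need not coincide.
print(prob(lambda tf, bb, tfs, tps, tp: tps), prob(lambda tf, bb, tfs, tps, tp: tfs))
```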

For all the reasons mentioned in this section, I will compare the coherence pre- and post-reduction between the reducing theory and the reduced one by following the probability assignments specified in the Bayesian network in Figure 3.

3 Coherence Measures

Coherence measures are probabilistic measures of the degree of coherence of information sets. They are real-valued functions, and the value they assign to each set of propositions represents the degree of coherence of such a set. Coherence might not have a fixed meaning, and quantifying coherence suffers from this presumed vagueness behind its notion. Because of this, there is not a single coherence measure: different coherence measures try to grasp the several conceptions one might have of coherence. Regarding the GNS, it might seem obvious that the reduction of $T_P$ to $T_F$ establishes some sort of coherence between the two theories (Sarkar 2015, p. 47) because of the way $T_P^*$ is logically derived from $T_F^*$ and the bridge law. In particular, from the perspective of a confirmation-laden coherence measure such as the Shogenji–Schupbach measure, the condition which may seem to confirm an improved agreement between $T_P$ and $T_F$ after the reduction of one to the other is the positive confirmation flow that goes from $T_F$ to $T_P$ via the bridge laws and the auxiliary assumptions. The measure, in fact, treats the coherence of an information set as the mutual support of the propositions in it, which is the view that coherence corresponds to the probabilistic dependence between propositions in a set. As opposed to this view, Olsson's measure understands coherence as the relative overlap amongst those propositions: the higher the probability of their conjunction relative to that of their disjunction, the higher the degree of coherence and the higher the agreement of the propositions. These two coherence measures are the main ones in the literature, and each of them corresponds to a different property, i.e. dependence and agreement respectively. Dependence and agreement cannot, however, be fulfilled at the same time.[22]

In Nagel's words, reduction makes sure that two theories with largely overlapping domains are mutually consistent when they describe the same event. The properties mentioned above therefore appear desirable in this context. What one would need from these coherence measures is stability, to the extent that they all give similar results. Furthermore, while one would expect no coherence at all (or a very low degree of coherence) prior to the reduction, after the reduction $T_F$ should cohere with $T_P$.[23] In fact, obtaining the same result through different means counts as a valid way to further support Nagel's view on coherence in scientific reductions. Here, it is important to remark that I am interested in the notion of relative coherence rather than an absolute one; what is relevant is to check that the set containing the theories prior to the reduction is less coherent than the set of theories after the reduction occurs. The three coherence measures (Bovens and Hartmann 2003; Olsson 1999; Schupbach 2011) which I will now present might yield different results in certain contexts:[24] hopefully, in the case of the Bayesian network representing the GNS, they will not.

3.1 Schupbach’s Measure

To introduce Schupbach’s measure, consider a finite and non-empty information set S , that is a set of ordered pairs:

(14) $\{\langle R_1, A_1\rangle, \ldots, \langle R_n, A_n\rangle\}$,

where $R_i$ is a source reporting that $A_i$, the content of the report $R_i$, is true. The content of the information set $\mathcal{S}$ is the ordered set of report contents $\langle A_1, \ldots, A_n\rangle$. Let P be the probability distribution over the propositions $A_1, \ldots, A_n$, where $P(A_i)$ gives the degree of confidence of a rational agent in $A_i$. A coherence measure is a function C which maps every finite and non-empty set $\mathcal{S}$ of propositions with positive probability to a single real-valued outcome $C(\mathcal{S})$. In other words, the degree of coherence $C(\mathcal{S})$ consists precisely in the degree of coherence of its content $\langle A_1, \ldots, A_n\rangle$ (Bovens and Hartmann 2003). Then, let $\mathbf{S}$ be the set of all finite and non-empty information sets $\mathcal{S}$ of propositions with positive probability. Shogenji (1999) proposes to define the coherence of a set $\mathcal{S} \in \mathbf{S}$ as:

(15) $\mathfrak{J}(\mathcal{S}) \coloneqq \dfrac{P(A_1 \wedge \cdots \wedge A_n)}{\prod_{i=1}^{n} P(A_i)}$.

Shogenji conceives of the coherence of $\mathcal{S}$ as mutual support between the propositions in $\mathcal{S}$. The measure is sensitive to the number n of sources in cases of logically equivalent propositions. Given that, when all sources report the same proposition, the joint probability of the propositions of $\mathcal{S}$ equals the probability of that single proposition, as n, the number of agreeing reports, tends to infinity, so does the degree of coherence: indeed, "the more coherent beliefs are, the more likely they are together" (Shogenji 1999, p. 338). Instead, if the propositions of a set are independent, the variables representing them will be probabilistically independent and the coherence measure will equal 1: in this case, the set $\mathcal{S}$ is neither coherent nor incoherent. Finally, if the joint probability of the propositions is lower than the product of their probabilities, the set will be incoherent. A series of criticisms[25] has been offered against the way this measure tries to capture the meaning of coherence (Schupbach 2011, pp. 3–5). For this reason, Schupbach suggests generalising Shogenji's measure by calculating the degree of coherence as a weighted average of the degrees of coherence of all subsets of $\mathcal{S}$, the building block of which is the log-normalised version of Shogenji's measure:

(16) $\mathfrak{S}(\mathcal{S}) \coloneqq \log\left[\dfrac{P(A_1 \wedge \cdots \wedge A_n)}{\prod_{i=1}^{n} P(A_i)}\right]$.

Schupbach then proposes several definitions which are needed to avoid the problems arising with Shogenji's original measure and, therefore, to measure the degree of coherence in a more precise manner. As I will show in the next section, I am only interested in information sets with cardinality two. If sets have cardinality two, the subset-sensitive generalisation of Shogenji's measure offered by Schupbach can be simply represented by the log-normalised version of Shogenji's measure (Schupbach 2011, p. 9), namely eq. (16).[26] In fact, sets with cardinality two avoid all the problems faced by sets with cardinality strictly larger than two. So, if n = 2 and $\mathcal{S} = \{A_1, A_2\}$, then:

(17) $\mathfrak{S}(\mathcal{S}) = \log\left[\dfrac{P(A_1 \wedge A_2)}{P(A_1) \times P(A_2)}\right]$.

If one measures the degree of coherence of a set containing probabilistically independent propositions as members, 0 will be the outcome, since the joint probability of probabilistically independent variables is their product and the logarithm of the ratio of two equal values is 0. The outcome of $\mathfrak{S}(\mathcal{S})$ will tend to negative infinity if the set $\mathcal{S}$ is not coherent. Vice versa, it will tend towards positive infinity if $\mathcal{S}$ is coherent.
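A minimal sketch of eq. (17) for a two-member set (my own illustration; the probability values are arbitrary):

```python
# Log-normalised Shogenji measure for a pair of propositions, eq. (17); illustrative only.
import math

def schupbach_pair(p_a1, p_a2, p_joint):
    """S({A1, A2}) = log[ P(A1 & A2) / (P(A1) * P(A2)) ]."""
    return math.log(p_joint / (p_a1 * p_a2))

print(schupbach_pair(0.5, 0.5, 0.25))   # independent propositions: coherence 0
print(schupbach_pair(0.5, 0.5, 0.40))   # positively dependent: coherence > 0
print(schupbach_pair(0.5, 0.5, 0.10))   # negatively dependent: coherence < 0
```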

3.2 Olsson’s Measure

Olsson (1999) understands coherence as a total agreement between the propositions in an information set S S . In the case of a set S = { A 1 , A 2 } , Olsson then proposes the following coherence measure:

(18) $\mathfrak{O}(\mathcal{S}) = \dfrac{P(A_1 \wedge A_2)}{P(A_1 \vee A_2)}$,

which would look like the following for a set $\mathcal{S} = \{A_1, \ldots, A_n\}$:[27]

(19) $\mathfrak{O}(\mathcal{S}) \coloneqq \dfrac{P(A_1 \wedge \cdots \wedge A_n)}{P(A_1 \vee \cdots \vee A_n)}$

This measure ranges over the closed interval [0, 1]. $\mathfrak{O}$ indeed assigns 0 to cases of minimal agreement, that is, when the propositions involved are logically inconsistent. Regarding eq. (18), $A_1$ and $A_2$ would therefore not overlap in cases of minimal agreement. In cases of maximal agreement between the propositions reported, the measure will instead give 1 as outcome: this means that the propositions in the set are logically equivalent. This measure also presents counter-intuitive results (Dietrich and Moretti 2005, p. 407), such as the possibility of reporting the degree of coherence of two positively dependent propositions as lower than that of two negatively dependent propositions. This remark about $\mathfrak{O}$ is not relevant for the aim of this article: in the next section, I will compare the degrees of coherence before and after the reduction by exploiting (and assuming), respectively, the independence and the positive dependence between the two theories.
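A corresponding sketch of eq. (18) for a pair of propositions (again my own illustration, with arbitrary values):

```python
# Olsson's relative-overlap measure for a pair of propositions, eq. (18); illustrative only.
def olsson_pair(p_a1, p_a2, p_joint):
    """O({A1, A2}) = P(A1 & A2) / P(A1 v A2)."""
    return p_joint / (p_a1 + p_a2 - p_joint)

print(olsson_pair(0.5, 0.5, 0.50))   # logically equivalent propositions: 1.0
print(olsson_pair(0.5, 0.5, 0.25))   # independent propositions
print(olsson_pair(0.5, 0.5, 0.0))    # inconsistent propositions: 0.0
```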

3.3 Bovens and Hartmann’s Measure

Bovens and Hartmann (2003)[28] change the course of previous coherence measures. Instead of trying to make precise the intuitions one can have about the notion of coherence (i.e. mutual support, total agreement, relative overlap), they focus on the role that coherence, as a property of information sets, plays: boosting our confidence that the content of an information set is true, ceteris paribus, once the information is received from independent and partially reliable sources. The model Bovens and Hartmann construct aims to measure the degree of confidence in the joint truth of an information set. Such a degree is determined by the combination of conditions such as the prior expectation of the results, the reliability of the tests, and the coherence of the information. Each of these conditions has a specific measure. Respectively, one has i) an expectance measure, which is about the degree of prior expectance of the joint truth of an information set, ii) a reliability measure, which computes the degree of reliability of the sources, and iii) the coherence measure, which is needed to assess the degree of coherence of the set.

Consider again n partially reliable sources i which report the proposition $A_i$, for $i = 1, \ldots, n$, so that the information set is $\{A_1, \ldots, A_n\}$. The propositional variable $A_i$ is defined for the proposition $A_i$ and it can take on two values: $A_i$ and $\neg A_i$. Similarly, the propositional variable $R_i$ can also take on two values: $R_i$ and $\neg R_i$. If the report mentions that $A_i$ is the case after consulting the proper source, then $R_i$; otherwise, $\neg R_i$. Then, let P be a probability distribution over the variables $A_1, \ldots, A_n, R_1, \ldots, R_n$ which satisfies the constraints of having independent and partially reliable sources. One needs the sources to be:

  1. independent, or else sources would present information either by looking at other reports to bring additional information or by conveying what they think a coherent report is. For coherence to play a role in boosting confidence, the sources should rather gather information through, and only through, their own observations, which they will report without biasedly inferring what they think a coherent report would look like;

  2. partially reliable, or else sources would either be truth-tellers or randomisers. The latter would report the information needed in an entirely random manner, and assessing the degree of coherence of information reported without any degree of reliability is useless. The former would report only true information, making the property of coherence redundant, because it would no longer be important whether the report is coherent or not, as it is already true given that its information comes from a fully reliable source.

These two points can be formally translated.

  1. Having independent sources means that $R_i$ should only report that $A_i$ is the case given that she has (likely) observed $A_i$, without her observations being affected by additional facts. Probabilistically speaking, $A_i$ screens off $R_i$ from all other variables $A_j$ and $R_j$. Thus, there is a conditional independence between $R_i$ and $A_1, R_1, \ldots, A_{i-1}, R_{i-1}, A_{i+1}, R_{i+1}, \ldots, A_n, R_n$, given $A_i$, for $i = 1, \ldots, n$:

(20) $R_i \perp A_1, R_1, \ldots, A_{i-1}, R_{i-1}, A_{i+1}, R_{i+1}, \ldots, A_n, R_n \mid A_i$.

  2. Partial reliability can be specified with two parameters, namely the true positive rate $P(R_i|A_i) = p$ and the false positive rate $P(R_i|\neg A_i) = q$. Bovens and Hartmann then assume that all sources are equally reliable, that is, all sources have the same p and the same q. The assumption is introduced because knowing how much one trusts a source is not relevant to assessing the degree of coherence of an information set. For the reasons stated above, all sources in this model are deemed epistemically imperfect, which means that they are more reliable than randomisers, but less reliable than truth-tellers. Thus, the following constraint is imposed on P:

(21) p > q > 0 .

Bovens and Hartmann then define the degree of confidence in the information set as equal to the posterior joint probability of the propositions in the set after all reports have been collected:

(22) $P^*(A_1 \wedge \cdots \wedge A_n) = P(A_1 \wedge \cdots \wedge A_n \mid R_1 \wedge \cdots \wedge R_n)$.

Then, they apply Bayes' rule to eq. (22) and simplify it with respect to the independence constraint (eq. (20)):

(23) $P^*(A_1 \wedge \cdots \wedge A_n) = \dfrac{P(R_1|A_1) \times \cdots \times P(R_n|A_n) \times P(A_1 \wedge \cdots \wedge A_n)}{\sum_{A_1, \ldots, A_n} P(R_1|A_1) \times \cdots \times P(R_n|A_n) \times P(A_1 \wedge \cdots \wedge A_n)}$.

The numerator can be written as:

(24) $P(R_1|A_1) \times \cdots \times P(R_n|A_n) \times P(A_1 \wedge \cdots \wedge A_n) = p^n \xi_0$,

where $p^n$ is the true positive rate $P(R_i|A_i) = p$ raised to the nth power, and $\xi_0$ is the prior probability of the instantiation in which the variables $A_1, \ldots, A_n$ take n positive values and 0 negative values. The denominator looks like:

(25) $\sum_{A_1, \ldots, A_n} P(R_1|A_1) \times \cdots \times P(R_n|A_n) \times P(A_1 \wedge \cdots \wedge A_n) = p^n \xi_0 + q p^{n-1} \xi_1 + \cdots + q^n \xi_n$.

In the denominator, Bovens and Hartmann gather all terms in which the variables $A_1, \ldots, A_n$ take, first, n positive values and 0 negative values, then $n - 1$ positive values and 1 negative value, and so on, until the term in which those variables take 0 positive values and n negative values is reached. This means that, for instance, $\xi_1$ is the prior probability that exactly one proposition is false. Finally, if both numerator and denominator are divided by $p^n$, the posterior probability $P^*(A_1 \wedge \cdots \wedge A_n)$ would be:

(26) $P^*(A_1 \wedge \cdots \wedge A_n) = \dfrac{\xi_0}{\sum_{i=0}^{n} \xi_i x^i}$,

where $x = q/p$ is the likelihood ratio and $\sum_{i=0}^{n} \xi_i = 1$. $\xi_i$ is therefore the sum of the joint probabilities of the instantiations in which $n - i$ of the variables $A_1, \ldots, A_n$ take positive values and i take negative values.

According to the information collected so far, the three measures of Bovens and Hartmann can be finally presented. The expectance measure is defined by the prior joint probability of the propositions in the information set, i.e. the probability before any report was received:

(27) $\xi_0 = P(A_1 \wedge \cdots \wedge A_n)$.

The more $\xi_0$ increases, the more the degree of confidence in the set increases. The degree of confidence in the information set, i.e. eq. (22), is also a monotonically increasing function of r, the reliability measure:

(28) $r \coloneqq 1 - x$,

where x is the likelihood ratio. This measure ranges over the open interval (0, 1), because the sources are neither fully reliable ($r \neq 1$) nor entirely unreliable ($r \neq 0$). The last relevant measure is the coherence measure. In order to evaluate the coherence of an information set, they measure the proportion of the confidence boost b, defined by the ratio $P^*(A_1 \wedge \cdots \wedge A_n)/P(A_1 \wedge \cdots \wedge A_n)$, relative to the confidence boost $b_{\max}$: the confidence boost which would have been received if the same information had been received in the form of maximally coherent information. A maximally coherent information set would contain only logically equivalent propositions, and it has a specific distribution of ξ:

(29) $\langle \xi_0, 0, \ldots, 0, 1 - \xi_0 \rangle$.

After calculating the posterior joint probability of a maximally coherent information set and its confidence boost, Bovens and Hartmann compute the coherence measure of an information set $\mathcal{S} = \{A_1, \ldots, A_n\}$. This measure is functionally dependent on the expectance and the reliability measures (cf. Bovens and Hartmann 2003, p. 612):

(30) $\mathfrak{B}(\mathcal{S}) = \dfrac{b(\mathcal{S})}{b_{\max}(\mathcal{S})} = \dfrac{\xi_0 + (1 - \xi_0)(1 - r)^n}{\sum_{i=0}^{n} \xi_i (1 - r)^i}$.
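To fix ideas, the sketch below (mine; the joint distribution is an arbitrary illustration) computes the weight vector $\langle \xi_0, \ldots, \xi_n \rangle$ from a joint distribution over n propositions, the posterior of eq. (26), and the coherence measure of eq. (30) for a given reliability r.

```python
# Bovens and Hartmann's measures, eqs. (26) and (30); illustrative values only.
def weights(joint):
    """xi_i = prior probability that exactly i of the propositions are false.
    `joint` maps each tuple of truth values to its probability."""
    n = len(next(iter(joint)))
    xi = [0.0] * (n + 1)
    for world, pr in joint.items():
        xi[world.count(False)] += pr
    return xi

def posterior(xi, r):
    """Eq. (26): P*(A1 & ... & An) = xi_0 / sum_i xi_i x^i, with x = q/p = 1 - r."""
    x = 1 - r
    return xi[0] / sum(xi[i] * x ** i for i in range(len(xi)))

def bh_coherence(xi, r):
    """Eq. (30): B(S) = [xi_0 + (1 - xi_0)(1 - r)^n] / sum_i xi_i (1 - r)^i."""
    n = len(xi) - 1
    num = xi[0] + (1 - xi[0]) * (1 - r) ** n
    den = sum(xi[i] * (1 - r) ** i for i in range(n + 1))
    return num / den

# An arbitrary joint distribution over two propositions (A1, A2).
joint = {(True, True): 0.40, (True, False): 0.15,
         (False, True): 0.15, (False, False): 0.30}
xi = weights(joint)
print(xi)                        # <xi_0, xi_1, xi_2>
print(posterior(xi, r=0.5))      # degree of confidence at reliability r = 0.5
print(bh_coherence(xi, r=0.5))   # degree of coherence at reliability r = 0.5
```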

According to Meijs (2005), the maximality requirement is what makes Bovens and Hartmann's measure produce counter-intuitive results, because $\mathfrak{B}$ may, in some cases, rank the degree of coherence of a set containing independent propositions as higher than the degree of coherence of a set whose members are positively dependent. This feature of $\mathfrak{B}$ could threaten the aim of the article: comparing the degree of coherence pre- and post-reduction between the two theories. Therefore, it will be necessary to check whether or not the results which come from the comparisons of the degree of coherence pre- and post-reduction, and which are obtained through different coherence measures, are similar. Should the results contrast with each other, further remarks regarding the choice of the proper coherence measure would have to be made. It would, in fact, become a contextual question which coherence measure one should prioritise with respect to the epistemic conception of coherence (e.g. mutual support, boosting confidence) it represents.

4 Comparing the Degree of Coherence

In this section, I will compare the degree of coherence of the two theories before and after the reduction. To calculate the degree of coherence with the measures mentioned above, consider the information set $\mathcal{S} = \{T_F, T_P\}$ containing the two theories before the reduction, and the information set $\mathcal{S}' = \{T_F, T_P\}$ containing the two theories after the reduction. Then, let $\succeq$ be a quasi-ordering relation over the set $\mathbf{S} = \{\mathcal{S}, \mathcal{S}'\}$, which denotes the binary relation at least as coherent as, such that if $\mathcal{S}' \succeq \mathcal{S}$, then $\mathcal{S}'$ will be at least as coherent as $\mathcal{S}$. To compare $\mathcal{S}$ and $\mathcal{S}'$, it is important to use the assumption Bovens and Hartmann make for the sources: they need to be partially and equally reliable, and independent. So, I am implying that, for example, a scientist has the same credence for $P_1(E_F|T_F)$, $P_3(E_F|T_F)$, $P_1(E_P|T_P)$, $P_3(E_P|T_P)$, and so on.[29] Having such an assumption regarding the evidence is important, as neither Schupbach nor Olsson consider the role coherence plays for the evidence supporting theories.

4.1 Schupbach’s Measure

Recall Schupbach's coherence measure. Before the reduction, $T_F$ and $T_P$ are probabilistically independent. Hence, according to Schupbach, the value of coherence will be 0:

(31) $\mathfrak{S}(\mathcal{S}) = \log\left[\dfrac{P_1(T_F \wedge T_P)}{P_1(T_F) \times P_1(T_P)}\right] = \log\left[\dfrac{P_1(T_F) \times P_1(T_P)}{P_1(T_F) \times P_1(T_P)}\right] = 0$.

The situation changes after $T_P$ is reduced to $T_F$, as the two become probabilistically dependent:

(32) $\mathfrak{S}(\mathcal{S}') = \log\left[\dfrac{P_3(T_F \wedge T_P)}{P_3(T_F) \times P_3(T_P)}\right]$.

Therefore, according to Schupbach, the two theories will cohere with each other after the reduction if and only if what is inside the logarithm is strictly higher than 1, that is, if and only if the joint probability of $T_F$ and $T_P$ is higher than the product of their prior probabilities. Thus, one can have the following theorem (see Appendix A):

Theorem 4.1

$\mathfrak{S}(\mathcal{S}') = \mathfrak{S}(\mathcal{S})$ iff $p_F^* = q_F^*$ or $p_P^* = q_P^*$. $\mathfrak{S}(\mathcal{S}') > \mathfrak{S}(\mathcal{S})$ iff $p_F^* > q_F^*$ and $p_P^* > q_P^*$.

The first part of the theorem means that if $T_P$ and $T_P^*$ are independent or $T_F^*$ and $T_F$ are independent, then $T_P$ and $T_F$ remain independent after the reduction and their coherence does not improve. The second part of the theorem, instead, means that coherence between $T_P$ and $T_F$ is gained after the reduction if and only if: i) the conditional probability of $T_P$ on $T_P^*$ is higher than the conditional probability of $T_P$ on $\neg T_P^*$; and ii) the conditional probability of $T_F^*$ on $T_F$ is higher than the conditional probability of $T_F^*$ on $\neg T_F$.
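As a numerical illustration of Theorem 4.1 (my own sketch; the values are arbitrary but satisfy $p_F^* > q_F^*$ and $p_P^* > q_P^*$), one can compute $\mathfrak{S}(\mathcal{S})$ and $\mathfrak{S}(\mathcal{S}')$ from Tešic's network, fixing $P_1(T_P)$ to the post-reduction marginal $P_3(T_P)$: the pre-reduction value is 0 while the post-reduction value is positive.

```python
# Schupbach's measure before and after the reduction (Theorem 4.1); illustrative values only.
from itertools import product
import math

t_F, b, a = 0.6, 0.7, 0.4
pFs, qFs = 0.9, 0.3            # P_3(T_F* | T_F), P_3(T_F* | ~T_F)
pPs, qPs = 0.85, 0.25          # P_3(T_P | T_P*), P_3(T_P | ~T_P*)

def P3(tf, bb, tfs, tps, tp):
    pr = (t_F if tf else 1 - t_F) * (b if bb else 1 - b)
    pr *= (pFs if tf else qFs) if tfs else (1 - (pFs if tf else qFs))
    ptps = 1.0 if (tfs and bb) else a
    pr *= ptps if tps else 1 - ptps
    pr *= (pPs if tps else qPs) if tp else (1 - (pPs if tps else qPs))
    return pr

def prob(event):
    return sum(P3(*w) for w in product([True, False], repeat=5) if event(*w))

pTF = prob(lambda tf, bb, tfs, tps, tp: tf)
pTP = prob(lambda tf, bb, tfs, tps, tp: tp)
pJoint = prob(lambda tf, bb, tfs, tps, tp: tf and tp)

S_pre = math.log((pTF * pTP) / (pTF * pTP))    # eq. (31): independence before the reduction
S_post = math.log(pJoint / (pTF * pTP))        # eq. (32)
print(S_pre, S_post)   # 0.0 and a positive value, since pFs > qFs and pPs > qPs
```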

4.2 Olsson’s Measure

The same theorem appears in the comparison made with Olsson's measure. Before $T_P$ is reduced to $T_F$, one has to maintain the independence between their variables. Thus, the coherence measure formulated by Olsson looks like the following:

(33) $\mathfrak{O}(\mathcal{S}) = \dfrac{P_1(T_F \wedge T_P)}{P_1(T_F \vee T_P)} = \dfrac{P_1(T_F) \times P_1(T_P)}{P_1(T_F) + P_1(T_P) - P_1(T_F) \times P_1(T_P)}$.

Once the reduction happens, $\mathfrak{O}$ for $\mathcal{S}'$ is:

(34) $\mathfrak{O}(\mathcal{S}') = \dfrac{P_3(T_F \wedge T_P)}{P_3(T_F) + P_3(T_P) - P_3(T_F \wedge T_P)}$.

Here, one would need to fix the prior probability $P_1(T_P)$, i.e. $P_1(T_P) = P_3(T_P)$, to meaningfully compare the two measures. Then, the sufficient and necessary condition for $\mathcal{S}' \succeq \mathcal{S}$ to hold is that the denominator of $\mathfrak{O}(\mathcal{S}')$ is less than or equal to the denominator of $\mathfrak{O}(\mathcal{S})$, because the numerator of $\mathfrak{O}(\mathcal{S}')$ is greater than the numerator of $\mathfrak{O}(\mathcal{S})$.[30] Once the values are substituted in the coherence measure $\mathfrak{O}$,[31] the following theorem is obtained:

Theorem 4.2

$\mathfrak{O}(\mathcal{S}') = \mathfrak{O}(\mathcal{S})$ iff $p_F^* = q_F^*$ or $p_P^* = q_P^*$. $\mathfrak{O}(\mathcal{S}') > \mathfrak{O}(\mathcal{S})$ iff $p_F^* > q_F^*$ and $p_P^* > q_P^*$.

This is the same condition one finds with Schupbach's measure. The higher the difference $P_3(T_F^*|T_F) - P_3(T_F^*|\neg T_F)$ and the difference $P_3(T_P|T_P^*) - P_3(T_P|\neg T_P^*)$, the higher the degree of coherence.
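The analogous check for Olsson's measure (my own sketch, with the same arbitrary values as above) uses the closed-form marginals of Tešic's network rather than full enumeration:

```python
# Olsson's measure before and after the reduction (Theorem 4.2); illustrative values only.
t_F, b, a = 0.6, 0.7, 0.4
pFs, qFs = 0.9, 0.3            # P_3(T_F* | T_F), P_3(T_F* | ~T_F)
pPs, qPs = 0.85, 0.25          # P_3(T_P | T_P*), P_3(T_P | ~T_P*)

def p_TP_given(tf):
    """P_3(T_P | T_F = tf), marginalising over B, T_F*, and T_P* (Figure 3)."""
    p_tfs = pFs if tf else qFs                            # P_3(T_F* | T_F = tf)
    p_tps = p_tfs * (b + (1 - b) * a) + (1 - p_tfs) * a   # P_3(T_P* | T_F = tf)
    return p_tps * pPs + (1 - p_tps) * qPs

p_TP = t_F * p_TP_given(True) + (1 - t_F) * p_TP_given(False)
p_joint = t_F * p_TP_given(True)                          # P_3(T_F & T_P)

O_pre = (t_F * p_TP) / (t_F + p_TP - t_F * p_TP)          # eq. (33), with P_1(T_P) = P_3(T_P)
O_post = p_joint / (t_F + p_TP - p_joint)                 # eq. (34)
print(O_pre, O_post)   # O_post > O_pre, since pFs > qFs and pPs > qPs
```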

4.3 Bovens and Hartmann’s Measure

To evaluate the degree of coherence pre- and post-reduction in light of the analysis of Bovens and Hartmann, the reliability measure they formulate should not play any role.[32] In fact, $\mathfrak{B}$ constructs a quasi-ordering relation over the set $\{\mathcal{S}, \mathcal{S}'\}$, and this binary relation is formally independent of the reliability measure. Reconsider the two sets $\mathcal{S}$ and $\mathcal{S}'$, which have the same size, i.e. 2. Recall that while $P_1$ is the joint probability distribution for the pre-reduction propositions $T_F$ and $T_P$, $P_3$ is the joint probability distribution over the post-reduction propositions $T_F$ and $T_P$. By calculating the weight vectors $\langle \xi_0, \xi_1 \rangle$ for $P_1$ and $\langle \xi_0', \xi_1' \rangle$ for $P_3$, the following difference function can be constructed:

(35) $f_r(\mathcal{S}, \mathcal{S}') = \mathfrak{B}(\mathcal{S}') - \mathfrak{B}(\mathcal{S})$.

The relation which induces a quasi-ordering over the set of the two information sets S and S is then defined as:

(36) For $\mathcal{S}, \mathcal{S}'$: $\mathcal{S}' \succeq \mathcal{S}$ iff $f_r(\mathcal{S}, \mathcal{S}') \geq 0$ for all values of $r \in (0, 1)$.

Finally, to determine whether $f_r \geq 0$, one needs to assess the conditions under which the sign of $f_r$ is actually non-negative for all values of $r \in (0, 1)$. For this reason, Bovens and Hartmann calculate the following condition:

  1. $\xi_0 \leq \xi_0'$ and $\xi_1 \geq \xi_1'$, or

  2. $\xi_0' \leq \xi_0$ and $\dfrac{\xi_1'}{\xi_1} \leq \dfrac{\xi_0'}{\xi_0}$.

This condition is necessary and sufficient for $\mathcal{S}' \succeq \mathcal{S}$ to hold. First, consider the first condition. $\xi_0 \leq \xi_0'$ corresponds to what has been shown above: $P_1(T_F) \times P_1(T_P) \leq P_3(T_F \wedge T_P)$. This is our usual condition, which reports:

(37) $\bar{a}\, b\, t_F\, \bar{t}_F\, (p_F^* - q_F^*)(p_P^* - q_P^*) > 0$.

Then, $\xi_1 \geq \xi_1'$ means that:

(38) $P_1(T_F \wedge \neg T_P) + P_1(\neg T_F \wedge T_P) \geq P_3(T_F \wedge \neg T_P) + P_3(\neg T_F \wedge T_P)$.

Once P 1 ( T P ) is fixed and made equal to P 3 ( T P ) (for details, see Appendix C), a similar theorem to the ones seen above follows:

Theorem 4.3

$\mathfrak{B}(\mathcal{S}') = \mathfrak{B}(\mathcal{S})$ iff $p_F^* = q_F^*$ or $p_P^* = q_P^*$. $\mathfrak{B}(\mathcal{S}') > \mathfrak{B}(\mathcal{S})$ iff $p_F^* > q_F^*$ and $p_P^* > q_P^*$.

These are the same conditions one has with Olsson's and Schupbach's measures. The second part of the condition mentioned by Bovens and Hartmann is ruled out because it has been shown earlier that the probability of the conjunction of the two propositions $T_F$ and $T_P$ is higher after the reduction than before. Thus, the second part of Bovens and Hartmann's condition does not hold.
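The same closed-form marginals can be used to check Bovens and Hartmann's condition numerically (my own sketch, arbitrary values): eq. (37) gives exactly $\xi_0' - \xi_0$, the first condition holds, and $f_r \geq 0$ across a grid of reliability values.

```python
# Bovens and Hartmann's comparison of S and S' (Theorem 4.3); illustrative values only.
t_F, b, a = 0.6, 0.7, 0.4
pFs, qFs = 0.9, 0.3
pPs, qPs = 0.85, 0.25

def p_TP_given(tf):
    p_tfs = pFs if tf else qFs
    p_tps = p_tfs * (b + (1 - b) * a) + (1 - p_tfs) * a
    return p_tps * pPs + (1 - p_tps) * qPs

p_TP = t_F * p_TP_given(True) + (1 - t_F) * p_TP_given(False)
joint_post = t_F * p_TP_given(True)            # P_3(T_F & T_P)
joint_pre = t_F * p_TP                         # P_1(T_F) P_1(T_P), with P_1(T_P) = P_3(T_P)

# Weight vectors <xi_0, xi_1, xi_2> before and after the reduction.
xi_pre = [joint_pre, t_F + p_TP - 2 * joint_pre, 1 - t_F - p_TP + joint_pre]
xi_post = [joint_post, t_F + p_TP - 2 * joint_post, 1 - t_F - p_TP + joint_post]

# Eq. (37): the covariance term equals xi_0' - xi_0 and is positive when pFs > qFs, pPs > qPs.
print(joint_post - joint_pre, (1 - a) * b * t_F * (1 - t_F) * (pFs - qFs) * (pPs - qPs))
print(xi_pre[0] <= xi_post[0], xi_pre[1] >= xi_post[1])   # first part of the B&H condition

def bh(xi, r):
    return (xi[0] + (1 - xi[0]) * (1 - r) ** 2) / sum(xi[i] * (1 - r) ** i for i in range(3))

print(all(bh(xi_post, r / 100) >= bh(xi_pre, r / 100) for r in range(1, 100)))   # f_r >= 0
```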

5 Coherence and Confirmation

The conditions upon which the coherence measures report that $\mathcal{S}'$ is more coherent than $\mathcal{S}$ are the same reported by the confirmation measure used by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011), which says that evidence confirming $T_P$ also confirms $T_F$ and vice versa. This raises an important issue about the nature of coherence measures:[33] do the coherence measures employed here simply track the positive flow of information in the model of the intertheoretic reduction designed by DFH and Tešic? To answer this question, I now consider a couple of numerical examples for Schupbach's and Olsson's coherence measures. These examples are aimed at understanding not only the relationship between confirmation and coherence, but also the different shapes of the functions representing the two coherence measures. In these examples, I assume that there is a positive flow of confirmation from $T_F$ to $T_P$, that is, $p_F^* > q_F^*$ and $p_P^* > q_P^*$.

The first two examples show the different degrees of coherence of $\mathcal{S}'$ according to Schupbach's and Olsson's coherence measures as a function of the amount of confirmation between $T_F$ and $T_F^*$. The examples have $p_F^*$ and $q_F^*$ set as variables and $t_F$, a, b, $p_P^*$, and $q_P^*$ as parameters. The value assigned to $t_F$, a, b and $p_P^*$ is 0.5, while $q_P^*$ has 0.1.[34] Interestingly, both graphs reveal that the degree of coherence of the two theories after the reduction and the amount of confirmation between $T_F$ and $T_F^*$ are positively related (Figure 4).
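A sketch of the kind of computation behind these plots (mine; it only approximates the figures, whose exact plotting code is not given in the article): with the parameters fixed as above and $q_F^*$ held at 0.1, sweeping $p_F^*$ shows both coherence measures growing with the confirmation $P_3(T_F^*|T_F) - P_3(T_F^*)$.

```python
# Coherence of S' as a function of the confirmation between T_F and T_F*; illustrative sketch.
import math

t_F, a, b = 0.5, 0.5, 0.5
pPs, qPs = 0.5, 0.1
qFs = 0.1                      # keep q_F* fixed and sweep p_F* (the red lines fix p_F* instead)

def coherences(pFs):
    p_tfs = t_F * pFs + (1 - t_F) * qFs                   # P_3(T_F*)
    def p_TP_given(tf):
        ptfs = pFs if tf else qFs
        ptps = ptfs * (b + (1 - b) * a) + (1 - ptfs) * a
        return ptps * pPs + (1 - ptps) * qPs
    p_TP = t_F * p_TP_given(True) + (1 - t_F) * p_TP_given(False)
    joint = t_F * p_TP_given(True)
    confirmation = pFs - p_tfs                            # P_3(T_F* | T_F) - P_3(T_F*)
    olsson = joint / (t_F + p_TP - joint)
    schupbach = math.log(joint / (t_F * p_TP))
    return confirmation, olsson, schupbach

for pFs in [0.2, 0.4, 0.6, 0.8, 0.99]:
    print(coherences(pFs))    # both coherence values grow with the amount of confirmation
```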

Figure 4: The x-axis and the y-axis in the two pictures are respectively the amount of confirmation between $T_F$ and $T_F^*$ (i.e. $P_3(T_F^*|T_F) - P_3(T_F^*)$) and the degree of coherence of $\mathcal{S}'$ (in the first graph, $\mathfrak{O}(\mathcal{S}')$; in the second, $\mathfrak{S}(\mathcal{S}')$). The variables $p_F^*$ and $q_F^*$ take any value from 0.01 to 0.99. When the difference $p_F^* - q_F^*$ tends to 1, the area shown in the graphs converges towards a single value, since only the two values $p_F^* = 1$ and $q_F^* = 0$ can give $p_F^* - q_F^* = 1$. The highlighted red lines are for $p_F^* = 0.5$ and they show the positive correlation between coherence and confirmation.

The other two examples in Figure 5 show, instead, the different degrees of coherence of $\mathcal{S}'$ according to Schupbach's and Olsson's coherence measures as a function of the amount of confirmation between $T_P^*$ and $T_P$. Here, I set $p_P^*$ and $q_P^*$ as variables and $t_F$, a, b, $p_F^*$, and $q_F^*$ as parameters. Like in the previous examples, the value assigned to $t_F$, a, b and $p_F^*$ is 0.5, while $q_F^*$ has 0.1. As opposed to the two graphs shown beforehand, the graphs display a negative correlation between the degree of coherence of the two theories after the reduction and the amount of confirmation between $T_P^*$ and $T_P$: the more $T_F$ and $T_P$ cohere, the less $T_P^*$ confirms $T_P$.

Figure 5: The x-axis and the y-axis in the two pictures are respectively the amount of confirmation between $T_P^*$ and $T_P$ (i.e. $P_3(T_P|T_P^*) - P_3(T_P)$) and the degree of coherence of $\mathcal{S}'$ (in the first graph, $\mathfrak{O}(\mathcal{S}')$; in the second, $\mathfrak{S}(\mathcal{S}')$). The variables $p_P^*$ and $q_P^*$ take any value from 0.01 to 0.99. Also in these examples, when the difference $p_P^* - q_P^*$ tends to 1, the area shown in the graphs converges towards a single value, since only the two values $p_P^* = 1$ and $q_P^* = 0$ can give $p_P^* - q_P^* = 1$. The highlighted red lines are for $p_P^*$ set as 0.5 and they show the negative correlation between coherence and confirmation.

Finally, the graphs in Figure 6 reveal the positive correlation between the prior probability of the bridge law and the degree of coherence of $\mathcal{S}'$. As expected, a higher assignment of b contributes to a higher degree of coherence. Bridge laws, however, do not provide a major contribution to the coherence of the set containing the two theories after the reduction.

Figure 6: The x-axis and the y-axis in the two pictures are respectively the different probability assignments for b and the degree of coherence of $\mathcal{S}'$ (in the first graph, $\mathfrak{O}(\mathcal{S}')$; in the second, $\mathfrak{S}(\mathcal{S}')$). The variable b takes any value from 0.01 to 0.99. Here, $t_F$, a, $p_F^*$, $p_P^*$, $q_F^*$, and $q_P^*$ are set as parameters. Like in the previous examples, the value assigned to $t_F$, a, $p_F^*$, and $p_P^*$ is 0.5, while $q_F^*$ and $q_P^*$ have 0.1.

6 Conclusions

In this article, I have shown that, according to three coherence measures and under the assumption elaborated by Bovens and Hartmann,[35] the degree of coherence between two self-consistent and well-confirmed theories with largely overlapping domains of application which are involved in a reduction relation à la GNS is higher than the degree of coherence of the same theories which, before one is reduced to the other, sketch a contradictory view of the world. To do this, I first presented the classic putative example where the GNS can be applied to describe the main relations occurring in a scientific reduction: the reduction of TD to SM. Then, I looked at two attempts at modelling the GNS in probabilistic terms. Specifically, DFH and Tešic offer two Bayesian analyses, which mostly disagree on the probabilistic account of bridge laws, a crucial relation in the GNS. In fact, Tešic undermines the view shared by DFH that the propositional variables $T_P^*$ and $T_F^*$ are interchangeable. This view faces three problems: i) it proposes an incorrect entailment between the laws derived from the fundamental theory through the bridge laws and the fundamental theory itself; ii) it makes the reduction symmetric; iii) it omits the role of auxiliary assumptions and boundary conditions in deriving laws strongly analogous to the ones involved in the reduction. The necessity of explicitly assuming the bridge law B in order to overcome these three challenges leads Tešic to formulate a new probability distribution $P_3$ and therefore new probability assignments, such as $P_3(T_P^*|T_F^*, B) = 1$, $P_3(T_P^*|T_F^*, \neg B) = P_3(T_P^*|\neg T_F^*, B) = P_3(T_P^*|\neg T_F^*, \neg B) = a$, and $P_3(\neg T_P^*|T_F^*, B) = 0$. This new purported probabilistic representation of the GNS has been used to assess the degree of coherence between the two theories in question, as the three coherence measures proposed are expressed in probabilistic terms. I briefly mentioned which epistemic notion of coherence these measures try to grasp. Schupbach's measure understands coherence as mutual support amongst the propositions included in an information set. Olsson's measure conceives of coherence as an agreement between the propositions involved. Bovens and Hartmann, instead, focus on the role that coherence, as a property of an information set, should play whenever one wants to assess whether or not the set is true. Coherence, according to this framework, boosts one's confidence that the set is true, ceteris paribus, once the information is received from independent and partially reliable sources. Due to the limited breadth of the article, I did not explain in greater detail the counter-intuitive results these measures may offer in certain scenarios. Rather, I tried to heuristically show that, in the case of the GNS constructed according to Tešic's suggestions, they do not report different outcomes. In the fourth section, I showed that they actually outline stable results. The three theorems point out that a set containing $T_F$ and $T_P$ has a higher degree of coherence after $T_P$ is reduced to $T_F$ than before the reduction if and only if $P_3(T_F^*|T_F) > P_3(T_F^*|\neg T_F)$ and $P_3(T_P|T_P^*) > P_3(T_P|\neg T_P^*)$. These two conditions are likely met by the theories involved in the GNS. Thus, under the assumptions of the Bayesian analysis provided by Tešic and the one provided by Bovens and Hartmann, the coherence measures seem to report that the GNS makes the two theories involved cohere with each other in light of their positive dependency.
The two inequalities highlighted in the theorems have to hold in order for the two theories involved in the reductive relation to be considered coherent. In the last section, I finally introduced six numerical examples aimed at better understanding the relation between coherence and confirmation, as well as their respective measures, in the context of intertheoretic reduction as designed by DFH and Tešic. Interestingly, while bridge laws and the confirmation flow between T_F and T_F* are positively related to the degree of coherence of S, the confirmation flow between T_P* and T_P is not.

Further projects can still be proposed at the intersection of coherence and the GNS. First of all, regarding Tešic's probability assignment for the bridge law and, in general, regarding every probability assignment in the Bayesian network (see Figure 3), it would be interesting to use tools borrowed from imprecise probability (IP) to better represent the credences scientists might have towards prior probabilities and likelihoods. Scientists, in fact, usually disagree about the values assigned to prior probabilities and likelihoods. IP is a generalisation of probability theory which is applicable to cases where it is hard to identify a unique probability distribution, because evidence might be scarce, vague, or conflicting. The goal of IP is thereby to represent the available knowledge more accurately, instead of focusing on a single precise value (a toy sketch follows below). Would IP be able to represent the situation before and after the reduction and assess whether or not the two post-reduction theories would cohere with each other? Second, the results I showed above should be checked against further coherence and confirmation measures, and further discussion should focus on their relation. The three coherence measures might be extensionally equivalent with respect to the GNS because, even though their frameworks and their main epistemic notions of coherence differ, they all confirmed the same hypothesis and provided stable results. What would happen with other coherence measures? Would they report an increase or a decrease in coherence if considered as a function of the amount of confirmation? Third, the coherence relation between evidence and theories requires a proper investigation. One might want to start this investigation from the work of Meijs (2005) and then apply it to the case of the GNS. Will the evidence of a theory be coherent with the other theory, and vice versa? Fourth and finally, the third question opens up the debate between foundationalism and coherentism, and the question of the role coherence should play in assessing a scientific theory. I worked under, and showed results with respect to, the assumption made by Bovens and Hartmann. Some philosophers have actually been skeptical of their suggestions. This means that not only new measures, but also other assumptions regarding the sources reporting the crucial information (i.e. the evidence), might be employed to construct novel coherence measures. Would considerations on coherence still play a role in deciding whether to accept a scientific theory if one dropped the conception that comparing the degrees of coherence of two (or more) information sets should assume that the sources reporting the propositions contained in the sets are equally and partially reliable, and independent? These four questions highlight the fact that philosophers of science and formal epistemologists should work closely with natural scientists.
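As a toy illustration of the IP idea mentioned above (my own sketch, not part of the article's formal apparatus), one can let the prior t_F range over an interval rather than fixing a single value and report the induced lower and upper bounds on P_3(T_P), reusing the closed form (40) from the appendix; all parameter values here are illustrative assumptions.

```python
# Toy illustration of imprecise priors: t_F is only constrained to an interval,
# and we report the induced bounds on P_3(T_P). Closed form (40) from the appendix
# is reused; all parameter values are illustrative assumptions.
import numpy as np

def p3_TP(tF, a, b, pF, qF, pP, qP):
    """Post-reduction marginal P_3(T_P), eq. (40)."""
    return (pP * (b*pF*tF + b*qF*(1-tF) + a*(1-b) + a*b*(1-pF)*tF + a*b*(1-qF)*(1-tF))
            + qP * ((1-a)*(1-b) + (1-a)*b*(1-pF)*tF + (1-a)*b*(1-qF)*(1-tF)))

a, b, pF, qF, pP, qP = 0.5, 0.7, 0.5, 0.1, 0.5, 0.1
tF_grid = np.linspace(0.3, 0.7, 101)          # credal set: t_F anywhere in [0.3, 0.7]
vals = p3_TP(tF_grid, a, b, pF, qF, pP, qP)
print(f"P_3(T_P) lies in [{vals.min():.3f}, {vals.max():.3f}]")   # lower/upper probability
```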


Corresponding author: Andrea Giuseppe Ragno, Munich Centre for Mathematical Philosophy, LMU, Ludwigstraße 31, 80539 Munich, Germany, E-mail:

Appendices

In what follows, I shall refer to Neapolitan (2003) in order to compute the probabilities of the values of random variables in Bayesian networks.
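As a sanity check on the hand calculations below, the following sketch (mine, in Python) computes the same quantities by brute-force enumeration over the post-reduction network assumed throughout, reconstructed from the probability assignments above: T_F feeds T_F*, T_F* together with B feeds T_P*, and T_P* feeds T_P. Parameter names mirror the appendix; the example values are illustrative.

```python
# Brute-force enumeration over the assumed post-reduction network, following the
# chain-rule factorisation used in the appendix. Example values are illustrative.
from itertools import product

def p3_joint(tF, a, b, pF, qF, pP, qP):
    """Return the joint P_3(T_F = x, T_P = y) by summing out T_F*, B, and T_P*."""
    dist = {}
    for TF, B, TFs, TPs, TP in product([True, False], repeat=5):
        p = (tF if TF else 1 - tF) * (b if B else 1 - b)   # P_3(T_F) * P_3(B)
        pf = pF if TF else qF                              # P_3(T_F* | T_F)
        p *= pf if TFs else 1 - pf
        pts = 1.0 if (TFs and B) else a                    # P_3(T_P* | T_F*, B)
        p *= pts if TPs else 1 - pts
        pt = pP if TPs else qP                             # P_3(T_P | T_P*)
        p *= pt if TP else 1 - pt
        dist[(TF, TP)] = dist.get((TF, TP), 0.0) + p
    return dist

# Compare the enumeration with the closed forms (39) and (40) below.
tF, a, b, pF, qF, pP, qP = 0.5, 0.5, 0.7, 0.5, 0.1, 0.5, 0.1
d = p3_joint(tF, a, b, pF, qF, pP, qP)
eq39 = tF * (pP*(b*pF + a*(1-b) + a*b*(1-pF)) + qP*((1-a)*(1-b) + (1-a)*b*(1-pF)))
eq40 = (pP*(b*pF*tF + b*qF*(1-tF) + a*(1-b) + a*b*(1-pF)*tF + a*b*(1-qF)*(1-tF))
        + qP*((1-a)*(1-b) + (1-a)*b*(1-pF)*tF + (1-a)*b*(1-qF)*(1-tF)))
print(d[(True, True)], eq39)                      # P_3(T_F & T_P): enumeration vs eq. (39)
print(d[(True, True)] + d[(False, True)], eq40)   # P_3(T_P):       enumeration vs eq. (40)
```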

A Theorem 4.1

(39) $P_3(T_F \wedge T_P) = P_3(T_F) \sum_{T_P^*, T_F^*, B} P_3(T_P \mid T_P^*)\, P_3(T_P^* \mid T_F^*, B)\, P_3(B)\, P_3(T_F^* \mid T_F) = t_F \left[ p_{P^*} \left( b\, p_{F^*} + a \bar{b} + a\, b\, \bar{p}_{F^*} \right) + q_{P^*} \left( \bar{a} \bar{b} + \bar{a}\, b\, \bar{p}_{F^*} \right) \right]$

(40) $P_3(T_P) = \sum_{T_P^*, T_F^*, B, T_F} P_3(T_P \mid T_P^*)\, P_3(T_P^* \mid T_F^*, B)\, P_3(B)\, P_3(T_F^* \mid T_F)\, P_3(T_F) = p_{P^*} \left( b\, p_{F^*} t_F + b\, q_{F^*} \bar{t}_F + a \bar{b} + a\, b\, \bar{p}_{F^*} t_F + a\, b\, \bar{q}_{F^*} \bar{t}_F \right) + q_{P^*} \left( \bar{a} \bar{b} + \bar{a}\, b\, \bar{p}_{F^*} t_F + \bar{a}\, b\, \bar{q}_{F^*} \bar{t}_F \right)$

(41) $P_3(T_F \wedge T_P) > P_3(T_F) \times P_3(T_P)$
$\iff t_F \left[ p_{P^*} ( b\, p_{F^*} + a \bar{b} + a\, b\, \bar{p}_{F^*} ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} ) \right] > t_F \left[ p_{P^*} ( b\, p_{F^*} t_F + b\, q_{F^*} \bar{t}_F + a\bar{b} + a\, b\, \bar{p}_{F^*} t_F + a\, b\, \bar{q}_{F^*} \bar{t}_F ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} t_F + \bar{a}\, b\, \bar{q}_{F^*} \bar{t}_F ) \right]$
$\iff \bar{a}\, b\, t_F\, \bar{t}_F\, (p_{F^*} - q_{F^*})(p_{P^*} - q_{P^*}) > 0$
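If one trusts a computer algebra system more than hand algebra, the factorisation in (41) can be checked symbolically. The sketch below is mine and assumes sympy is available and that the network has the structure described above.

```python
# Symbolic check that P_3(T_F & T_P) - P_3(T_F) P_3(T_P) equals
# (1-a) b t_F (1-t_F) (p_F* - q_F*) (p_P* - q_P*), i.e. the right-hand side of (41).
from itertools import product
import sympy as sp

tF, a, b, pF, qF, pP, qP = sp.symbols('t_F a b p_Fstar q_Fstar p_Pstar q_Pstar', positive=True)

dist = {}
for TF, B, TFs, TPs, TP in product([1, 0], repeat=5):
    term = (tF if TF else 1 - tF) * (b if B else 1 - b)
    pf = pF if TF else qF                       # P_3(T_F* | T_F)
    term *= pf if TFs else 1 - pf
    pts = sp.Integer(1) if (TFs and B) else a   # P_3(T_P* | T_F*, B)
    term *= pts if TPs else 1 - pts
    pt = pP if TPs else qP                      # P_3(T_P | T_P*)
    term *= pt if TP else 1 - pt
    dist[(TF, TP)] = dist.get((TF, TP), 0) + term

P_TF = dist[(1, 1)] + dist[(1, 0)]
P_TP = dist[(1, 1)] + dist[(0, 1)]
covariance = dist[(1, 1)] - P_TF * P_TP
target = (1 - a) * b * tF * (1 - tF) * (pF - qF) * (pP - qP)
print(sp.expand(covariance - target) == 0)      # expected output: True
```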

B Theorem 4.2

(42) $P_1(T_F \wedge T_P) = t_F\, t_P = t_F \left[ p_{P^*} ( b\, p_{F^*} t_F + b\, q_{F^*} \bar{t}_F + a\bar{b} + a\, b\, \bar{p}_{F^*} t_F + a\, b\, \bar{q}_{F^*} \bar{t}_F ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} t_F + \bar{a}\, b\, \bar{q}_{F^*} \bar{t}_F ) \right]$ (with $P_3(T_P)$ in place of $P_1(T_P)$ and $P_3(T_F)$ in place of $P_1(T_F)$)

(43) $P_3(T_F) + P_3(T_P) - P_3(T_F \wedge T_P) \leq P_1(T_F) + P_1(T_P) - P_1(T_F \wedge T_P)$
$\iff P_3(T_F \wedge T_P) \geq P_1(T_F \wedge T_P)$
$\iff P_3(T_F \wedge T_P) \geq P_3(T_F) \times P_3(T_P)$
$\iff t_F \left[ p_{P^*} ( b\, p_{F^*} + a \bar{b} + a\, b\, \bar{p}_{F^*} ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} ) \right] \geq t_F \left[ p_{P^*} ( b\, p_{F^*} t_F + b\, q_{F^*} \bar{t}_F + a\bar{b} + a\, b\, \bar{p}_{F^*} t_F + a\, b\, \bar{q}_{F^*} \bar{t}_F ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} t_F + \bar{a}\, b\, \bar{q}_{F^*} \bar{t}_F ) \right]$
$\iff \bar{a}\, b\, t_F\, \bar{t}_F\, (p_{F^*} - q_{F^*})(p_{P^*} - q_{P^*}) \geq 0$
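The chain of equivalences in (43) can also be spot-checked numerically. The sketch below is mine: it reuses the closed forms (39) and (40), fixes t_P to the post-reduction marginal as in (42), and samples random parameter values.

```python
# Numerical spot-check of (43): with t_P fixed to P_3(T_P) as in (42), Olsson's
# measure after reduction exceeds the pre-reduction one exactly when the
# covariance term of (41) is positive. Random parameter draws are illustrative.
import random

def closed_forms(tF, a, b, pF, qF, pP, qP):
    joint = tF * (pP*(b*pF + a*(1-b) + a*b*(1-pF))
                  + qP*((1-a)*(1-b) + (1-a)*b*(1-pF)))                        # eq. (39)
    tP = (pP*(b*pF*tF + b*qF*(1-tF) + a*(1-b) + a*b*(1-pF)*tF + a*b*(1-qF)*(1-tF))
          + qP*((1-a)*(1-b) + (1-a)*b*(1-pF)*tF + (1-a)*b*(1-qF)*(1-tF)))     # eq. (40)
    return joint, tP

random.seed(0)
for _ in range(10_000):
    tF, a, b, pF, qF, pP, qP = (random.random() for _ in range(7))
    joint, tP = closed_forms(tF, a, b, pF, qF, pP, qP)
    O_pre = (tF * tP) / (tF + tP - tF * tP)      # Olsson's measure, independent theories
    O_post = joint / (tF + tP - joint)           # Olsson's measure after reduction
    cov = joint - tF * tP                        # = (1-a) b t_F (1-t_F)(p_F*-q_F*)(p_P*-q_P*)
    if abs(cov) > 1e-9:                          # ignore numerically ambiguous cases
        assert (O_post > O_pre) == (cov > 0)
print("Olsson comparison agrees with the sign condition in (43) on all sampled cases")
```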

C Theorem 4.3

Recall Bovens and Hartmann’s condition. In order to assess ξ_1 and ξ_1′, one needs the sum of the joint probabilities of one theory and the negation of the other for the two information sets before and after the reduction. First, let me show ξ_1, which is the pre-reduction parameter:

(44) $\xi_1 = P_1(T_F \wedge \neg T_P) + P_1(\neg T_F \wedge T_P) = t_F (1 - t_P) + t_P\, \bar{t}_F$,

which holds in virtue of the probabilistic independence of T_F and T_P. Second, I compute ξ_1′, the post-reduction parameter:

(45) $\xi_1' = P_3(T_F \wedge \neg T_P) + P_3(\neg T_F \wedge T_P)$

Then, I substitute the following values in this equation.

(46) $P_3(T_F \wedge \neg T_P) = P_3(T_F) \sum_{T_P^*, T_F^*, B} P_3(\neg T_P \mid T_P^*)\, P_3(T_P^* \mid T_F^*, B)\, P_3(B)\, P_3(T_F^* \mid T_F) = t_F \left[ \bar{p}_{P^*} ( b\, p_{F^*} + a \bar{b} + a\, b\, \bar{p}_{F^*} ) + \bar{q}_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{p}_{F^*} ) \right]$

(47) $P_3(\neg T_F \wedge T_P) = P_3(\neg T_F) \sum_{T_P^*, T_F^*, B} P_3(T_P \mid T_P^*)\, P_3(T_P^* \mid T_F^*, B)\, P_3(B)\, P_3(T_F^* \mid \neg T_F) = \bar{t}_F \left[ p_{P^*} ( b\, q_{F^*} + a \bar{b} + a\, b\, \bar{q}_{F^*} ) + q_{P^*} ( \bar{a}\bar{b} + \bar{a}\, b\, \bar{q}_{F^*} ) \right]$

(48) $P_3(T_F \wedge \neg T_P) + P_3(\neg T_F \wedge T_P) \leq P_1(T_F \wedge \neg T_P) + P_1(\neg T_F \wedge T_P)$
$\iff P_3(T_F \wedge \neg T_P) + P_3(\neg T_F \wedge T_P) \leq P_3(T_F) \times P_3(\neg T_P) + P_3(\neg T_F) \times P_3(T_P)$ (with $P_3(T_F)$ in place of $P_1(T_F)$ and $P_3(T_P)$ in place of $P_1(T_P)$)
$\iff \bar{a}\, b\, (p_{F^*} - q_{F^*})(p_{P^*} - q_{P^*}) \geq 0$
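Analogously, the comparison of ξ_1 and ξ_1′ can be spot-checked numerically. Again, this is my own sketch, using the closed forms (40), (44), (46), and (47) with random illustrative parameter values.

```python
# Numerical spot-check of (48): the post-reduction probability that exactly one of
# T_F, T_P is true (xi_1') is at most the pre-reduction one (xi_1) exactly when
# (1-a) b (p_F* - q_F*) (p_P* - q_P*) >= 0. Random parameter draws are illustrative.
import random

def xi_pair(tF, a, b, pF, qF, pP, qP):
    tP = (pP*(b*pF*tF + b*qF*(1-tF) + a*(1-b) + a*b*(1-pF)*tF + a*b*(1-qF)*(1-tF))
          + qP*((1-a)*(1-b) + (1-a)*b*(1-pF)*tF + (1-a)*b*(1-qF)*(1-tF)))     # eq. (40)
    xi_pre = tF*(1 - tP) + tP*(1 - tF)                                         # eq. (44)
    tf_not_tp = tF * ((1-pP)*(b*pF + a*(1-b) + a*b*(1-pF))                     # eq. (46)
                      + (1-qP)*((1-a)*(1-b) + (1-a)*b*(1-pF)))
    not_tf_tp = (1-tF) * (pP*(b*qF + a*(1-b) + a*b*(1-qF))                     # eq. (47)
                          + qP*((1-a)*(1-b) + (1-a)*b*(1-qF)))
    return xi_pre, tf_not_tp + not_tf_tp

random.seed(1)
for _ in range(10_000):
    tF, a, b, pF, qF, pP, qP = (random.random() for _ in range(7))
    xi_pre, xi_post = xi_pair(tF, a, b, pF, qF, pP, qP)
    cov = (1-a) * b * tF * (1-tF) * (pF-qF) * (pP-qP)   # covariance term from (41)
    if abs(cov) > 1e-9:                                  # ignore numerically ambiguous cases
        assert (xi_post < xi_pre) == (cov > 0)
print("xi comparison agrees with the sign condition in (48) on all sampled cases")
```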

References

Bovens, L., and S. Hartmann. 2003. “Solving the Riddle of Coherence.” Mind 112 (448): 601–33, https://doi.org/10.1093/mind/112.448.601.

Dietrich, F., and L. Moretti. 2005. “On Coherent Sets and the Transmission of Confirmation.” Philosophy of Science 72 (3): 403–24, https://doi.org/10.1086/498471.

Dizadji-Bahmani, F., R. Frigg, and S. Hartmann. 2010. “Who’s Afraid of Nagelian Reduction?” Erkenntnis 73 (3): 393–412, https://doi.org/10.1007/s10670-010-9239-x.

Dizadji-Bahmani, F., R. Frigg, and S. Hartmann. 2011. “Confirmation and Reduction: A Bayesian Account.” Synthese 179 (2): 321–38, https://doi.org/10.1007/s11229-010-9775-6.

Feynman, R. P., R. B. Leighton, and M. Sands. 1964. The Feynman Lectures on Physics. Reading: Addison-Wesley.

Huemer, M. 2007. “Weak Bayesian Coherentism.” Synthese 157 (3): 337–46, https://doi.org/10.1007/s11229-006-9059-3.

Koscholke, J., M. Schippers, and A. Stegmann. 2018. “New Hope for Relative Overlap Measures of Coherence.” Mind 128 (512): 1261–84, https://doi.org/10.1093/mind/fzy037.

Meijs, W. 2005. “Probabilistic Measures of Coherence.” PhD thesis. Erasmus University, Rotterdam.

Moretti, L. 2007. “Ways in Which Coherence is Confirmation Conducive.” Synthese 157 (3): 309–19, https://doi.org/10.1007/s11229-006-9057-5.

Moretti, L., and K. Akiba. 2007. “Probabilistic Measures of Coherence and the Problem of Belief Individuation.” Synthese 154: 73–95, https://doi.org/10.1007/s11229-005-0193-0.

Nagel, E. 1961. The Structure of Science. London: Routledge.

Nagel, E. 1974. Teleology Revisited. New York: Columbia University Press.

Neapolitan, R. 2003. Learning Bayesian Networks. Upper Saddle River: Prentice-Hall.

Olsson, E. J. 1999. “Cohering with.” Erkenntnis 50 (2/3): 273–91, https://doi.org/10.1023/A:1005530006938.

Olsson, E. 2017. “Coherentist Theories of Epistemic Justification.” In The Stanford Encyclopedia of Philosophy, Spring 2017 edition, edited by E. N. Zalta. Stanford: Metaphysics Research Lab, Stanford University.

Pauli, W. 1973. Pauli Lectures on Physics: Thermodynamics and the Kinetic Theory of Gases. Cambridge: MIT Press.

Sarkar, S. 2015. “Nagel on Reduction.” Studies in History and Philosophy of Science Part A 53: 43–56, https://doi.org/10.1016/j.shpsa.2015.05.006.

Schaffner, K. F. 1967. “Approaches to Reduction.” Philosophy of Science 34 (2): 137–47, https://doi.org/10.1086/288137.

Schaffner, K. F. 1969. “The Watson–Crick Model and Reductionism.” British Journal for the Philosophy of Science 20 (4): 325–48, https://doi.org/10.1093/bjps/20.4.325.

Schaffner, K. F. 1974. “Reductionism in Biology: Prospects and Problems.” In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 613–32.

Schaffner, K. F. 1977. “Reductionism, Values, and Progress in the Biomedical Sciences.” In Logic, Laws, and Life, edited by R. Colodny, 143–71. Pittsburgh: University of Pittsburgh Press.

Schaffner, K. F. 1993. Discovery and Explanation in Biology and Medicine. Chicago: University of Chicago Press.

Schupbach, J. N. 2011. “New Hope for Shogenji’s Coherence Measure.” British Journal for the Philosophy of Science 62 (1): 125–42, https://doi.org/10.1093/bjps/axq031.

Shogenji, T. 1999. “Is Coherence Truth Conducive?” Analysis 59 (264): 338–45, https://doi.org/10.1093/analys/59.4.338.

Tešic, M. 2019. “Confirmation and the Generalized Nagel–Schaffner Model of Reduction: A Bayesian Analysis.” Synthese 196 (3): 1097–129, https://doi.org/10.1007/s11229-017-1501-1.

Published Online: 2021-12-07

© 2021 Andrea Giuseppe Ragno, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
