Synchronic intertheoretic reductions are an important field of research in science. Arguably, the model best able to represent the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction, a model further developed by Schaffner and nowadays known as the generalized version of the Nagel–Schaffner model (GNS). In their article (2010), Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. For this reason, in the first section, I look at the reduction of thermodynamics to statistical mechanics and use it as an example to describe the GNS. In the second section, I introduce three coherence measures which will then be employed in the comparison. Finally, in the last two sections, I compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.
Synchronic intertheoretic reductions, namely those reductions between pairs of theories whose domains of applicability largely overlap, are an important field of research in science. Arguably, the best model offered in the philosophical literature for tracking the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction (Nagel 1961, 1974), a model which was further developed by Schaffner (1967, 1969, 1974, 1977, 1993) and is nowadays known as the GNS, an acronym for the generalized version of the Nagel–Schaffner model. In short, according to the GNS, reducing a theory T_A to another theory T_B is possible if and only if the laws of T_A are derivable from T_B with the help of bridge laws. Seven classes of criticisms were put forward against this model of reduction. In their articles, Dizadji-Bahmani, Frigg, and Hartmann (henceforth ‘DFH’) respond to these attacks by offering some needed modifications to the GNS (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011). Amongst these clarifications, DFH specify the two main desiderata one might want from the GNS: confirmation and coherence. So, DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. More precisely, evidence confirming one theory also confirms the other theory, and vice versa, given that after the reduction of one to the other they become connected and share their evidence. The purpose of this article is, instead, to compare the different degrees of coherence between the reducing and the reduced theory before and after the reduction through the Bayesian analysis first sketched by DFH and then corrected by Tešic.
For this reason, I will prepare the ground by looking at the classic putative example of a reduction à la GNS: the reduction of thermodynamics (henceforth ‘TD’) to statistical mechanics (henceforth ‘SM’) (Section 2.1). Then, I briefly present the Bayesian analysis of the relation between the reducing and the reduced theory given by DFH (Section 2.2) and corrected by Tešic (Section 2.3): this analysis is crucial for representing the GNS probabilistically. Such a Bayesian representation will indeed be used in the comparison of the two different degrees of coherence between the two theories in question: in fact, the coherence measures which I will take into account for comparing the degrees of coherence are probabilistic. In particular, three coherence measures will be discussed (Section 3); these measures might present counter-intuitive results in certain contexts. The issue here is whether or not they will report different outcomes in the case of the GNS, that is, whether one coherence measure might report that the set of theories after the reduction coheres more than the set of theories prior to the reduction, while another reports the opposite. The goal would be to have all the coherence measures reporting the same result. In the fourth section, I will show that they in fact yield similar results under an assumption formulated by Bovens and Hartmann for their coherence measure (Bovens and Hartmann 2003): when one reasons about coherence, the several sources reporting evidence should be understood as equally and partially reliable, and considered independent, because what is at stake when analysing coherence is the content of the observations being reported. I will therefore compare the degrees of coherence between the reducing and the reduced theory pre- and post-reduction (Section 4).
Under the assumptions of the Bayesian analysis provided by DFH and Tešic, and under the assumption of the coherence measure of Bovens and Hartmann, the coherence measures will provide two conditions which the two theories of the GNS likely meet in light of their conditional dependence, which is in turn due to the reduction of one theory to the other. Finally, in the last section, I present some numerical examples aimed at analysing the relation between coherence measures and confirmation measures in the context of intertheoretic reduction as designed by DFH and Tešic (Section 5).
2 The Generalized Version of the Nagel–Schaffner Model
In this section, I will present the GNS by looking at a putative example of a GNS reduction. Then, I will outline the discussion between DFH’s and Tešic’s Bayesian analyses. The latter does not present knockdown arguments against the former. Rather, it corrects a couple of flaws in DFH’s analysis by reviewing the conceptual character of bridge laws.
2.1 An Example of a GNS Reduction
TD is a branch of physics which describes those phenomena observable in macroscopic systems (e.g. solids, liquids, gases, plasma) and their relation to energy, radiation, and the properties of matter. The behaviour of these entities can be expressed by the four laws of thermodynamics, which make use of macroscopic properties (e.g. pressure, temperature). Yet, such behaviour can also be explained in terms of their microscopic constituents by SM. In fact, based on statistical methods, probability theory, and microscopic physical laws, SM explains the behaviour of macroscopic systems in terms of the dynamical laws governing their microscopic constituents (e.g. molecules, particles). The laws of TD can therefore be expressed in terms of the laws of SM. This hints at our first definition of Nagelian reduction, according to which reducing a theory T_A to another theory T_B is possible if and only if the laws of T_A are derivable from T_B with the help of bridge laws, which are empirical facts linking concepts of the reduced theory to terms of the reducing theory. This, however, is not sufficient to properly describe a successful reduction. Consider an example of reduction between TD and SM, namely the Boyle–Charles law (Dizadji-Bahmani, Frigg, and Hartmann 2010, pp. 395–396), to better understand the role of bridge laws. The Boyle–Charles law states that the temperature T of a gas is directly proportional to the product of the values of its pressure p and the volume V over which it is evenly distributed:
pV = kT, (1)

where k is a constant. This law, together with some specific conditions (i.e. a gas in thermodynamic equilibrium with the surrounding environment and at relatively low pressure), forms the core of the thermal theory of the ideal gas. In SM, there is a corresponding theory for ideal gases: the kinetic theory of the ideal gas. This theory describes the motion of n particles with mass m of a gas spread over the volume of, for example, a vessel according to Newtonian mechanics. The theory includes two assumptions:
the gas should be ideal to the extent that its molecules, which collide elastically, are point particles;
the three components of the velocity (v_x, v_y, v_z) should be evenly distributed (i.e. there is no favoured direction).
Following the definition of pressure in Newtonian physics and the first assumption, the gas hitting a wall of the vessel exerts a pressure:
p = (n m / V) ⟨v_z^2⟩, (2)

where ⟨v_z^2⟩ is the average of the square of v_z, a particle’s velocity in the z-direction with respect to the x–y plane of the wall. After a few further calculations and following the second assumption, the left-hand term of the equation in the Boyle–Charles law can be expressed as:
pV = (2/3) ⟨E_kin⟩, (3)

where ⟨E_kin⟩ is the average kinetic energy of the gas. T can therefore be seen as:

T = 2 ⟨E_kin⟩ / (3k). (4)
This process shows how to derive the Boyle–Charles law from the laws of Newtonian physics. In fact, first, a particular theory, the thermal theory of the ideal gas (here, eq. (3)), was derived by combining Newtonian physics with the two assumptions of the kinetic theory of the ideal gas. Second, eq. (4), which would stand as a bridge law in the GNS, has connected the relevant terms, such as T and ⟨E_kin⟩, and it has yielded a version of the Boyle–Charles law bound to some conditions. Finally, it has been shown that this particular version of the Boyle–Charles law, bound by particular conditions, is strongly analogous to (or even coincides with) the standard version of the Boyle–Charles law. Nowadays, scientists consider the reduction of TD to SM successful. The reduction of TD to SM is considered to be a synchronic intertheoretic reduction, namely a reductive relation between two coexisting theories which deal with different levels of a largely overlapping domain. In this reduction, the concepts of one theory can be expressed in terms of the concepts of the more fundamental theory, and its laws can be derived from the laws of that theory. Accordingly, a correct reduction of TD to SM involves the derivation of the laws of TD from the laws governing the microconstituents of macroscopic systems together with probabilistic assumptions. For this reason, DFH suggest that this reductive relation resembles the GNS, which applies to synchronic intertheoretic reductions.
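The algebra of this derivation can be checked with a quick numerical sketch. All values below are invented for illustration: the point is only that eq. (3), combined with the bridge law eq. (4), recovers the Boyle–Charles form pV = kT.

```python
# Numerical consistency check of the derivation above (all values assumed):
# eq. (3), pV = (2/3)<E_kin>, plus the bridge law eq. (4), T = 2<E_kin>/(3k),
# together recover the Boyle-Charles law, eq. (1): pV = kT.

E_kin = 150.0          # average kinetic energy of the gas in J (assumed)
V = 2.0e-3             # volume in m^3 (assumed)
k = 0.04               # the Boyle-Charles constant for this gas (assumed)

p = (2.0 / 3.0) * E_kin / V     # pressure from eq. (3)
T = 2.0 * E_kin / (3.0 * k)     # temperature via the bridge law, eq. (4)

# The two sides of eq. (1) should agree up to floating-point error
assert abs(p * V - k * T) < 1e-9
print("pV =", p * V, "kT =", k * T)
```

The check is trivial algebraically, but it makes explicit that eq. (4) is the only step connecting the mechanical quantity ⟨E_kin⟩ to the thermodynamic quantity T.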
In the reductive relation, TD is the reduced theory T_P and SM is the reducing one T_F. According to the GNS, T_P corresponds to the set of empirical propositions associated with TD. Similarly, T_F is the set of empirical propositions associated with SM. Here, the empirical propositions of T_P and T_F are the various laws of the theories. As shown in the case of the Boyle–Charles law, the reduction of T_P to T_F would then follow three steps.
Use auxiliary assumptions to help derive a restricted version of each element of T_F. Let T_F* be the set of the restricted versions.
Adopt bridge laws in order to connect the relevant terms which are not shared by the vocabularies of the theories involved. Substituting the terms in T_F* with the terms from T_P shows that the bridge laws yield the set T_P*.
Show that each element of T_P* is strongly analogous to the corresponding element of T_P.
If these conditions are satisfied, it is believed that T_P is reduced to T_F with respect to the GNS.
2.2 The Bayesian Analysis of DFH
Amongst the desiderata of reductions in science, coherence and confirmation are the main ones (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011). Nagel (1961, p. 341) himself sensed that reduction should reconcile two self-consistent and well-confirmed theories whose domains of application (largely) overlap whenever the two sketch a contradictory view of the world. The example of the reduction of TD to SM perfectly fits into this picture. In fact, TD (here, T_P) and SM (here, T_F) should be consistent with each other, and evidence confirming TD should support SM, and vice versa. Obviously, both criteria must be met after the reduction occurs. DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011) use Bayesian networks to represent the relation between reduction and confirmation. In fact, Bayesian networks exploit (un)conditional independences and dependencies in order to represent large instances in little space by graphical structures and to perform probabilistic inferences in little time. The type of statistical calculation involved in Bayesian networks is called Bayesian inference, that is, an inference in which Bayes’ theorem is one of the main rules used to update the probability of a hypothesis as more evidence and information become available. Bayesian networks are a type of probabilistic graphical model that uses Bayesian inference for probability computations. Confirming a hypothesis H with a piece of evidence E in Bayesian terms means having a conditional probability P(H | E) larger than the prior probability P(H). In other words, a hypothesis is confirmed by E if:

P(H | E) > P(H). (5)
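As a minimal illustration of this criterion, with made-up probabilities, one can compute the posterior by Bayes' theorem and check the inequality directly:

```python
# Toy Bayesian confirmation check (all probabilities assumed for illustration):
# E confirms H whenever P(H | E) > P(H).

def posterior(prior, likelihood, false_alarm):
    """P(H | E) via Bayes' theorem, expanding P(E) by total probability."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

prior = 0.3          # P(H), assumed
likelihood = 0.8     # P(E | H), assumed
false_alarm = 0.2    # P(E | ~H), assumed

post = posterior(prior, likelihood, false_alarm)
print(post > prior)  # True: E confirms H because P(E|H) > P(E|~H)
```

Whenever the likelihood exceeds the false-alarm rate, the posterior rises above the prior; this is the pattern exploited below for the evidence nodes of the networks.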
More precisely, a Bayesian network is a directed acyclic graph (DAG) which satisfies the Markov condition, whose nodes represent discrete propositional variables, and whose edges capture their conditional independences and dependencies. To frame the GNS in Bayesian terms, DFH introduce a few simplifications which I will use throughout the article.
To simplify the calculations, DFH assume that the sets of empirical propositions of the two theories have only one element each, namely T_F and T_P respectively. Their corresponding propositional variables will be T_F and T_P respectively.
The propositional variables represented by the nodes of a Bayesian network can take two values, i.e. T_F and ¬T_F. While the latter means that the proposition T_F is false, the former asserts that it is the case that T_F is true.
The probability of every node can lie in the open interval (0, 1). I set 0 &lt; P(j) &lt; 1 for all parameters j, unless a parameter is a logical consequence of another variable: in this case, its conditional probability on such a variable is 1.
Furthermore, three different pieces of evidence supporting the theories in the reduction relation are gained from experimental tests. They are denoted by the propositional variables E, E_F, and E_P by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 324). The same simplifications introduced for T_F and T_P are applied to E, E_F, and E_P. These three respectively support both theories, only the fundamental theory, and only the reduced theory.
E_P: evidence supporting only TD, e.g. the Joule–Thomson process.
E_F: evidence supporting only SM, e.g. the dependence of a metal’s electrical conductivity on temperature.
E: evidence supporting TD and SM simultaneously, e.g. the second law of TD.
According to DFH, the situation before the reduction would then look like the network in Figure 1. Let P_1 be the probability distribution over the variables in such a network. The relevant probabilities specifying the network are:
Before the reduction occurs, T_F and T_P are probabilistically independent because they do not share the same vocabulary and they are not supported by the same evidence. In fact, E_F is independent of T_P given T_F and, vice versa, E_P is independent of T_F given T_P. Formally:

P_1(E_F | T_F ∧ T_P) = P_1(E_F | T_F),  P_1(E_P | T_P ∧ T_F) = P_1(E_P | T_P). (8)
The independences in (8) hold because, in the aforementioned Bayesian network, the paths from E_F to T_P and from E_P to T_F are respectively blocked at T_F and at T_P. So, E_F and T_P are d-separated, and so are E_P and T_F. Therefore, the joint prior probability of the root nodes T_F and T_P factorises into the product of their priors:

P_1(T_F ∧ T_P) = P_1(T_F) · P_1(T_P). (9)
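The pre-reduction network can be sketched by brute-force enumeration. The parameters below are invented for illustration; the check confirms that the two root theories come out unconditionally independent in such a network:

```python
# A toy rendering of the pre-reduction network (all parameters assumed):
# roots T_F and T_P; E_F depends only on T_F, E_P only on T_P, E on both.
from itertools import product

P_TF, P_TP = 0.6, 0.5                     # priors of the root nodes (assumed)
P_EF = {True: 0.8, False: 0.3}            # P(E_F | T_F)
P_EP = {True: 0.7, False: 0.2}            # P(E_P | T_P)
P_E  = {(True, True): 0.9, (True, False): 0.5,
        (False, True): 0.5, (False, False): 0.1}   # P(E | T_F, T_P)

def joint(tf, tp, ef, ep, e):
    """Joint probability of one full assignment, factorised along the DAG."""
    prob = (P_TF if tf else 1 - P_TF) * (P_TP if tp else 1 - P_TP)
    prob *= P_EF[tf] if ef else 1 - P_EF[tf]
    prob *= P_EP[tp] if ep else 1 - P_EP[tp]
    prob *= P_E[(tf, tp)] if e else 1 - P_E[(tf, tp)]
    return prob

def marginal(**fixed):
    """Marginal probability of the assignments matching `fixed`."""
    names = ["tf", "tp", "ef", "ep", "e"]
    total = 0.0
    for vals in product([True, False], repeat=5):
        assign = dict(zip(names, vals))
        if all(assign[k] == v for k, v in fixed.items()):
            total += joint(**assign)
    return total

# T_F and T_P are unconditionally independent in this network:
gap = marginal(tf=True, tp=True) - marginal(tf=True) * marginal(tp=True)
assert abs(gap) < 1e-12
```

The enumeration is exponential in the number of variables, which is harmless here; it is only meant to make the factorisation of the priors concrete.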
Already before the reduction, one notices that in the Bayesian network there is a connection between T_P and T_F, namely the evidence E. Such a link has led scientists to investigate the intimate relation between those two theories. DFH then present the network for the situation after the reduction (see Figure 2). To reduce one theory to the other, DFH complete three steps: derive T_F* from T_F together with some auxiliary assumptions; introduce bridge laws which, together with T_F*, yield T_P*; show that T_P* is strongly analogous to T_P. They make important remarks which help define the values of the conditional probabilities of the three nodes which follow from the only remaining root node T_F. The derivation of T_F* from T_F and the interpretation of the strong analogy between T_P* and T_P depend on the judgement of the scientists and on the specific context in which the reduction occurs (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 328). Regarding bridge laws, which are not factual claims in a rigorous sense, T_P* is a logical consequence of T_F*, according to DFH. Let then P_2 be the probability distribution over the propositions one has after reducing T_P to T_F. The same simplifications introduced for T_F and T_P are applied to T_F* and T_P*. The relevant probabilities specifying the second network in Figure 2 are:
Given this network, DFH show that, after the reduction, evidence confirming one theory confirms the other and vice versa. In fact:
E_F confirms T_P iff i) P_2(E_F | T_F) &gt; P_2(E_F | ¬T_F) and ii) P_2(T_P | T_F) &gt; P_2(T_P | ¬T_F).
E_P confirms T_F iff i) P_2(E_P | T_P) &gt; P_2(E_P | ¬T_P) and ii) P_2(T_P | T_F) &gt; P_2(T_P | ¬T_F).
The two theorems maintain that, in order to have a confirmation flow from E_F to T_P and from E_P to T_F: i) E_F should confirm T_F and E_P should confirm T_P; ii) T_F should confirm T_P* and T_P* should confirm T_P. The two conditions are satisfied because:
one of the original assumptions of the network is that each of the two pieces of evidence supports its own respective theory;
T_F likely confirms T_P* and T_P* likely confirms T_P, because T_P* was derived from T_F (via T_F* and the bridge laws), and T_P* is strongly analogous to T_P. Again, such a construction of the GNS model is justified by the example in Section 2.1.
Once one constructs a Bayesian network like Figure 2 and assumes the presence of a confirmatory flow from T_F to T_P via T_F* and T_P*, she can prove that, after reducing T_P to T_F, E_F confirms T_P, and E_P confirms T_F.
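The confirmation flow can likewise be checked on a toy version of the post-reduction chain T_F → T_F* → T_P* → T_P. The numbers below are assumptions for illustration, not DFH's values:

```python
# Toy post-reduction chain (all parameters assumed): T_F -> T_F* -> T_P* -> T_P,
# with E_F attached to T_F. We check that evidence for the reducing theory
# now confirms the reduced one: P(T_P | E_F) > P(T_P).
from itertools import product

P_TF = 0.5                         # prior of the root node (assumed)
P_TFs = {True: 0.9, False: 0.2}    # P(T_F* | T_F), assumed
P_TPs = {True: 1.0, False: 0.1}    # P(T_P* | T_F*): 1 when T_F* holds (logical consequence)
P_TP  = {True: 0.95, False: 0.3}   # P(T_P | T_P*): strong analogy, assumed
P_EF  = {True: 0.8, False: 0.2}    # P(E_F | T_F), assumed

def joint(tf, tfs, tps, tp, ef):
    prob = P_TF if tf else 1 - P_TF
    prob *= P_TFs[tf] if tfs else 1 - P_TFs[tf]
    prob *= P_TPs[tfs] if tps else 1 - P_TPs[tfs]
    prob *= P_TP[tps] if tp else 1 - P_TP[tps]
    prob *= P_EF[tf] if ef else 1 - P_EF[tf]
    return prob

def prob(pred):
    return sum(joint(*vals) for vals in product([True, False], repeat=5)
               if pred(*vals))

prior = prob(lambda tf, tfs, tps, tp, ef: tp)
posterior = (prob(lambda tf, tfs, tps, tp, ef: tp and ef)
             / prob(lambda tf, tfs, tps, tp, ef: ef))
print(posterior > prior)  # True: E_F confirms T_P after the reduction
```

Because every link in the chain is positive, conditioning on E_F raises the probability of T_F and the boost propagates down to T_P; with anti-correlated links the flow would break, which is exactly what the theorems' conditions rule out.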
2.3 The Bayesian Analysis of Tešic
The Bayesian analysis offered by DFH helps represent the reductive relation between T_P and T_F. However, Tešic (2019) points out two difficulties faced by their analysis. I omit the details of the less relevant difficulty, because it does not undercut the main project laid out by DFH. I focus, instead, on the main critique Tešic presents, which regards the values of the probabilities P_2(T_P* | T_F*) and P_2(T_F* | T_P*). As shown in (10), while P_2(T_P* | T_F*) = 1, P_2(T_F* | T_P*) = 1 as well. Tešic claims that treating the propositional variables T_F* and T_P* as interchangeable with each other (Dizadji-Bahmani, Frigg, and Hartmann 2011, pp. 329–330) without explicitly stating that bridge laws are assumed is misleading. In fact, DFH’s representation of the bridge law in their Bayesian network suffers from three problems (Tešic 2019, pp. 1108–1111):
Recall eq. (4), that is, the bridge law in the example of the Boyle–Charles law. According to the Bayesian network in Figure 2, one should then have not only P_2(T_P* | T_F*) = 1, but also T_F* ⊨ T_P*, because that probability assignment is supposed to encode a logical entailment. Instead of the incorrect entailment T_F* ⊨ T_P*, Tešic notices that such an entailment is only possible by supposing the bridge law B:

T_F* ∧ B ⊨ T_P*.
The problem arising in Figure 2 is that it does not incorporate B in the background, according to Tešic. B therefore needs to be included in the probability function P_2 for the entailment to hold.
The fact that from eq. (4) both T_F* ⊨ T_P* and T_P* ⊨ T_F* follow (Tešic 2019, p. 1118) makes the reduction symmetric. The Bayesian network in Figure 2 thus seems to imply that the Boyle–Charles law is reduced to the kinetic theory of gases, and vice versa. This clearly goes against the main idea behind every kind of scientific reduction: reduction is anti-symmetric. Furthermore, the interchangeability between T_F* and T_P* would prevent partial reductions, which are still important in science, according to DFH (Dizadji-Bahmani, Frigg, and Hartmann 2010, p. 399). In fact, scientists are not always able to connect every term of T_P to T_F and deduce every law of T_P from T_F plus bridge laws.
From eq. (4) it also follows that the marginal probabilities P_2(T_F*) and P_2(T_P*) are equal (Tešic 2019, p. 1118). It is hard to conceive of them as equal given that the former equation is deduced from the kinetic theory of gases together with some auxiliary assumptions, and that the latter equation is then deduced from the former and the bridge laws. This suggests that the values of P_2(T_F*) and P_2(T_P*) should be left open.
Because of these three problems, Tešic presents an alternative Bayesian network to Figure 2. The main idea behind his network is to explicitly include the propositional variable B representing the bridge law as a root node. Let P_3 be a probability distribution over the variables in Figure 3. The same simplification applied to T_F and T_P applies to the bridge laws. Thus, assume that their set has B as its only element, and that the two values assignable to the propositional variable B are B and ¬B. Then:
Two reasons motivate this explicit specification of the bridge laws (Tešic 2019, p. 1121): a) different scientists (often) have different credences about a particular bridge law (e.g. scientists in fact hold different degrees of belief in eq. (4)); b) the flow of confirmation depends on the value assigned to the probability of the bridge law. Thus, Tešic assigns the following values to the relevant conditional probabilities:
where 0 &lt; P_3(B) &lt; 1. Accordingly, the new probability assignments do not face the three problems noticed by Tešic. In fact, the first problem is avoided because now one has P_3(T_P* | T_F* ∧ B) = 1 and, thus, T_F* ∧ B ⊨ T_P*, instead of simply having T_F* ⊨ T_P*. The second problem is evaded by showing that the values in (13) entail the failure of the converse entailment from T_P* to T_F*: this means that the reduction represented by Tešic’s network is not symmetric. Finally, the third problem is successfully addressed because it is proved that the prior probabilities P_3(T_F*) and P_3(T_P*) can be either different or equal: in fact, this depends on the particular values one assigns to the relevant probabilities. According to this analysis, Theorems 2.1 and 2.2, which have already appeared in DFH’s network, follow.
For all the reasons mentioned in this section, I will compare the coherence pre- and post-reduction between the reducing theory and the reduced one by following the probability assignments specified in the Bayesian network in Figure 3.
3 Coherence Measures
Coherence measures are probabilistic measures of the degree of coherence of information sets. They are real-valued functions, and the value they assign to each set of propositions represents the degree of coherence of that set. Coherence does not have a fixed meaning, and quantifying coherence suffers from this vagueness behind its notion. Because of this, there is not a single coherence measure: different coherence measures try to capture the several conceptions one might have of coherence. Regarding the GNS, it might seem obvious that the reduction of T_P to T_F establishes some sort of coherence between the two theories (Sarkar 2015, p. 47) because of the way T_P is logically derived from T_F and the bridge law. In particular, from the perspective of a confirmation-laden coherence measure such as the Shogenji–Schupbach measure, the condition which may seem to confirm an improved agreement between T_P and T_F after the reduction of one to the other is the positive confirmation flow that goes from T_F to T_P via the bridge laws and the auxiliary assumptions. The measure, in fact, treats the coherence of an information set as the mutual support of the propositions in it, which is the view that coherence corresponds to the probabilistic dependence between the propositions in a set. As opposed to this view, Olsson’s measure takes coherence to be the relative overlap amongst those propositions: the higher their overlap relative to their disjunction, the higher the degree of coherence and the higher the agreement of the propositions. These two coherence measures are the main ones in the literature, and each of them corresponds to a different property, i.e. dependence and agreement respectively. Dependence and agreement cannot, however, be fulfilled at the same time.
In Nagel’s words, reduction makes sure that two theories with largely overlapping domains are mutually consistent when they describe the same event. It appears that the aforementioned properties are desirable in this context. Therefore, what one would need from these coherence measures are stable results, to the extent that they all give similar verdicts. Furthermore, while one would expect no coherence at all (or a very low degree of coherence) prior to the reduction, after the reduction T_F should cohere with T_P. In fact, obtaining the same result through different means counts as a valid way to further support Nagel’s view on coherence in scientific reductions. Here, it is important to remark that I am interested in the notion of relative coherence rather than an absolute one; what is relevant is to check that the set containing the theories prior to the reduction is less coherent than the set of theories after the reduction occurs. The three coherence measures (Bovens and Hartmann 2003; Olsson 1999; Schupbach 2011) which I will now present might yield different results in certain contexts: hopefully, in the case of the Bayesian network representing the GNS, they will not.
3.1 Schupbach’s Measure
To introduce Schupbach’s measure, consider a finite and non-empty information set S, that is, a set of ordered pairs:
S = {⟨R_1, A_1⟩, …, ⟨R_n, A_n⟩},

where R_i is a source reporting that A_i, the content of the report R_i, is true. The content of the information set is the ordered set of report contents ⟨A_1, …, A_n⟩. Let P be the probability distribution over the propositions A_1, …, A_n, where P(A_i) gives the degree of confidence of a rational agent in A_i. A coherence measure is a function C which maps every finite and non-empty set of propositions with positive probability to a single real-valued outcome C(S). In other words, the degree of coherence of an information set consists precisely in the degree of coherence of its content (Bovens and Hartmann 2003). Then, consider the set of all finite and non-empty information sets of propositions with positive probability. Shogenji (1999) proposes to define the coherence of a set as:

C_S(A_1, …, A_n) = P(A_1 ∧ … ∧ A_n) / (P(A_1) · … · P(A_n)). (14)
Shogenji conceives of the coherence of a set as the mutual support between the propositions in it. The measure is sensitive to the number n of sources in cases of logically equivalent propositions. Given that the joint probability of the propositions equals the probability of any single one of them when all sources report the same proposition, as n, the number of agreeing reports, tends to infinity, so does the degree of coherence: indeed, “the more coherent beliefs are, the more likely they are together” (Shogenji 1999, p. 338). Instead, if the propositions of a set are independent, the variables representing them will be probabilistically independent and the coherence measure will equal 1: in this case, the set is neither coherent nor incoherent. Finally, if the joint probability of the propositions is lower than the product of their probabilities, the set will be incoherent. A series of criticisms has been offered against the way this measure tries to capture the meaning of coherence (Schupbach 2011, pp. 3–5). For this reason, Schupbach suggests generalising Shogenji’s measure by calculating the degree of coherence as a weighted average of the degrees of coherence of all subsets of the information set:
Schupbach then proposes several definitions which are needed to avoid the problems arising with Shogenji’s original measure and, therefore, to measure the degree of coherence in a more precise manner. As I will show in the next section, I am only interested in information sets with cardinality two. If sets have cardinality two, the subset-sensitive generalisation of Shogenji’s measure offered by Schupbach can simply be represented by the log-normalised version of Shogenji’s measure (Schupbach 2011, p. 9), namely eq. (16). In fact, sets with cardinality two avoid all the problems faced by sets with cardinality strictly larger than two. So, if S = {⟨R_1, A_1⟩, ⟨R_2, A_2⟩} and its content is ⟨A_1, A_2⟩, then:

C_S′(A_1, A_2) = log [ P(A_1 ∧ A_2) / (P(A_1) · P(A_2)) ]. (16)
If one measures the degree of coherence of a set containing probabilistically independent propositions as members, 0 will be the outcome, since the joint probability of probabilistically independent variables is the product of their probabilities, and the logarithm of the ratio of two equal values is 0. The outcome of C_S′ will tend to negative infinity if the set is incoherent. Vice versa, it will tend towards positive infinity if the set is coherent.
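A small computation, with invented probabilities, illustrates the three regimes of the log-normalised measure:

```python
import math

# Log-normalised Shogenji measure for a two-member set (toy probabilities):
# C'(A1, A2) = log[ P(A1 & A2) / (P(A1) * P(A2)) ]

def shogenji_log(p_a1, p_a2, p_joint):
    return math.log(p_joint / (p_a1 * p_a2))

# Positively dependent pair: coherent, measure > 0 (0.3 > 0.4 * 0.5)
print(shogenji_log(0.4, 0.5, 0.3) > 0)
# Independent pair: measure is (numerically) 0 (0.2 = 0.4 * 0.5)
print(abs(shogenji_log(0.4, 0.5, 0.2)) < 1e-12)
# Negatively dependent pair: incoherent, measure < 0 (0.1 < 0.4 * 0.5)
print(shogenji_log(0.4, 0.5, 0.1) < 0)
```

The sign of the measure thus directly tracks whether the joint probability exceeds, matches, or falls below the product of the marginals.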
3.2 Olsson’s Measure
Olsson (1999) understands coherence as total agreement between the propositions in an information set. In the case of a set with content ⟨A_1, …, A_n⟩, Olsson then proposes the following coherence measure:

C_O(A_1, …, A_n) = P(A_1 ∧ … ∧ A_n) / P(A_1 ∨ … ∨ A_n), (17)
which, for a set with content ⟨A_1, A_2⟩, would look like the following:

C_O(A_1, A_2) = P(A_1 ∧ A_2) / P(A_1 ∨ A_2). (18)
This measure ranges over the closed interval [0, 1]. C_O indeed assigns 0 to cases of minimal agreement, that is, when the propositions involved are logically inconsistent. Regarding eq. (18), A_1 and A_2 would therefore not overlap in cases of minimal agreement. In cases of maximal agreement between the propositions reported, the measure instead gives 1 as outcome: this means that the propositions in the set are logically equivalent. This measure also presents counter-intuitive results (Dietrich and Moretti 2005, p. 407), such as the possibility of reporting the degree of coherence of two positively dependent propositions as lower than that of two negatively dependent propositions. This remark about C_O is not relevant for the aim of this article: in the next section, I will compare the degree of coherence before and after the reduction by exploiting (and assuming), respectively, the independence and the positive dependence between the two theories.
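Olsson's overlap measure is equally easy to sketch; again, the probabilities below are invented for illustration:

```python
# Olsson's overlap measure for two propositions (toy probabilities):
# C_O(A1, A2) = P(A1 & A2) / P(A1 or A2),
# with P(A1 or A2) = P(A1) + P(A2) - P(A1 & A2).

def olsson(p_a1, p_a2, p_joint):
    return p_joint / (p_a1 + p_a2 - p_joint)

print(olsson(0.4, 0.4, 0.4))  # logically equivalent: maximal agreement, 1.0
print(olsson(0.4, 0.4, 0.0))  # inconsistent: minimal agreement, 0.0
print(olsson(0.4, 0.5, 0.2))  # partial overlap: strictly between 0 and 1
```

The measure only looks at how much probability mass the propositions share relative to the mass they jointly cover, which is why it tracks agreement rather than dependence.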
3.3 Bovens and Hartmann’s Measure
Bovens and Hartmann (2003) change the course of previous coherence measures. Instead of trying to make precise the intuitions one can have about the notion of coherence (i.e. mutual support, total agreement, relative overlap), they focus on the role that coherence, as a property of information sets, plays: boosting our confidence that the content of an information set is true, ceteris paribus, once the information is received from independent and partially reliable sources. The model Bovens and Hartmann construct aims to measure the degree of confidence in the joint truth of an information set. Such a degree is determined by the combination of conditions such as the expectedness of the results, the reliability of the tests, and the coherence of the information. Each of these conditions has a specific measure. Respectively, one has i) an expectance measure, which captures the degree of prior expectance of the joint truth of an information set, ii) a reliability measure, which computes the degree of reliability of the sources, and iii) the coherence measure, which is needed to assess the degree of coherence of the set.
Consider again n partially reliable sources i, each of which reports proposition A_i, for i = 1, …, n, so that the information set is S = {⟨R_1, A_1⟩, …, ⟨R_n, A_n⟩}. The propositional variable A_i is defined for the proposition A_i and can take on two values: A_i and ¬A_i. Similarly, the propositional variable R_i can also take on two values: R_i and ¬R_i. If the report states that A_i is the case after the proper source has been consulted, then R_i obtains; otherwise, ¬R_i does. Then, let P be a probability distribution over the variables A_1, R_1, …, A_n, R_n which satisfies the constraints of having independent and partially reliable sources. One needs sources to be:
independent, or else sources would present information either by looking at other reports rather than bringing additional information, or by conveying what they think a coherent report is. For coherence to play a confidence-boosting role, the sources should rather gather information through, and only through, their own observations, which they report without inferring in a biased way what they think a coherent report would look like;
partially reliable, or else sources would either be truth-tellers or randomisers. The latter would report information in an entirely random manner, and assessing the degree of coherence of information reported without any degree of reliability is useless. The former would report only true information, making the property of coherence redundant: it would no longer matter whether the report is coherent, as its content would already be true, given that it comes from a fully reliable source.
These two points can be formally translated.
Having independent sources means that R_i should only report that A_i is the case given that the source has (likely) observed A_i, without her observations being affected by additional facts. Probabilistically speaking, A_i screens off R_i from all other variables A_j and R_j. Thus, there is a conditional independence between R_i and the remaining variables, given A_i, for j ≠ i:

P(R_i | A_i ∧ A_j ∧ R_j) = P(R_i | A_i).
Partial reliability can be specified with two parameters, namely the true positive rate p = P(R_i | A_i) and the false positive rate q = P(R_i | ¬A_i). Bovens and Hartmann then assume that all sources are equally reliable, that is, all sources have the same p and the same q. The assumption is introduced because knowing how much one trusts a source is not relevant to assessing the degree of coherence of an information set. For the reasons stated above, all sources in this model are deemed epistemically imperfect, which means that they are more reliable than randomisers, but less reliable than truth-tellers. Thus, the following constraint is imposed on P:

0 &lt; q &lt; p &lt; 1.
Bovens and Hartmann then define the degree of confidence in the information set as equal to the posterior joint probability of the propositions in the set after all reports have been collected: P*(A_1, …, A_n) = P(A_1, …, A_n | R_1, …, R_n).
The numerator of this posterior can be written as p^n a_0, where p^n is the true positive rate to the nth power and a_0 is the prior probability of the conjunction in which the variables A_1, …, A_n take n positive values and 0 negative values. The denominator takes the form of a sum: Bovens and Hartmann gather all terms in which the variables take, first, n positive values and 0 negative values, then n − 1 positive values and 1 negative value, and so on, until the term in which those variables take 0 positive values and n negative values is reached. This means that, for instance, a_1 is the sum of the prior probabilities of the assignments in which exactly one proposition is false. Finally, if both numerator and denominator are divided by p^n, the posterior probability becomes: P*(A_1, …, A_n) = a_0/∑_{i=0}^{n} a_i x^i, where x = q/p, that is, the likelihood ratio. a_i is therefore the sum of the joint probabilities of the assignments with n − i positive values and i negative values of the variables A_1, …, A_n.
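This derivation can be checked numerically. Assuming the standard Bovens–Hartmann form P*(A_1, …, A_n) = a_0/∑ a_i x^i, the sketch below (the helper names are mine, and the example distribution is hypothetical) computes the weight vector ⟨a_0, …, a_n⟩ from an arbitrary joint distribution and then the posterior:

```python
from itertools import product

def weight_vector(joint, n):
    """a_i: sum of the joint probabilities of the assignments of the n
    propositional variables with exactly i negative values."""
    a = [0.0] * (n + 1)
    for assignment in product([True, False], repeat=n):
        negatives = sum(1 for v in assignment if not v)
        a[negatives] += joint(assignment)
    return a

def posterior(joint, n, x):
    """Posterior joint probability a_0 / sum_i a_i x^i, with x = q/p."""
    a = weight_vector(joint, n)
    return a[0] / sum(ai * x**i for i, ai in enumerate(a))

def indep_joint(assignment, prior=0.6):
    """Hypothetical joint distribution: independent propositions, equal priors."""
    prob = 1.0
    for v in assignment:
        prob *= prior if v else 1 - prior
    return prob
```

For two independent propositions with prior 0.6, the weight vector is ⟨0.36, 0.48, 0.16⟩, and with x = 0.5 the posterior comes out as 0.36/0.64 = 0.5625.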
According to the information collected so far, the three measures of Bovens and Hartmann can finally be presented. The expectance measure a_0 is defined by the prior joint probability of the propositions in the information set, i.e. the probability before any report was received: a_0 = P(A_1, …, A_n).
The more a_0 increases, the more the degree of coherence of the set increases. The degree of coherence of the information set, i.e. eq. (22), is also a monotonically increasing function of r, that is, the reliability measure: r = 1 − x,
where x = q/p is the likelihood ratio. This measure ranges over the open interval (0, 1), because the sources are neither fully reliable (x = 0) nor entirely unreliable (x = 1). The last relevant measure is the coherence measure. In order to evaluate the coherence of an information set, they measure the proportion of the confidence boost b, defined by the ratio of the posterior to the prior joint probability, relative to the confidence boost b_max: the confidence boost which would have been received if the same information had been received in the form of maximally coherent information. A maximally coherent information set would contain only logically equivalent propositions, and it has a specific distribution of weights: ⟨a_0, 0, …, 0, 1 − a_0⟩.
After calculating the posterior joint probability of a maximally coherent information set and its confidence boost, Bovens and Hartmann compute the coherence measure of an information set S. This measure is functionally dependent on the expectance and the reliability measures (cf. Bovens and Hartmann 2003, p. 612): c_r(S) = (a_0 + (1 − a_0)x^n)/∑_{i=0}^{n} a_i x^i.
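Assuming the standard form of the measure from Bovens and Hartmann (2003), c_r(S) = (a_0 + (1 − a_0)x^n)/∑ a_i x^i, a minimal sketch (with hypothetical weight vectors of my own choosing) is:

```python
def bh_coherence(a, x):
    """Bovens-Hartmann coherence: the actual confidence boost divided by the
    boost a maximally coherent set with weights <a_0, 0, ..., 0, 1 - a_0>
    would have received, for a likelihood ratio x = q/p in (0, 1)."""
    n = len(a) - 1
    max_coherent_denominator = a[0] + (1 - a[0]) * x**n
    actual_denominator = sum(ai * x**i for i, ai in enumerate(a))
    return max_coherent_denominator / actual_denominator
```

As a sanity check, a maximally coherent weight vector such as ⟨0.3, 0, 0.7⟩ scores exactly 1 for any x, while a set of two independent propositions scores below 1.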
According to Meijs (2005), the maximal requirement is what makes Bovens and Hartmann’s measure produce counter-intuitive results, because it may, in some cases, rank the degree of coherence of a set containing independent propositions higher than the degree of coherence of a set whose members are positively dependent. This feature can be threatening for the aim of the article: comparing the degree of coherence pre- and post-reduction between the two theories. Therefore, it will be necessary to check whether or not the results which come from the comparisons of the degree of coherence pre- and post-reduction, and which are obtained through different coherence measures, are similar. Should the results contrast with each other, further remarks regarding the choice of the proper coherence measure will have to be made. It would, in fact, become a contextual question of which coherence measure one should prioritise with respect to the epistemic conception of coherence (e.g. mutual support, boosting confidence) it represents.
4 Comparing the Degree of Coherence
In this section, I will compare the degree of coherence of the two theories before and after the reduction. To calculate the degree of coherence with the aforementioned measures, consider the information set containing the two theories before the reduction and the information set containing the two theories after the reduction. Then, let ⪰ be a quasi-ordering relation over these two sets, which denotes the binary relation at least as coherent as, such that if one set stands in this relation to the other, it is at least as coherent as the other. To compare the two sets, it is important to use the assumption Bovens and Hartmann make for the sources: they need to be partially and equally reliable, and independent. So, I am implying that, for example, a scientist has the same credence in each of the reported items of evidence. Having such an assumption regarding the evidence is important, as neither Schupbach nor Olsson consider the role coherence plays for the evidence supporting theories.
4.1 Schupbach’s Measure
Recall Schupbach’s coherence measure. Before the reduction, T F and T P are probabilistically independent. Hence, according to Schupbach, the value of coherence will be 0:
The situation changes after T P is reduced to T F , as the two become probabilistically dependent:
Therefore, according to Schupbach, the two theories will cohere with each other after the reduction if and only if what is inside the logarithm is strictly greater than 1, that is, if and only if the joint probability of T F and T P is higher than the product of their prior probabilities. Thus, one obtains the following theorem (see Appendix A):
iff or . iff and .
The first part of the theorem means that if T P and the bridge law are independent, or T F and the bridge law are independent, then T P and T F remain independent after the reduction and their coherence does not improve. The second part of the theorem, instead, means that coherence between T P and T F is gained after the reduction if and only if: i) the probability of the conjunction of T P and the bridge law is higher than the product of their prior probabilities; and ii) the probability of the conjunction of T F and the bridge law is higher than the product of their prior probabilities.
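Schupbach’s measure, as employed in this article, takes the logarithm of the ratio between the joint probability and the product of the priors. A toy illustration (the probability values below are hypothetical, chosen only for demonstration):

```python
import math

def log_ratio_coherence(p_joint, p_a, p_b):
    """log[P(A & B) / (P(A) P(B))]: zero under probabilistic independence,
    positive under positive probabilistic dependence."""
    return math.log(p_joint / (p_a * p_b))

# Pre-reduction: independent theories with priors 0.5 (hypothetical values).
pre = log_ratio_coherence(0.5 * 0.5, 0.5, 0.5)
# Post-reduction: positively dependent, joint probability raised to 0.35.
post = log_ratio_coherence(0.35, 0.5, 0.5)
```

As the theorem predicts, the pre-reduction value is 0 and the post-reduction value is positive, since the joint probability exceeds the product of the priors.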
4.2 Olsson’s Measure
The same theorem appears in the comparison made with Olsson’s measure. Before T P is reduced to T F , one has to maintain the independence between their variables. Thus, the coherence measure formulated by Olsson would look like the following:
Once the reduction happens, the measure for the post-reduction set becomes:
Here, one would need to fix the prior probabilities of the theories to meaningfully compare the two measures. Then, the sufficient and necessary condition for the ordering to hold is that the denominator of the post-reduction measure is less than or equal to the denominator of the pre-reduction measure, because the numerator of the post-reduction measure is greater than the numerator of the pre-reduction one. Once the values are substituted into the coherence measure, the following theorem is obtained:
iff or . iff and .
This is the same condition one finds with Schupbach’s measure. The higher those differences, the higher the degree of coherence.
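Olsson’s measure can be illustrated in the same way: it is the probability that all propositions are true divided by the probability that at least one is. With the priors fixed, raising the joint probability raises the measure (the numbers below are again hypothetical):

```python
def olsson_coherence(p_joint, p_a, p_b):
    """Olsson's agreement measure: P(A & B) / P(A or B), computed via
    inclusion-exclusion for the disjunction."""
    return p_joint / (p_a + p_b - p_joint)

# Fix both priors at 0.5 (hypothetical), as the comparison requires.
pre = olsson_coherence(0.25, 0.5, 0.5)   # independent: joint = 0.5 * 0.5
post = olsson_coherence(0.35, 0.5, 0.5)  # positively dependent after reduction
```

The pre-reduction value is 0.25/0.75 = 1/3, and the post-reduction value 0.35/0.65 is strictly larger, mirroring the verdict of Schupbach’s measure.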
4.3 Bovens and Hartmann’s Measure
To evaluate the degree of coherence pre- and post-reduction in light of the analysis of Bovens and Hartmann, the reliability measure they formulate should not play any role. In fact, their coherence measure induces a quasi-ordering relation over the set of information sets, and this binary relation is formally independent of the reliability measure. Reconsider the two information sets, pre- and post-reduction, which have the same size, i.e. 2. Recall that while P 1 is the joint probability distribution for the pre-reduction propositions T F and T P , P 3 is the joint probability distribution over the post-reduction propositions T F and T P . By calculating the weight vectors for P 1 and for P 3, the following difference function can be constructed:
The relation which induces a quasi-ordering over the set of the two information sets is then defined as:
Finally, to determine whether the post-reduction set is at least as coherent as the pre-reduction one, one needs to assess the conditions under which the sign of the difference function is actually positive for all values of x. For this reason, Bovens and Hartmann calculate the following condition:
This condition is necessary and sufficient for the quasi-ordering to hold. First, consider the first part of the condition, which resembles what has been shown above. This is our usual condition, which reports:
Then, the second part of the condition means that:
Once the relevant prior probability is fixed and made equal across the two distributions (for details, see Appendix C), a theorem similar to the ones seen above follows:
iff or . iff and .
These are the same conditions one has with Olsson’s and Schupbach’s measures. The second part of the condition mentioned by Bovens and Hartmann is ruled out because it has been shown earlier that the probability of the conjunction of the two propositions T F and T P is higher after the reduction than before. Thus, the second part of Bovens and Hartmann’s condition does not hold.
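This ordering can be checked numerically (the probability values below are hypothetical choices of mine): fix both priors at 0.5, take an independent pre-reduction distribution and a positively dependent post-reduction one, and verify that the difference between the two Bovens–Hartmann coherence values stays positive over the whole range of the likelihood ratio x:

```python
def bh_coherence(a, x):
    """Bovens-Hartmann measure from the weight vector a = <a_0, ..., a_n>."""
    n = len(a) - 1
    return (a[0] + (1 - a[0]) * x**n) / sum(ai * x**i for i, ai in enumerate(a))

# n = 2, both priors fixed at 0.5 (hypothetical):
a_pre = [0.25, 0.50, 0.25]   # independent: joint probability 0.25
a_post = [0.35, 0.30, 0.35]  # positively dependent: joint raised to 0.35

# Difference function sampled across the open interval (0, 1).
deltas = [bh_coherence(a_post, x / 100) - bh_coherence(a_pre, x / 100)
          for x in range(1, 100)]
```

With these particular weights, cross-multiplying the two ratios shows that the sign of the difference is governed by 0.1x(1 − x)(1 + x²), which is positive for every x in (0, 1), so the post-reduction set is strictly more coherent throughout.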
5 Coherence and Confirmation
The conditions upon which the coherence measures report that the post-reduction set is more coherent than the pre-reduction one are the same as those reported by the confirmation measure used by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011), which says that evidence confirming T P also confirms T F and vice versa. This raises an important issue about the nature of coherence measures: do the coherence measures employed here simply track the positive flow of information in the model of intertheoretic reduction designed by DFH and Tešić? To answer this question, I now consider a couple of numerical examples for Schupbach’s and Olsson’s coherence measures. These examples are aimed not only at understanding the relationship between confirmation and coherence, but also the different shapes of the functions representing the two coherence measures. In these examples, I assume that there is a positive flow of confirmation from T F to T P .
The first two examples show the different degrees of coherence of the post-reduction set according to Schupbach’s and Olsson’s coherence measures as a function of the amount of confirmation between T F and the bridge laws. The examples set the relevant likelihoods as variables and the remaining probabilities, amongst which a and b, as parameters. The value assigned to a, b, and the other parameters is 0.5, while the remaining parameter is assigned 0.1. Interestingly, both graphs reveal that the degree of coherence of the two theories after the reduction and the amount of confirmation between T F and the bridge laws are positively related (Figure 4).
The other two examples in Figure 5 show, instead, the different degrees of coherence of the post-reduction set according to Schupbach’s and Olsson’s coherence measures as a function of the amount of confirmation between the bridge laws and T P . Here, I set the relevant likelihoods as variables and the remaining probabilities, amongst which a and b, as parameters. As in the previous example, the value assigned to a, b, and the other parameters is 0.5, while the remaining parameter is assigned 0.1. As opposed to the two graphs shown beforehand, the graphs display a negative correlation between the degree of coherence of the two theories after the reduction and the amount of confirmation between the bridge laws and T P : the more T F and T P cohere, the less the bridge laws confirm T P .
Finally, the graphs in Figure 6 reveal the positive correlation between the prior probability of the bridge laws and the degree of coherence of the post-reduction set. As expected, a higher assignment of b contributes to a higher degree of coherence. Bridge laws, however, do not provide a major contribution to the coherence of the set containing the two theories after the reduction.
In this article, I have shown that, according to three coherence measures and under the assumptions elaborated by Bovens and Hartmann, the degree of coherence between two self-consistent and well-confirmed theories with largely overlapping domains of application which are involved in a reduction relation à la GNS is higher than the degree of coherence of the same theories which, before one is reduced to the other, sketch a contradictory view of the world. To do this, I first presented the classic putative example where the GNS can be applied to describe the main relations occurring in a scientific reduction: the reduction of TD to SM. Then, I looked at two attempts at modelling the GNS in probabilistic terms. Specifically, DFH and Tešić offer two Bayesian analyses, which mostly disagree on the probabilistic account of bridge laws, a crucial relation in the GNS. In fact, Tešić undermines the view shared by DFH that the relevant propositional variables are interchangeable. This view faces three problems: i) it proposes an incorrect entailment between the laws derived from the fundamental theory through the bridge laws and the fundamental theory itself; ii) it makes the reduction symmetric; iii) it omits the role of auxiliary assumptions and boundary conditions in deriving laws strongly analogous to the ones involved in the reduction. The necessity of explicitly assuming the bridge law in order to overcome these three challenges then leads Tešić to formulate a new probability distribution P 3 and therefore new probability assignments. This new purported probabilistic representation of the GNS has been used to assess the degree of coherence between the two theories in question, as the three coherence measures proposed are expressed in probabilistic terms. I briefly mentioned which epistemic notion of coherence these measures try to grasp. Schupbach’s measure understands coherence as mutual support amongst the propositions included in an information set.
Olsson’s measure conceives coherence as an agreement between the propositions involved. Bovens and Hartmann, instead, focus on the role that coherence, as a property of an information set, should play whenever one wants to assess whether or not the set is true. Coherence, according to this framework, boosts one’s confidence that the set is true, ceteris paribus, once the information is received from independent and partially reliable sources. Due to the limited breadth of the article, I did not explain in greater detail the counter-intuitive results these measures may offer in certain scenarios. Rather, I tried to heuristically show that, in the case of the GNS constructed in light of Tešić’s suggestions, they do not report different outcomes. In the fourth section, I showed that they actually outline stable results. The three theorems point out that a set containing T F and T P has a higher degree of coherence after T P is reduced to T F than before the reduction if and only if the two inequalities highlighted in the theorems hold. These two conditions are likely met by the theories involved in the GNS. Thus, under the assumptions of the Bayesian analysis provided by Tešić and the one provided by Bovens and Hartmann, the coherence measures seem to report that the GNS makes the two theories involved cohere with each other in light of their positive dependency. In the last section, I finally introduced six numerical examples aimed at better understanding the relation between coherence and confirmation, as well as their respective measures, with respect to the context of intertheoretic reduction as designed by DFH and Tešić. Interestingly, while the bridge laws and the confirmation flow between T F and the bridge laws are positively related to the degree of coherence of the post-reduction set, the confirmation flow between the bridge laws and T P is not.
Further projects can still be proposed at the intersection of coherence and the GNS. First of all, regarding Tešić’s probability assignment for the bridge law and, in general, regarding every probability assignment in the Bayesian network (see Figure 3), it would be interesting to use tools borrowed from imprecise probability (IP) to better represent the credences scientists might have towards prior probabilities and likelihoods. In fact, scientists usually disagree about the value assignments of prior probabilities and likelihoods. IP is a generalisation of probability theory which is applicable to cases where it is hard to identify a unique probability distribution, because evidence might be scarce, vague, or conflicting. Thereby, the goal of IP is to represent the available knowledge more accurately, instead of focusing on a single precise outcome. Would IP be able to represent the situation before and after the reduction and assess whether or not the two post-reduction theories would cohere with each other? Second, further coherence and confirmation measures should be used to check the results shown above, and further discussion should focus on their relation. The three coherence measures might be extensionally equivalent with respect to the GNS, because, even if their frameworks and their main epistemic notions of coherence differed, they all confirmed the same hypothesis and provided stable results. What would happen with other coherence measures? Would they report an increase or a decrease in coherence if considered as a function of the amount of confirmation? Third, the coherence relation between evidence and theories requires a proper investigation. One might want to start this investigation from the work of Meijs (2005) and then apply it to the case of the GNS. Will the evidence of a theory be coherent with the other theory, and vice versa?
Fourth and finally, the third question opens up the debate between foundationalism and coherentism, and the question of the role coherence should play in assessing a scientific theory. Throughout, I have worked under the assumptions made by Bovens and Hartmann. Some philosophers have actually been sceptical of their suggestions. This means that not only new measures, but also other assumptions regarding the sources reporting crucial information (i.e. the evidence) might be employed to construct novel coherence measures. Would considerations on coherence still play a role in deciding whether to accept a scientific theory if one dropped the conception that comparing the degrees of coherence of two (or more) information sets should assume that the sources reporting the propositions contained in the sets are equally and partially reliable, and independent? These four questions highlight the fact that philosophers of science and formal epistemologists should work closely with natural scientists.
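The first of these proposals can be given a toy illustration. The sketch below (entirely my own, with hypothetical numbers; it is not part of the analyses discussed in this article) summarises imprecise credences as a small credal set of distributions and reports lower and upper bounds on Olsson’s measure instead of a single value:

```python
def olsson(p_joint, p_a, p_b):
    """Olsson's agreement measure P(A & B) / P(A or B)."""
    return p_joint / (p_a + p_b - p_joint)

def coherence_interval(credal_set):
    """Lower and upper Olsson coherence over a credal set of
    (joint, prior_a, prior_b) triples, in the spirit of IP."""
    values = [olsson(j, pa, pb) for j, pa, pb in credal_set]
    return min(values), max(values)

# Hypothetical post-reduction credal set: experts disagree on the exact numbers.
credal_set = [(0.30, 0.5, 0.5), (0.35, 0.5, 0.5), (0.32, 0.55, 0.5)]
lower, upper = coherence_interval(credal_set)
```

If even the lower bound exceeds the value the measure takes under independence (1/3 for priors of 0.5), the coherence verdict is robust across the whole credal set, which is the kind of robustness an IP treatment of the GNS could aim at.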
In what follows, I shall refer to Neapolitan (2003) in order to compute the probabilities of the values of random variables in Bayesian networks.
A Theorem 4.1
B Theorem 4.2
C Theorem 4.3
Recall Bovens and Hartmann’s condition. In order to assess it, one needs the sum of the joint probabilities of one theory and the negation of the other for our two information sets before and after the reduction. First, let me compute the pre-reduction parameter:
which holds in virtue of the probabilistic independence of T F and T P . Second, I compute the post-reduction parameter:
Then, I substitute the following values in this equation.
Meijs, W. 2005. “Probabilistic Measures of Coherence.” PhD thesis. Erasmus University, Rotterdam.
Nagel, E. 1974. Teleology Revisited. New York: Columbia Press.
Neapolitan, R. 2003. Learning Bayesian Networks. Upper Saddle River: Prentice-Hall.
Olsson, E. 2017. “Coherentist Theories of Epistemic Justification.” In The Stanford Encyclopedia of Philosophy, Spring 2017 edition, edited by E. N. Zalta. Stanford: Metaphysics Research Lab, Stanford University.
Schaffner, K. F. 1974. “Reductionism in Biology: Prospects and Problems.” In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 613–32.
Schaffner, K. F. 1977. “Reductionism, Values, and Progress in the Biomedical Sciences.” In Logic, Laws, and Life, edited by R. Colodny, 143–71. Pittsburgh: University of Pittsburgh Press.
Schaffner, K. F. 1993. Discovery and Explanation in Biology and Medicine. Chicago: Chicago University Press.
© 2021 Andrea Giuseppe Ragno, published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.