Coherence and Reduction

Synchronic intertheoretic reductions are an important field of research in science. Arguably, the best model able to represent the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction, a model further developed by Schaffner and nowadays known as the generalized version of the Nagel-Schaffner model (GNS). In their article (2010), Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. For this reason, in the first section, I look at the reduction of thermodynamics to statistical mechanics and use it as an example to describe the GNS. In the second section, I introduce three coherence measures which are then employed in the comparison. Finally, in the last two sections, I compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.


Introduction
Synchronic intertheoretic reductions, namely those reductions between pairs of theories whose domains of applicability largely overlap, are an important field of research in science. Arguably, the best model offered in the philosophical literature which is able to track the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction (Nagel 1961, 1974), a model which was further developed by Schaffner (1967, 1969, 1974, 1977, 1993) and is nowadays known as the GNS, an acronym for the generalized version of the Nagel-Schaffner model. In short, according to the GNS, reducing a theory T_A to another theory T_B is possible if and only if the laws of T_A are derivable from T_B with the help of bridge laws. Seven classes of criticisms were put forward against this model of reduction. In their articles, Dizadji-Bahmani, Frigg, and Hartmann (henceforth 'DFH') respond to these attacks by offering some needed modifications to the GNS (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011). Amongst these clarifications, DFH specify the two main desiderata one might want from the GNS: confirmation and coherence. So, DFH first and, more rigorously, Tešic (2019) later analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. More precisely, evidence confirming one theory also confirms the other theory and vice versa, given that after the reduction of one to the other they become connected and share their evidence. The purpose of this article is, instead, to compare the different degrees of coherence between the reducing and the reduced theory before and after the reduction through the Bayesian analysis first sketched by DFH and then corrected by Tešic. For this reason, I will prepare the ground by looking at the classic putative example of a reduction à la GNS: the reduction of thermodynamics (henceforth 'TD') to statistical mechanics (henceforth 'SM') (Section 2.1).
Then, I briefly present the Bayesian analysis of the relation between the reducing and the reduced theory given by DFH (Section 2.2) and corrected by Tešic (Section 2.3): this analysis is crucial for representing the GNS probabilistically. Such a Bayesian representation will indeed be used in the comparison of the two different degrees of coherence between the two theories in question: in fact, the coherence measures which I will take into account for comparing the degrees of coherence are probabilistic. In particular, three coherence measures will be discussed (Section 3); these measures might yield counter-intuitive results in certain contexts. The issue here is whether they will report different outcomes in the case of the GNS, that is, whether one coherence measure might report that the set of theories after the reduction coheres more than the set of theories prior to the reduction while another reports the opposite. The goal is to have all the coherence measures reporting the same result. In the fourth section, I will show that they actually deliver similar results under an assumption formulated by Bovens and Hartmann for their coherence measure (Bovens and Hartmann 2003): when one reasons about coherence, the several sources reporting evidence should be understood as equally and partially reliable, and taken into account independently, because what is at stake when analysing the coherence of an information set is the observations being reported. I will therefore compare the degrees of coherence between the reducing and the reduced theory pre- and post-reduction (Section 4). Under the assumptions of the Bayesian analysis provided by DFH and Tešic, and under the assumption of the coherence measure of Bovens and Hartmann, the coherence measures will provide two conditions which the two theories of the GNS likely meet in light of their conditional dependency, which is in turn due to the reduction of one theory to the other.
Finally, in the last section, I present some numerical examples aimed at analysing the relation between coherence measures and confirmation measures in the context of intertheoretic reduction as designed by DFH and Tešic (Section 5).
2 The Generalized Version of the Nagel-Schaffner Model

In this section, I will present the GNS by looking at a putative example of a GNS reduction. Then, I will outline the discussion between DFH's and Tešic's Bayesian analyses. The latter does not present knockdown arguments against the former; rather, it corrects a couple of flaws in DFH's analysis by reviewing the conceptual character of bridge laws.

An Example of a GNS Reduction
TD is a branch of physics which describes those phenomena observable in macroscopic systems (e.g. solids, liquids, gases, plasma) and their relation to energy, radiation, and the properties of matter. The behaviour of these systems can be expressed by the four laws of thermodynamics, which make use of macroscopic properties (e.g. pressure, temperature). Yet, such behaviour can also be explained in terms of their microscopic constituents by SM. In fact, based on statistical methods, probability theory and microscopic physical laws, SM explains the behaviour of macroscopic systems in terms of the dynamical laws governing their microscopic constituents (e.g. molecules, particles). The laws of TD can therefore be expressed in terms of the laws of SM. This hints at our first definition of Nagelian reduction, according to which reducing a theory T_A to another theory T_B is possible if and only if the laws of T_A are derivable from T_B with the help of bridge laws, which are empirical facts linking concepts of the reduced theory to terms of the reducing theory. This, however, is not sufficient to properly describe a successful reduction. Consider an example of reduction between TD and SM, namely the Boyle-Charles law (Dizadji-Bahmani, Frigg, and Hartmann 2010, pp. 395-396), 1 to better understand the role of bridge laws. The Boyle-Charles law states that the temperature T of a gas is directly proportional to the product of its pressure p and the volume V over which it is evenly distributed:

pV = kT,   (1)

where k is a constant. This law, together with some specific conditions (i.e. gas in thermodynamic equilibrium with the surrounding environment and relatively low pressure), forms the core of the thermal theory of the ideal gas. In SM, there is a corresponding theory for ideal gases: the kinetic theory of the ideal gas. This theory describes the motion of n particles with mass m of a gas spread over the volume of, for example, a vessel according to Newtonian mechanics.
The theory includes two assumptions: (i) the gas should be ideal to the extent that its molecules, which collide elastically, are point particles; (ii) the three components of the velocity v = (v_x, v_y, v_z) should be evenly distributed (i.e. there is no favoured direction).
Following the definition of pressure in Newtonian physics and the first assumption, the gas hitting a wall of the vessel exerts a pressure

p = n m 〈v_z²〉 / V,   (2)

where 〈v_z²〉 is the average of the square of v_z, a particle's velocity in the z-direction, perpendicular to the x-y plane of the wall. After a few other calculations 2 and following the second assumption, the left-hand term of the equation in the Boyle-Charles law can be expressed as

pV = (2n/3) 〈E_kin〉,   (3)

where n〈E_kin〉 is the average kinetic energy of the gas. T can therefore be seen as

T = (2n/3k) 〈E_kin〉.   (4)

This process shows how to derive the Boyle-Charles law from the laws of Newtonian physics. In fact, first, a particular theory, the thermal theory of the ideal gas (here, eq. (3)), was derived by combining Newtonian physics with the two assumptions of the kinetic theory of the ideal gas. Second, eq. (4), which stands as a bridge law in the GNS, connects the relevant terms, such as T and 〈E_kin〉, and yields a version of the Boyle-Charles law bound to some conditions. Finally, it has been shown that this particular version of the Boyle-Charles law, bound by particular conditions, is strongly analogous to (or even coincides with) the standard version of the Boyle-Charles law. Nowadays, scientists consider the reduction of TD to SM successful. 3 The reduction of TD to SM is considered to be a synchronic intertheoretic reduction, namely a reductive relation between two coexisting theories which deal with different levels of a largely overlapping domain. In this reduction, the concepts of one theory can be expressed in terms of the concepts of the more fundamental theory, and its laws can be derived from the laws of the latter. Accordingly, a correct reduction of TD to SM involves the derivation of the laws of TD from the laws governing the microconstituents of macroscopic systems together with probabilistic assumptions. For this reason, DFH suggest that this reductive relation resembles the GNS, which applies to synchronic intertheoretic reductions.
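The derivation above can be checked numerically. The following Python sketch uses illustrative values (particle number, temperature) that are assumptions for the example, not figures from the article, and takes the constant k in pV = kT to be n·k_B for the sample of gas:

```python
import math

# Illustrative values (assumptions, not from the article)
k_B = 1.380649e-23   # Boltzmann constant, J/K
n = 1.0e23           # number of particles
T = 300.0            # temperature, K
k = n * k_B          # the constant k in pV = kT for this sample of gas

# Bridge law, eq. (4): T = (2n/3k) <E_kin>, i.e. <E_kin> = 3kT / (2n)
E_kin = 3.0 * k * T / (2.0 * n)

# Kinetic-theory side, eq. (3): pV = (2n/3) <E_kin>
pV_kinetic = (2.0 * n / 3.0) * E_kin

# Thermodynamic side, Boyle-Charles law: pV = kT
pV_thermo = k * T

# The restricted Boyle-Charles law derived within SM matches the TD law
assert math.isclose(pV_kinetic, pV_thermo)
```

The assertion holds because substituting the bridge law into eq. (3) returns exactly eq. (1), which is the point of the derivation.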
4 In the reductive relation, TD is the reduced theory T_P and SM is the reducing one T_F. 5 According to the GNS, T_P corresponds to a set of empirical propositions T_P ≐ {T_P^(1), …, T_P^(n_P)}, and likewise T_F ≐ {T_F^(1), …, T_F^(n_F)}. Here, the empirical propositions of T_P and T_F are the various laws of the theories. 6 As shown in the case of the Boyle-Charles law, the reduction of T_P to T_F would then follow three steps. 1. Use auxiliary assumptions to help derive a restricted version of each element of T_F; 7 let T*_F ≐ {T*_F^(1), …, T*_F^(n_F)} be the set of the restricted versions. 2. Adopt bridge laws in order to connect the relevant terms which are not shared by the vocabularies of the theories involved. 8 Substituting terms in T*_F by means of the bridge laws yields the set T*_P ≐ {T*_P^(1), …, T*_P^(n_P)}. 3. Show that each element of T*_P is strongly analogous to the corresponding element of T_P.
3 Although the foundations of SM remain a controversial topic.
4 DFH are known for having responded to seven classes of criticisms put forward against the GNS by slightly modifying this version. In this article, we shall refer to their conception of the GNS simply as reduction. See Dizadji-Bahmani, Frigg, and Hartmann (2010) for arguments why the GNS corrected by DFH is our best reduction model for mapping several scientific reductive relations. See, instead, Sarkar (2015) for reasons why one would want to stick with Nagel's original model of reduction (Nagel 1961, 1974).
5 While the index P stands for phenomenological, the index F stands for fundamental.
6 This, however, does not mean that a theory is identical with the set of its laws, since a theory includes more elements than just its laws.
7 See eq. (3).
8 See eq. (4).

If these conditions are satisfied, it is believed that T P is reduced to T F with respect to the GNS.

The Bayesian Analysis of DFH
Amongst the desiderata of reductions in science, coherence and confirmation are the main ones (Dizadji-Bahmani, Frigg, and Hartmann 2010, 2011). 9 Nagel (1961, p. 341) 10 himself sensed that reduction should reconcile two self-consistent and well-confirmed theories whose domains of application (largely) overlap whenever the two sketch a contradictory view of the world. The example of the reduction of TD to SM fits perfectly into this picture. In fact, TD (here, T_P) and SM (here, T_F) should be consistent with each other, and evidence confirming TD should support SM, and vice versa. Obviously, both criteria must be met after the reduction occurs. DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011) use Bayesian networks to represent the relation between reduction and confirmation. 11 In fact, Bayesian networks, as illustrated in Neapolitan (2003), exploit (un)conditional independences and dependencies in order to represent large probability distributions compactly by means of graphical structures and to perform probabilistic inferences efficiently. The type of statistical calculation involved in Bayesian networks is called Bayesian inference, that is, an inference in which Bayes' theorem is one of the main rules used to update the probability of a hypothesis as more evidence and information become available. 12 Bayesian networks are a type of probabilistic graphical model that uses Bayesian inference for probability computations. Confirming a hypothesis H with a piece of evidence E in Bayesian terms means having a conditional probability P(H | E) larger than the prior probability P(H). In other words, a hypothesis is confirmed by E if:

P(H | E) > P(H).

More precisely, a Bayesian network is a directed acyclic graph (DAG), 13 which satisfies the Markov condition 14 and whose nodes represent discrete propositional variables, while its edges capture their conditional independences and dependencies. 15 To frame the GNS in Bayesian terms, DFH introduce a few simplifications which I will use throughout the article.

9 It is important to remark that DFH introduce the desiderata of reductions while defending their account of bridge laws. In general, according to DFH, bridge laws are factual assertions which fall into two categories: bridge laws which associate entities postulated by two theories, and bridge laws which associate properties. While entity association laws indicate identities and are internal to T_F, property association laws are external to T_F and are therefore not required to express identities. Following this, a property in T_P can correspond to several properties in T_F. This principle is known as multiple realisability (MR), a famous thesis in philosophy which claims that a single event can be implemented by different physical properties. DFH gather four arguments according to which MR undermines the GNS together with four counter-arguments, which sum up their position: amongst T_F and T_P, bridge laws should associate not only entities but also properties. Here, I briefly mention the first argument with its corresponding counter-argument. Critics of bridge laws believe that it is untenable that properties in T_P are not wholly contained in T_F. They suggest that properties in T_P must be identifiable with properties in T_F. The principle of MR does not identify a property in T_P with a property in T_F and thus weakens the claims made by the defenders of the GNS. DFH respond to this concern by restating why reductions are desired in science: consistency and confirmation. Explanations and strict identities between properties in T_P and T_F are not necessary. Yet, as will soon be shown, the requirement of strict identities seems to reappear in the Bayesian analysis offered by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011). This is one of the reasons why Tešic (2019) is sceptical of such an analysis and why I present and use his outlook in this article to assess the coherence amongst the two theories involved in the GNS.
10 Nagel included explanation amongst the desiderata of reductions. See Sarkar (2015) for details.
11 Later, I shall do the same for the relation between reduction and coherence.
-To simplify the calculations, DFH assume that T_F and T_P have only one element, namely T_F and T_P respectively. Their corresponding propositional variables will be T_F and T_P respectively. -The propositional variables represented by the nodes of a Bayesian network can take two values, i.e. T_F and ¬T_F. While the latter means that the proposition T_F is false, the former asserts that T_F is true. -The probability of every node can lie in the open interval (0, 1). I set j̄ = 1 − j for all parameters j, unless a parameter is a logical consequence of another variable: in this case, its conditional probability on such a variable is 1.
12 Formally, Bayes' rule gives the value of P(A | B): P(A | B) = P(B | A)P(A)/P(B). 13 Recall graph theory. A directed graph consists of an ordered pair (V, E), where V is the finite set of nodes and E is the set of ordered pairs of distinct elements of V, also known as edges. Let (X, Y) ∈ E: then Y is a parent node of the child node X if there is an edge from Y to X. A directed acyclic graph is a directed graph in which there is no directed path from a node back to itself. Let X_1, X_2, …, X_K, with K > 2, be nodes such that each (X_i, X_{i+1}) ∈ E: then there is a path from X_1 to X_K, and X_K is a descendant of X_1 and X_1 an ancestor of X_K. 14 The Markov condition is the assumption that every node is conditionally independent of the set of all its nondescendants, given the set of all its parents. 15 For more details on DAGs see Neapolitan (2003, chapters 1-2). In a DAG, causal relationships are represented by arrows between the propositional variables. Obviously, in the case of the Bayesian networks which will be presented, I am not considering the relationships between the variables representing the laws in the GNS as causal.
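As a toy numerical illustration of Bayes' rule and of the confirmation condition P(H | E) > P(H), consider the following sketch; all probability values are made up for the example:

```python
# Bayesian confirmation: evidence E confirms hypothesis H iff P(H|E) > P(H).
# All probability values below are illustrative assumptions.
p_H = 0.3             # prior P(H)
p_E_given_H = 0.8     # likelihood P(E | H)
p_E_given_notH = 0.2  # likelihood P(E | ¬H)

# Law of total probability: P(E) = P(E|H)P(H) + P(E|¬H)P(¬H)
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)

# Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)
p_H_given_E = p_E_given_H * p_H / p_E

# The posterior exceeds the prior, so E confirms H.
assert p_H_given_E > p_H
```

With these numbers the posterior rises from 0.3 to roughly 0.63, which is exactly the confirmation relation at work in what follows.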

Furthermore, three different pieces of evidence supporting the theories in the reduction relation are gained from experimental tests. They are defined with the propositional variables E, E F , and E P by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 324). The same simplifications introduced for T F and T P are applied to E, E F , and E P . These three respectively support both theories, only the fundamental theory, and only the reduced theory.
-Evidence supporting only TD, e.g. the Joule-Thomson process. 16 -Evidence supporting only SM, e.g. the dependence of a metal's electrical conductivity on temperature. 17 -Evidence supporting TD and SM simultaneously, e.g. the second law of TD. 18 According to DFH, the situation before the reduction would then look like the network in Figure 1. Let P_1 be the probability distribution over the variables in such a network. The relevant probabilities specifying the network are the priors P_1(T_F) and P_1(T_P), together with the likelihoods P_1(E_F | ±T_F), P_1(E_P | ±T_P), and P_1(E | ±T_F, ±T_P). Before the reduction occurs, T_F and T_P are probabilistically independent because they do not share the same vocabulary and they are not supported by the same evidence. In fact, E_F is independent of T_P given T_F and, vice versa, E_P is independent of T_F given T_P. Formally:

P_1(E_F | T_F, T_P) = P_1(E_F | T_F),  P_1(E_P | T_P, T_F) = P_1(E_P | T_P).   (8)

Figure 1: The Bayesian network representing the situation before T_P is reduced to T_F.
16 Say that we slowly push a gas from a chamber with higher pressure into a chamber with lower pressure so that no heat is exchanged and pressure remains constant in both chambers. We can calculate the amount of cooling of the gas in the second chamber by using the principles of TD. This calculation coincides with the experimental data. 17 SM can help derive equations relating the change in the conductivity of certain metals to a prior change in temperature. This involves quantum theory. The relation coincides with the experimental values as well.
18 If a wall dividing a box into two chambers is removed, the gas which was confined to one of the two chambers will spread evenly in the box. SM and TD respectively calculate that the Boltzmann entropy and the TD entropy of the gas increase.
The independences in (8) hold because, in the aforementioned Bayesian network, the paths E_F − T_F − E − T_P and E_P − T_P − E − T_F are respectively blocked at T_F and T_P by {T_F} and {T_P}. So, E_F and T_P are d-separated, 19 and so are E_P and T_F. Therefore, the joint prior probability of the root nodes T_F and T_P factorises as follows: P_1(T_F, T_P) = P_1(T_F) P_1(T_P). Already before the reduction, one notices that there is a connection in the Bayesian network between T_P and T_F, namely the evidence E. Such a link has led scientists to investigate the intimate relation between those two theories. DFH then present the network for the situation after the reduction (see Figure 2). To reduce one theory to the other, DFH complete three steps: derive T*_F from T_F together with some auxiliary assumptions; introduce bridge laws which, together with T*_F, yield T*_P; show that T*_P is strongly analogous to T_P. They make important remarks which help define the values of the conditional probabilities of the three nodes which descend from the only remaining root node T_F. The derivation of T*_F from T_F and the interpretation of the strong analogy between T*_P and T_P depend on the judgment of the scientists and on the specific context in which the reduction occurs (Dizadji-Bahmani, Frigg, and Hartmann 2011, p. 328). Regarding bridge laws, which are not factual claims in a rigorous sense, T*_P is a logical consequence of T*_F, according to DFH. Let then P_2 be the probability distribution over the propositions one has after reducing T_P to T_F. The same simplifications introduced for T_F and T_P

Figure 2: The Bayesian network representing the situation after the reduction of T_P to T_F, according to DFH.
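To make the d-separation claim concrete, here is a small Python sketch that enumerates the joint distribution of the pre-reduction network E_F ← T_F → E ← T_P → E_P; all numerical parameters are illustrative assumptions, not values from the article:

```python
from itertools import product

# Pre-reduction network:  E_F <- T_F -> E <- T_P -> E_P
# All numerical parameters below are illustrative assumptions.
p_TF, p_TP = 0.7, 0.6                      # priors P1(T_F), P1(T_P)
p_EF = {True: 0.9, False: 0.3}             # P1(E_F | T_F)
p_EP = {True: 0.8, False: 0.2}             # P1(E_P | T_P)
p_E = {(True, True): 0.95, (True, False): 0.5,
       (False, True): 0.5, (False, False): 0.05}  # P1(E | T_F, T_P)

def bern(p, v):
    """Probability that a binary variable with parameter p takes value v."""
    return p if v else 1 - p

def joint(tf, tp, e, ef, ep):
    """Chain-rule factorisation implied by the DAG."""
    return (bern(p_TF, tf) * bern(p_TP, tp) * bern(p_E[(tf, tp)], e) *
            bern(p_EF[tf], ef) * bern(p_EP[tp], ep))

def prob(pred):
    """Probability of the event picked out by the predicate."""
    return sum(joint(*w) for w in product([True, False], repeat=5)
               if pred(*w))

# Root nodes are unconditionally independent: P1(T_F, T_P) = P1(T_F) P1(T_P)
assert abs(prob(lambda tf, tp, e, ef, ep: tf and tp) - p_TF * p_TP) < 1e-9

# d-separation, as in (8): P1(E_F | T_F, T_P) = P1(E_F | T_F)
lhs = (prob(lambda tf, tp, e, ef, ep: ef and tf and tp)
       / prob(lambda tf, tp, e, ef, ep: tf and tp))
rhs = (prob(lambda tf, tp, e, ef, ep: ef and tf)
       / prob(lambda tf, tp, e, ef, ep: tf))
assert abs(lhs - rhs) < 1e-9
```

Whatever parameter values are chosen, the two assertions hold by the structure of the graph alone, which is the force of the d-separation argument.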
19 In Bayesian networks, d-separation is a property for assessing whether or not a set A of variables is independent of another set B, given a third set C.

are applied to T*_F and T*_P. The relevant probabilities specifying the second network in Figure 2 are the prior P_2(T_F), the conditional probabilities P_2(T*_F | ±T_F), P_2(T*_P | ±T*_F), P_2(T_P | ±T*_P), and the likelihoods of the evidence variables. Following such a network, DFH show that, after the reduction, evidence confirming one theory confirms the other and vice versa. In fact:

P_2(T_P | E_F) > P_2(T_P)  and  P_2(T_F | E_P) > P_2(T_F).

The two theorems maintain that, in order to have a confirmation flow from E_F to T_P and from E_P to T_F: (i) E_F should confirm T_F and E_P should confirm T_P; (ii) T_F should confirm T*_F and T*_P should confirm T_P. The two conditions are satisfied because: (i) one of the original assumptions of the network is that the two pieces of evidence each support their own respective theory; (ii) T_F likely confirms T*_F and T*_P likely confirms T_P, because T*_F was derived from T_F, and T*_P is strongly analogous to T_P. Again, such a construction of the GNS model is justified by the example in Section 2.1.
Once one constructs a Bayesian network like Figure 2 and assumes the presence of a confirmatory flow from T F to T P via T * F and T * P , she can prove that, after reducing T P to T F , E F confirms T P , and E P confirms T F .
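This confirmation flow can be checked numerically. The sketch below encodes a simplified post-reduction chain T_F → T*_F → T*_P → T_P with evidence E_F attached to T_F; all conditional probabilities are illustrative assumptions chosen so that each link is confirmatory, roughly in the spirit of conditions (i) and (ii) above:

```python
from itertools import product

# Post-reduction chain T_F -> T*_F -> T*_P -> T_P, with evidence E_F <- T_F.
# Every conditional probability is an illustrative assumption, chosen so
# that each link is confirmatory (a true parent raises the child's probability).
p_TF = 0.5
p_TSF = {True: 0.9, False: 0.2}    # P2(T*_F | T_F): the derivation step
p_TSP = {True: 0.95, False: 0.1}   # P2(T*_P | T*_F): the bridge-law step
p_TP = {True: 0.9, False: 0.3}     # P2(T_P | T*_P): the strong-analogy step
p_EF = {True: 0.8, False: 0.2}     # P2(E_F | T_F)

def bern(p, v):
    return p if v else 1 - p

def joint(tf, tsf, tsp, tp, ef):
    return (bern(p_TF, tf) * bern(p_TSF[tf], tsf) * bern(p_TSP[tsf], tsp) *
            bern(p_TP[tsp], tp) * bern(p_EF[tf], ef))

def prob(pred):
    return sum(joint(*w) for w in product([True, False], repeat=5)
               if pred(*w))

prior = prob(lambda tf, tsf, tsp, tp, ef: tp)
posterior = (prob(lambda tf, tsf, tsp, tp, ef: tp and ef)
             / prob(lambda tf, tsf, tsp, tp, ef: ef))

# After the reduction, evidence for the fundamental theory confirms
# the phenomenological one: P2(T_P | E_F) > P2(T_P).
assert posterior > prior
```

Because every link in the chain is positively confirmatory, conditioning on E_F raises the probability of T_P, which is the content of the theorems just stated.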

The Bayesian Analysis of Tešic
The Bayesian analysis offered by DFH helps represent the reductive relation between T_P and T_F. However, Tešic (2019) points out two difficulties faced by their analysis. I omit the details of the less relevant difficulty, 20 because it does not undercut the main project laid out by DFH. 21 I focus, instead, on the main critique 20 I am referring to the difficulty faced by DFH in representing the independence between T_P and T_F as conditional. 21 In short, Tešic agrees with DFH on the account of (8) prior to the reduction. However, he notices that (8) does not hold after the reduction. I am not focusing on the details here because I will not be discussing the relation between evidence and coherence in this article. Thus, what matters to the goal of my essay is that, ultimately, T_F and T_P are unconditionally independent before the reduction. For a better understanding of this issue, see Tešic (2019, p. 1108).
Tešic presents, which regards the values of the probabilities P_2(T*_P | T*_F) and P_2(T*_F | T*_P). Tešic claims that treating the propositional variables T*_P and T*_F as interchangeable with each other (Dizadji-Bahmani, Frigg, and Hartmann 2011, pp. 329-330) without explicitly stating that bridge laws are assumed is misleading. In fact, DFH's representation of the bridge law in their Bayesian network suffers from three problems (Tešic 2019, pp. 1108-1111): i. Recall eq. (4), that is, the bridge law in the example of the Boyle-Charles law.
According to the Bayesian network in Figure 2, one should then have not only P_2(T*_P | T*_F) = 1 but also P_2(T*_F | T*_P) = 1. Tešic notices that the entailment from T*_F to T*_P is only possible by supposing the bridge law B, that is, eq. (4). The problem arising in Figure 2 is that it does not incorporate B in the background, according to Tešic. B therefore needs to be included in the probability function P_2 for the entailment to hold. ii. The fact that from eq. (4) one gets P_2(T*_P | T*_F) = P_2(T*_F | T*_P) = 1 (Tešic 2019, p. 1118) makes the reduction symmetric. The Bayesian network in Figure 2 seems to imply that the Boyle-Charles law is reduced to the kinetic theory of gases, and vice versa. This clearly goes against the main idea behind every kind of scientific reduction: reduction is antisymmetric. Furthermore, the interchangeability between T*_P and T*_F would prevent partial reductions, which are still important in science, according to DFH (Dizadji-Bahmani, Frigg, and Hartmann 2010, p. 399). In fact, scientists are not always able to connect every term of T*_P to T_F and deduce every law of T*_P from T_F plus bridge laws. iii. From eq. (4) it also follows that the marginal probabilities P_2(T_P), P_2(T*_P) and P_2(T*_F) are equal (Tešic 2019, p. 1118). It is hard to conceive of them as equal given that the equation pV = (2n/3)〈E_kin〉 is deduced from the kinetic theory of gases together with some auxiliary assumptions, and that the equation pV = kT is then deduced from the equation pV = (2n/3)〈E_kin〉 and the bridge laws. This suggests that the values of P_2(T*_P) and P_2(T*_F) should be left open.
Because of these three problems, Tešic presents an alternative Bayesian network to Figure 2. The main idea behind his network is to explicitly include the propositional variable B representing the bridge law as a root node. Let P_3 be a probability distribution over the variables in Figure 3. The same simplification applied to T_F and T_P applies to B: assume that the only element of B is B, and that the two values assignable to the propositional variable B are B and ¬B. Two reasons motivate this explicit specification of the bridge laws (Tešic 2019, p. 1121): a) different scientists (often) have different credences about a particular bridge law (e.g. scientists in fact hold different degrees of belief in eq. (4)); b) the flow of confirmation depends on the value assigned to the probability of the bridge law. Thus, Tešic assigns the following values to the conditional probabilities of T*_P:

P_3(T*_P | T*_F, B) = 1,  P_3(T*_P | T*_F, ¬B) = P_3(T*_P | ¬T*_F, B) = P_3(T*_P | ¬T*_F, ¬B) = a,   (13)

where a ∈ (0, 1). Accordingly, the new probability assignments do not face the three problems noticed by Tešic. In fact, the first problem is avoided because now one has P_3(T*_P | T*_F, B) = 1 and, thus, the entailment holds with the bridge law explicitly in the background. The second problem is evaded by showing that the values in (13) entail 0 < P_3(T*_F | T*_P, B) < 1: this means the reduction represented by Tešic's network is not symmetric. Finally, the third problem is successfully addressed because it is proved that the prior probabilities P_3(T*_P) and P_3(T*_F) can be either different or equal: in fact, this depends on the particular values one assigns to the relevant probabilities. According to this analysis, Theorems 2.1 and 2.2, which already appeared in DFH's network, still follow.
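A minimal Python sketch of Tešic's idea, with the bridge law B as an explicit root node: the numbers below (the priors and the parameter a) are illustrative assumptions, and the assertions check that the entailment runs only from T*_F and B to T*_P, so the represented reduction is asymmetric:

```python
from itertools import product

# Tešic's proposal: the bridge law B is an explicit root node, and T*_P is
# entailed by T*_F together with B. All numbers are illustrative assumptions.
p_TSF = 0.6    # prior P3(T*_F), deliberately left open
p_B = 0.8      # prior P3(B): a scientist's credence in the bridge law
a = 0.3        # P3(T*_P | ...) in the cases where the entailment fails

def p_TSP_given(tsf, b):
    return 1.0 if (tsf and b) else a   # entailment only via T*_F AND B

def bern(p, v):
    return p if v else 1 - p

def joint(tsf, b, tsp):
    return bern(p_TSF, tsf) * bern(p_B, b) * bern(p_TSP_given(tsf, b), tsp)

def prob(pred):
    return sum(joint(*w) for w in product([True, False], repeat=3)
               if pred(*w))

forward = (prob(lambda tsf, b, tsp: tsp and tsf and b)
           / prob(lambda tsf, b, tsp: tsf and b))
backward = (prob(lambda tsf, b, tsp: tsf and tsp and b)
            / prob(lambda tsf, b, tsp: tsp and b))

assert abs(forward - 1.0) < 1e-12   # T*_F and B entail T*_P ...
assert backward < 1.0               # ... but not conversely: asymmetry
```

The backward conditional probability stays strictly below 1 for any a ∈ (0, 1), which is exactly how the explicit bridge-law node blocks the symmetry problem.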
For all the reasons mentioned in this section, I will compare the coherence pre- and post-reduction between the reducing theory and the reduced one by following the probability assignments specified in the Bayesian network in Figure 3.

3 Coherence Measures

Coherence measures are probabilistic measures of the degree of coherence of information sets. They are real-valued functions, and the value they assign to each set of propositions represents the degree of coherence of such a set. Coherence might not have a fixed meaning, and quantifying coherence suffers from this presumed vagueness behind its notion. Because of this, there is not a single coherence measure: different coherence measures try to capture the several conceptions one might have of coherence. Regarding the GNS, it might seem obvious that the reduction of T_P to T_F establishes some sort of coherence between the two theories (Sarkar 2015, p. 47) because of the way T_P is logically derived from T_F and the bridge law. In particular, from the perspective of a confirmation-laden coherence measure such as the Shogenji-Schupbach measure, the condition which suggests an improved agreement between T_P and T_F after the reduction of one to the other is the positive confirmation flow that goes from T_F to T_P via the bridge laws and the auxiliary assumptions. This measure, in fact, treats the coherence of an information set as the mutual support of the propositions in it, in line with the view that coherence corresponds to the probabilistic dependence between the propositions in a set. As opposed to this view, Olsson's measure construes coherence as the relative overlap amongst those propositions: the larger the probability of their conjunction relative to that of their disjunction, the higher the degree of coherence and the higher the agreement of the propositions. These two coherence measures are the main ones in the literature, and each corresponds to a different property, i.e. dependence and agreement respectively.
Dependence and agreement cannot, however, be fulfilled at the same time. 22 In Nagel's words, reduction makes sure that two theories with largely overlapping domains are mutually consistent when they describe the same event. It appears that the aforementioned properties are desirable in this context. Therefore, what one would need from these coherence measures are stable verdicts, to the extent that they all give similar results. Furthermore, while one would expect no coherence at all (or a very low degree of coherence) prior to the reduction, after the reduction T_F should cohere with T_P. 23 In fact, obtaining the same result through different means counts as a valid way to further support Nagel's view on coherence in scientific reductions. Here, it is important to remark that I am interested in the notion of relative coherence rather than an absolute one; what is relevant is to check that the set containing the theories prior to the reduction is less coherent than the set of theories after the reduction occurs. The three coherence measures (Bovens and Hartmann 2003; Olsson 1999; Schupbach 2011) which I will now present might yield different results in certain contexts: 24 hopefully, in the case of the Bayesian network representing the GNS, they will not.

22 See Koscholke, Schippers, and Stegmann (2018) for more details.
23 Due to the limited breadth of this article, I am not taking into account the total evidence in the comparison between pre- and post-reduction joint probabilities. As will be shown below, I limit myself to comparing the degree of coherence of two information sets containing T_F and T_P. Thus, I consider the prior probabilities of the propositional variables T_F, T*_F, T*_P, T_P. If one wants to focus on assessing the degree of coherence between a theory and the total evidence supporting it, one should discuss coherence measures which resemble confirmation measures. See Bovens and Hartmann (2003, pp. 626-628) and Meijs (2005, chapter 5) for more details.

Schupbach's Measure
To introduce Schupbach's measure, consider a finite and non-empty information set S, that is, a set of ordered pairs

S ≐ {〈R_1, A_1〉, …, 〈R_n, A_n〉},

where R_i is a source reporting that A_i, the content of the report R_i, is true. The content of the information set S is the ordered set of report contents 〈A_1, …, A_n〉. Let P be the probability distribution over the propositions A_1, …, A_n, where P(A_i) gives the degree of confidence of a rational agent in A_i. A coherence measure is a function C which maps every finite and non-empty set S of propositions with positive probability to a single real-valued outcome C(S). In other words, the degree of coherence C(S) consists precisely in the degree of coherence of its content 〈A_1, …, A_n〉 (Bovens and Hartmann 2003). Then, let S be the collection of all finite and non-empty information sets S of propositions with positive probability. Shogenji (1999) proposes to define the coherence of a set S ∈ S as:

C_S(S) = P(A_1 ∧ … ∧ A_n) / (P(A_1) ⋅ … ⋅ P(A_n)).   (15)

24 In epistemology, the coherence theory of justification, or coherentism, regards a belief in a set of beliefs, or the set of beliefs itself, as justified if the former coheres with the set it belongs to and the latter forms a coherent system. Usually, coherence is used as an epistemic criterion for theory choice, even though coherentism does not assess the truth of a proposition or a belief in terms of its coherence. Rather, Moretti (2007) has found that several coherence measures are confirmation conducive, which means that if evidence confirms a hypothesis, confirmation is transmitted to any hypotheses that are sufficiently coherent with the former hypothesis (Dietrich and Moretti 2005). Furthermore, following Moretti and Akiba (2007), coherence measures violate intuitive epistemic principles we would want them to respect.
This leads every coherence measure producing counter-intuitive results, e.g. the degree of coherence changes by adding or removing logical consequences to the information set, according to every coherence measure. See Olsson (2017) for more details.
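As a quick sketch of how Shogenji's ratio behaves, here is a minimal Python rendering of the measure; the probability values below are toy numbers chosen purely for illustration, not taken from the article:

```python
def shogenji(p_joint, marginals):
    """Shogenji's measure: joint probability over the product of the marginals."""
    prod = 1.0
    for p in marginals:
        prod *= p
    return p_joint / prod

# Independent propositions: the joint equals the product, so the measure is 1.
print(shogenji(0.25, [0.5, 0.5]))        # 1.0

# n logically equivalent propositions of probability 0.3: the joint stays 0.3
# while the denominator shrinks to 0.3**n, so the measure grows with n.
print(shogenji(0.3, [0.3, 0.3]))         # ≈ 3.33
print(shogenji(0.3, [0.3, 0.3, 0.3]))    # ≈ 11.11
```

The three cases discussed in the text correspond to outcomes above 1 (coherence), exactly 1 (neither coherent nor incoherent), and below 1 (incoherence).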
Shogenji conceives of the coherence of S as mutual support between the propositions in S. The measure is sensitive to the number n of sources in cases of logically equivalent propositions: the joint probability of n logically equivalent propositions equals the probability of any one of them, while the denominator shrinks as their product, so, as n, the number of agreeing reports, tends to infinity, so does the degree of coherence. Indeed, "the more coherent beliefs are, the more likely they are together" (Shogenji 1999, p. 338). Instead, if the propositions of a set are independent, the variables representing them will be probabilistically independent and the coherence measure will equal 1: in this case, the set S is neither coherent nor incoherent. Finally, if the joint probability of the propositions is lower than the product of their probabilities, the set will be incoherent. A series of criticisms 25 has been offered against the way this measure tries to capture the meaning of coherence (Schupbach 2011, pp. 3-5). For this reason, Schupbach suggests generalising Shogenji's measure by calculating the degree of coherence as a weighted average of the degrees of coherence of all subsets of S. Schupbach then proposes several definitions which are needed to avoid the problems arising with Shogenji's original measure and, therefore, to measure the degree of coherence in a more precise manner. As I will show in the next section, I am only interested in information sets with cardinality two. If a set has cardinality two, the subset-sensitive generalisation of Shogenji's measure offered by Schupbach can simply be represented by the log-normalised version of Shogenji's measure (Schupbach 2011, p. 9), namely eq. (16). 26 In fact, sets with cardinality two avoid all the problems faced by sets with cardinality strictly larger than two. So, if n = 2 and S = {A 1 , A 2 }, then:

S(S) = log [ P(A 1 ∧ A 2 ) / (P(A 1 ) × P(A 2 )) ]. (16)

25 E.g. according to Shogenji's measure, adding an irrelevant proposition to an information set (coherent or not) necessarily leaves the degree of coherence of the set unchanged. 26 For this reason, I will not show the intricate means by which Schupbach's measure avoids Shogenji's counter-intuitive results.

If one measures the degree of coherence of a set containing probabilistically independent propositions, the outcome will be 0, since the joint probability of probabilistically independent variables is the product of their marginals, and the logarithm of the ratio of two equal values is 0. The outcome of S(S) will tend to

negative infinity if the set S is not coherent. Vice versa, it will tend towards positive infinity if S is coherent.

Olsson's Measure

Olsson (1999) understands coherence as total agreement between the propositions in an information set S ∈ S. In the case of a set S = {A 1 , A 2 }, Olsson proposes the following coherence measure:

O(S) = P(A 1 ∧ A 2 ) / P(A 1 ∨ A 2 ), (18)

which, for a set S = {A 1 , …, A n }, generalises to P(A 1 ∧ … ∧ A n ) / P(A 1 ∨ … ∨ A n ). 27 This measure ranges over the closed interval [0, 1]. O indeed assigns 0 in cases of minimal agreement, that is, when the propositions involved are logically inconsistent: regarding eq. (18), A 1 and A 2 would then not overlap. In cases of maximal agreement between the propositions reported, that is, when the propositions in the set are logically equivalent, the measure gives 1 as its outcome. This measure also produces counter-intuitive results (Dietrich and Moretti 2005, p. 407), such as the possibility of reporting the degree of coherence of two positively dependent propositions as lower than that of two negatively dependent propositions. This remark about O is not relevant for the aim of this article: in the next section, I will compare the degree of coherence before and after the reduction by exploiting (and assuming), respectively, the independence and the positive dependence between the two theories.

27 Meijs generalises Olsson's measure into a coherence measure viewed as relative overlap. See Meijs (2005, ch. 3) for more details. Because such a measure, in this specific context, meets the same conditions as the confidence-boosting measure elaborated by Bovens and Hartmann (which will be outlined in the next subsection), I will keep using Olsson's measure.

Bovens and Hartmann's Measure

Bovens and Hartmann (2003) 28 change the course of previous coherence measures. Instead of trying to make precise one particular intuition about the notion of coherence (i.e. mutual support, total agreement, relative overlap), they focus on the role that coherence, as a property of information sets, plays: boosting our confidence that the content of an information set is true, ceteris paribus, once the information is received from independent and partially reliable sources. The model Bovens and Hartmann construct aims to measure the degree of confidence in the joint truth of an information set. Such a degree is determined by a combination of conditions: the prior expectedness of the results, the reliability of the sources, and the coherence of the information. Each of these conditions has a specific measure: respectively, i) an expectance measure, which concerns the degree of prior expectedness of the joint truth of an information set; ii) a reliability measure, which computes the degree of reliability of the sources; and iii) the coherence measure, which is needed to assess the degree of coherence of the set.

28 Their position is known as weak Bayesian coherentism. See Huemer (2007) for a broader discussion on Bayesianism and coherentist theories of epistemic justification.

Consider again n partially reliable sources i which report proposition A i , for i = 1, …, n, so that the information set is {A 1 , …, A n }. The propositional variable A i is defined for the proposition A i and can take on two values: A i and ¬A i . Similarly, the propositional variable R i can take on two values: R i and ¬R i . If the report states that A i is the case after the proper source has been consulted, then R i ; otherwise, ¬R i . Then, let P be a probability distribution over the variables A 1 , …, A n , R 1 , …, R n which satisfies the constraints of having independent and partially reliable sources. One needs sources to be:

i. independent, or else sources would present information either by looking at other reports and bringing in additional information, or by conveying what they think a coherent report is.
For coherence to play a confidence-boosting role, the sources should gather information through, and only through, their own observations, which they report without inferring what they think a coherent report would look like;

ii. partially reliable, or else sources would be either truth-tellers or randomisers. The latter would report the information in an entirely random manner, and assessing the degree of coherence of information reported without any degree of reliability is useless. The former would report only true information, making the property of coherence redundant: it would no longer matter whether a report is coherent, since it is already true, given that its information comes from a fully reliable source.
These two points can be formally translated.

i. Having independent sources means that R i should report that A i is the case only given that the source has (likely) observed A i , without her observations being affected by additional facts. Probabilistically speaking, A i screens off R i from all other variables A j and R j . Thus, R i is conditionally independent of A 1 , R 1 , …, A i−1 , R i−1 , A i+1 , R i+1 , …, A n , R n , given A i , for i = 1, …, n:

P(R i | A i , A 1 , R 1 , …, A i−1 , R i−1 , A i+1 , R i+1 , …, A n , R n ) = P(R i | A i ). (20)

ii. Partial reliability can be specified with two parameters, namely the true positive rate P(R i | A i ) = p and the false positive rate P(R i | ¬A i ) = q. Bovens and Hartmann then assume that all sources are equally reliable, that is, all sources have the same p and the same q. The assumption is introduced because knowing how much one trusts a source is not relevant to assessing the degree of coherence of an information set. For the reasons stated above, all sources in this model are deemed epistemically imperfect: more reliable than randomisers, but less reliable than truth-tellers. Thus, the following constraint is imposed on P:

1 > p > q > 0. (21)

Bovens and Hartmann then define the degree of confidence in the information set as equal to the posterior joint probability of the propositions in the set after all reports have been collected:

P*(A 1 ∧ … ∧ A n ) = P(A 1 ∧ … ∧ A n | R 1 , …, R n ). (22)

Then, they apply Bayes' rule to eq. (22) and simplify it with respect to the independence constraint (eq. (20)). The numerator can be seen as:

p^n ξ 0 ,

where p^n is the true positive rate p raised to the nth power, and ξ 0 is the prior probability of the conjunction of n − 0 positive values and 0 negative values of the variables A 1 , …, A n .
The denominator looks like:

Σ i=0..n ξ i p^(n−i) q^i .

In the denominator, Bovens and Hartmann gather all terms in which the variables A 1 , …, A n take, first, n positive values and 0 negative values, then n − 1 positive values and 1 negative value, and so on, until the term in which those variables take 0 positive values and n negative values is reached. This means that, for instance, ξ 1 is the prior probability that exactly one proposition is false. Finally, if both numerator and denominator are divided by p^n, the posterior probability P*(A 1 ∧ … ∧ A n ) becomes:

P*(A 1 ∧ … ∧ A n ) = ξ 0 / Σ i=0..n ξ i x^i ,

where x = q/p, that is, the likelihood ratio, and Σ i=0..n ξ i = 1. ξ i is therefore the sum of the joint probabilities of all the instances in which n − i of the variables A 1 , …, A n take positive values and i take negative values.
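The posterior just derived can be checked numerically. A minimal sketch, taking the weight vector ⟨ξ 0 , …, ξ n ⟩ and the likelihood ratio x = q/p as inputs; the numbers are toy values, not taken from the article:

```python
def posterior_joint(xi, x):
    """P*(A1 ∧ … ∧ An) = ξ0 / Σ_i ξ_i x^i, where x = q/p."""
    return xi[0] / sum(w * x**i for i, w in enumerate(xi))

# Two independent fair propositions: ξ0 = 0.25, ξ1 = 0.5, ξ2 = 0.25.
# With partially reliable sources (x = 0.5), the posterior exceeds the prior ξ0:
# receiving two positive reports boosts confidence in the joint truth.
p_star = posterior_joint([0.25, 0.5, 0.25], 0.5)
print(p_star)   # ≈ 0.444
```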
According to the information collected so far, the three measures of Bovens and Hartmann can finally be presented. The expectance measure is defined by the prior joint probability of the propositions in the information set, i.e. the probability before any report was received:

ξ 0 = P(A 1 ∧ … ∧ A n ).

The more ξ 0 increases, the more the degree of confidence in the set increases. The degree of confidence in the information set, i.e. eq. (22), is also a monotonically increasing function of r, the reliability measure:

r = 1 − x,

where x is the likelihood ratio. This measure ranges over the open interval (0, 1), because the sources are neither fully reliable (r ≠ 1) nor entirely unreliable (r ≠ 0). The last relevant measure is the coherence measure. In order to evaluate the coherence of an information set, they measure the proportion of the confidence boost b, defined by the ratio P*(A 1 ∧ … ∧ A n ) / P(A 1 ∧ … ∧ A n ), relative to the confidence boost b max : the boost which would have been received if the same information had been received in the form of maximally coherent information. A maximally coherent information set would contain only logically equivalent propositions, and it has a specific distribution of ξ: the same ξ 0 , ξ n = 1 − ξ 0 , and ξ i = 0 for every 0 < i < n. After calculating the posterior joint probability of a maximally coherent information set and its confidence boost, Bovens and Hartmann compute the coherence measure of an information set S = {A 1 , …, A n }. This measure is functionally dependent on the expectance and the reliability measures (cf. Bovens and Hartmann 2003, p. 612):

B(S) = [ξ 0 + (1 − ξ 0 ) x^n] / [Σ i=0..n ξ i x^i].

According to Meijs (2005), the maximality requirement is what makes Bovens and Hartmann's measure produce counter-intuitive results, because B may, in some cases, rate the degree of coherence of a set containing independent propositions higher than that of a set whose members are positively dependent. This feature of B could threaten the aim of the article: comparing the degrees of coherence of the two theories pre- and post-reduction.
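For n = 2, the measure just described can be sketched in a few lines. The closed form for b max below follows from the maximally coherent weight vector stated above (ξ 1 = 0, ξ 2 = 1 − ξ 0 ); the input probabilities are toy values chosen for illustration:

```python
def bh_coherence2(xi0, xi1, x):
    """Bovens–Hartmann coherence B for a two-member set: the ratio of the
    actual confidence boost to the boost of a maximally coherent set
    with the same expectance xi0 (x = q/p)."""
    xi2 = 1.0 - xi0 - xi1
    boost = 1.0 / (xi0 + xi1 * x + xi2 * x**2)        # b = P*/xi0
    boost_max = 1.0 / (xi0 + (1.0 - xi0) * x**2)      # b_max
    return boost / boost_max

# A maximally coherent set reaches 1; independent fair propositions fall short.
print(bh_coherence2(0.25, 0.0, 0.5))   # 1.0
print(bh_coherence2(0.25, 0.5, 0.5))   # ≈ 0.78
```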
Therefore, it will be necessary to check whether the results of the pre- and post-reduction comparisons, obtained through different coherence measures, agree. Should the results contrast with each other, further remarks on the choice of the proper coherence measure would have to be made. It would, in fact, become a contextual question of which coherence measure to prioritise with respect to the epistemic conception of coherence (e.g. mutual support, boosting confidence) it represents.

Comparing the Degree of Coherence
In this section, I will compare the degree of coherence of the two theories before and after the reduction. To calculate the degree of coherence with the measures introduced above, consider the information set S = {T F , T P } containing the two theories before the reduction, and the information set S′ = {T F , T P } containing the same two theories after the reduction. Then, let ≻ be a quasi-ordering relation over the set S = {S, S′}, denoting the binary relation at least as coherent as, such that if S′ ≻ S, then S′ is at least as coherent as S. To compare S′ and S, it is important to use the assumption Bovens and Hartmann make for the sources: they need to be partially and equally reliable, and independent. So, I am assuming that, for example, a scientist has the same credence in P 1 (E F | T F ), P 3 (E F | T F ), P 1 (E P | T P ), P 3 (E P | T P ), and so on. 29 Having such an assumption regarding the evidence is important, since neither Schupbach nor Olsson consider the role coherence plays for the evidence supporting theories.

Schupbach's Measure
Recall Schupbach's coherence measure. Before the reduction, T F and T P are probabilistically independent. Hence, according to Schupbach, the value of coherence will be 0:

S(S) = log [ P 1 (T F ∧ T P ) / (P 1 (T F ) × P 1 (T P )) ] = log 1 = 0.

The situation changes after T P is reduced to T F , as the two theories become probabilistically dependent:

S(S′) = log [ P 3 (T F ∧ T P ) / (P 3 (T F ) × P 3 (T P )) ].
Therefore, according to Schupbach, the two theories will cohere with each other after the reduction if and only if the argument of the logarithm is strictly greater than 1, that is, if and only if the joint probability of T F and T P is higher than the product of their prior probabilities. Thus, one obtains the following theorem (see Appendix A): S′ ≻ S if and only if P 3 (T * F | T F ) > P 3 (T * F | ¬T F ) and P 3 (T P | T * P ) > P 3 (T P | ¬T * P ). The first part of the theorem means that if T P and T * P are independent, or T F and T * F are independent, then T P and T F remain independent after the reduction and their coherence does not improve. The second part of the theorem means that coherence between T P and T F is gained after the reduction if and only if: i) the conditional probability of T P given T * P is higher than the conditional probability of T P given ¬T * P ; and ii) the conditional probability of T * F given T F is higher than the conditional probability of T * F given ¬T F .
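The pre/post contrast can be illustrated with toy numbers. The marginals and the post-reduction joint below are assumed purely for illustration; only their qualitative relation (joint equal to, then above, the product of the priors) matters:

```python
import math

def schupbach2(p_joint, p_a, p_b):
    """Log-normalised Shogenji measure for a two-member set."""
    return math.log(p_joint / (p_a * p_b))

# Before the reduction: independence, so the joint equals the product of the priors.
print(schupbach2(0.36, 0.6, 0.6))   # ≈ 0 (neither coherent nor incoherent)

# After the reduction: positive dependence raises the joint above the product.
print(schupbach2(0.45, 0.6, 0.6))   # ≈ 0.22 > 0
```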

Olsson's Measure
The same theorem emerges from the comparison made with Olsson's measure. Before T P is reduced to T F , one has to maintain the independence between their variables. Thus, the coherence measure formulated by Olsson looks like the following:

O(S) = P 1 (T F ∧ T P ) / P 1 (T F ∨ T P ) = [P 1 (T F ) × P 1 (T P )] / [P 1 (T F ) + P 1 (T P ) − P 1 (T F ) × P 1 (T P )].

Once the reduction happens, O for S′ is:

O(S′) = P 3 (T F ∧ T P ) / P 3 (T F ∨ T P ).

Here, one needs to fix the prior probability P 1 (T P ), i.e. P 1 (T P ) = P 3 (T P ), in order to meaningfully compare the two measures. Then, the sufficient and necessary condition for S′ ≻ S to hold is that the denominator of O(S′) is less than or equal to the denominator of O(S), because the numerator of O(S′) is greater than the numerator of O(S). 30 Once the values are substituted in the coherence measure O, 31 the same theorem is obtained as with Schupbach's measure: the higher the difference between the conditional probabilities highlighted in the theorem, the higher the degree of coherence.
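Olsson's ratio moves in the same direction. A minimal sketch with toy numbers, assumed purely for illustration (independence before the reduction, a positively dependent joint afterwards, marginals held fixed):

```python
def olsson2(p_joint, p_a, p_b):
    """Olsson's measure: P(A1 ∧ A2) / P(A1 ∨ A2)."""
    return p_joint / (p_a + p_b - p_joint)

# Pre-reduction (independence): joint = 0.6 × 0.6 = 0.36.
print(olsson2(0.36, 0.6, 0.6))   # ≈ 0.43
# Post-reduction (positive dependence), with the marginals held fixed.
print(olsson2(0.45, 0.6, 0.6))   # ≈ 0.60
```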

Bovens and Hartmann's Measure
To evaluate the degree of coherence pre- and post-reduction in light of the analysis of Bovens and Hartmann, the reliability measure they formulate should not play any role. 32 In fact, Bovens and Hartmann construct a quasi-ordering relation over the set {S, S′}, and this binary relation is formally independent of the specific value of the reliability measure. Reconsider the two sets S and S′, which have the same size, i.e. 2. Recall that while P 1 is the joint probability distribution over the pre-reduction propositions T F and T P , P 3 is the joint probability distribution over the post-reduction propositions T F and T P . By calculating the weight vectors ⟨ξ 0 , ξ 1 ⟩ for P 1 and ⟨ξ′ 0 , ξ′ 1 ⟩ for P 3 , one can construct the difference function f r (S′, S), i.e. the difference between the degrees of coherence of S′ and of S computed at reliability r. The relation which induces a quasi-ordering over the set of the two information sets S and S′ is then defined as: for S, S′ ∈ S, S′ ≻ S iff f r (S′, S) ≥ 0 for all values of r ∈ (0, 1).
Finally, to determine whether f r ≥ 0, one needs to assess the conditions under which the sign of f r is actually positive for all values of r ∈ (0, 1). For this reason, Bovens and Hartmann calculate the following condition:

ξ 0 ≤ ξ′ 0 and ξ 1 ≥ ξ′ 1 .

This condition is necessary and sufficient for S′ ≻ S to hold. Consider the first conjunct. ξ 0 ≤ ξ′ 0 resembles what has been shown above:

P 1 (T F ) × P 1 (T P ) ≤ P 3 (T F ∧ T P ).

This is our usual condition. Then, ξ 1 ≥ ξ′ 1 means that:

P 1 (T F ∧ ¬T P ) + P 1 (¬T F ∧ T P ) ≥ P 3 (T F ∧ ¬T P ) + P 3 (¬T F ∧ T P ).

Once P 1 (T P ) is fixed and made equal to P 3 (T P ) (for details, see Appendix C), a theorem similar to the ones seen above follows: these are the same conditions one obtains with Olsson's and Schupbach's measures. The second part of the condition mentioned by Bovens and Hartmann is ruled out because it has been shown earlier that the joint probability of T F and T P is higher after the reduction than before. Thus, the second part of Bovens and Hartmann's condition does not hold.

30 See Appendix A for details. 31 See Appendix B for details. 32 I have been using this assumption so far for Schupbach's and Olsson's measures.
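The ordering can be checked numerically. Below, pre- and post-reduction weight vectors are built from toy probabilities (marginals 0.6 for both theories, joint 0.36 before and 0.45 after the reduction — assumed values, not taken from the article), f r is read as the difference of the two Bovens–Hartmann coherence values, and the sign is evaluated on a grid of likelihood ratios:

```python
def bh_coherence2(xi0, xi1, x):
    """Bovens–Hartmann coherence B for a two-member set (x = q/p)."""
    xi2 = 1.0 - xi0 - xi1
    return (xi0 + (1.0 - xi0) * x**2) / (xi0 + xi1 * x + xi2 * x**2)

# Toy weight vectors: marginals 0.6 for both theories,
# joint 0.36 (independence) before the reduction and 0.45 afterwards.
pre  = (0.36, 0.48)   # xi0, xi1 under P1
post = (0.45, 0.30)   # xi0', xi1' under P3

# f_r >= 0 must hold for every reliability value, i.e. every x in (0, 1).
diffs = [bh_coherence2(*post, x) - bh_coherence2(*pre, x)
         for x in (i / 100 for i in range(1, 100))]
print(min(diffs) >= 0)   # True: S' is at least as coherent as S
```

Note that the toy vectors satisfy the condition above: ξ 0 = 0.36 ≤ 0.45 = ξ′ 0 and ξ 1 = 0.48 ≥ 0.30 = ξ′ 1 .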

Coherence and Confirmation
The conditions under which the coherence measures report that S′ is more coherent than S are the same as those reported by the confirmation measure used by DFH (Dizadji-Bahmani, Frigg, and Hartmann 2011), which says that evidence confirming T P also confirms T F , and vice versa. 33 In these examples, I assume that there is a positive flow of confirmation from T F to T P , that is, p * F > q * F and p * P > q * P . The first two examples show the different degrees of coherence of S′ according to Schupbach's and Olsson's coherence measures as a function of the amount of confirmation between T F and T * F . The examples have p * F and q * F set as variables and t F , a, b, p * P , and q * P as parameters. The value assigned to t F , a, b, and p * P is 0.5, while

33 I would like to thank the second reviewer for raising this point.

q * P is assigned 0.1. 34 Interestingly, both graphs reveal that the degree of coherence of the two theories after the reduction and the amount of confirmation between T F and T * F are positively related (Figure 4).
The other two examples, in Figure 5, show instead the different degrees of coherence of S′ according to Schupbach's and Olsson's coherence measures as a function of the amount of confirmation between T * P and T P . Here, I set p * P and q * P as variables and t F , a, b, p * F , and q * F as parameters. As in the previous example, the value assigned to t F , a, b, and p * F is 0.5, while q * F is assigned 0.1. As opposed to the two graphs shown beforehand, these graphs display a negative correlation between the degree of coherence of the two theories after the reduction and the amount of confirmation between T * P and T P : the more T F and T P cohere, the less T * P confirms T P . Finally, the graphs in Figure 6 reveal the positive correlation between the prior probability of the bridge laws and the degree of coherence of S′. As expected, a higher assignment of b contributes to a higher degree of coherence. Bridge laws, however, do not provide a major contribution to the coherence of the set containing the two theories after the reduction.

Figure 4: The x-axis and the y-axis in the two pictures are, respectively, the amount of confirmation between T F and T * F (i.e. P 3 (T * F | T F ) − P 3 (T * F )) and the degree of coherence of S′ (the first graph shows O(S′), the second S(S′)). The variables p * F and q * F take any value from 0.01 to 0.99. When the difference p * F − q * F tends to 1, the area shown in the graphs converges towards a single value, since only the values p * F = 1 and q * F = 0 can give p * F − q * F = 1. The highlighted red lines are for p * F = 0.5 and show the positive correlation between coherence and confirmation.
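The positive correlation in Figure 4 can be illustrated with a stripped-down sketch. Instead of the full Bayesian network, I simply raise the post-reduction joint probability at fixed marginals, which simultaneously raises the confirmation T F lends to T P and both coherence scores; all numbers are toy values assumed for illustration:

```python
import math

p_f = p_p = 0.6   # fixed toy marginals for T_F and T_P under P3

rows = []
for joint in (0.36, 0.39, 0.42, 0.45):
    confirmation = joint / p_f - p_p              # P3(T_P|T_F) - P3(T_P)
    olsson = joint / (p_f + p_p - joint)          # O(S')
    schupbach = math.log(joint / (p_f * p_p))     # S(S')
    rows.append((confirmation, olsson, schupbach))

# Both coherence measures increase monotonically with the confirmation term.
print(all(a < b for a, b in zip(rows, rows[1:])))   # True
```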
34 The values assigned to these parameters could in principle be given by scientists, and their disagreement about prior probabilities and likelihoods could be represented in another manner (e.g. with imprecise probabilities). Hence, I assign arbitrary values here simply to better understand the relation between confirmation and coherence.

Conclusions
In this article, I have shown that, according to three coherence measures and under the assumption elaborated by Bovens and Hartmann, 35 the degree of coherence between two self-consistent and well-confirmed theories with largely overlapping domains of application which are involved in a reduction relation à la GNS is higher than the degree of coherence of the same theories when, before one is reduced to the other, they sketch contradictory views of the world. To show this, I first presented the classic putative example where the GNS can be applied to describe the main relations occurring in a scientific reduction: the reduction of TD to SM. Then, I looked at two attempts at modelling the GNS in probabilistic terms. Specifically, DFH and Tešic offer two Bayesian analyses, which mostly disagree on the probabilistic account of bridge laws, a crucial relation in the GNS. In fact, Tešic undermines the view shared by DFH that the propositional variables T * P and T * F are interchangeable. This view faces three problems: i) it proposes an incorrect entailment between the laws derived from the fundamental theory through the bridge laws and the fundamental theory itself; ii) it makes the reduction symmetric; iii) it omits the role of auxiliary assumptions and boundary conditions in deriving laws strongly analogous to the ones involved in the reduction. The necessity of explicitly assuming the bridge law B in order to overcome these three challenges leads Tešic to formulate a new probability distribution P 3 and therefore new probability assignments. This new purported probabilistic representation of the GNS has been used to assess the degree of coherence between the two theories in question, as the three coherence measures proposed are expressed in probabilistic terms. I briefly mentioned which epistemic notion of coherence these measures try to grasp. Schupbach's measure understands coherence as mutual support amongst the propositions included in an information set. Olsson's measure conceives of coherence as agreement between the propositions involved.

Figure 5: The x-axis and the y-axis in the two pictures are, respectively, the amount of confirmation between T * P and T P (i.e. P 3 (T P | T * P ) − P 3 (T P )) and the degree of coherence of S′ (the first graph shows O(S′), the second S(S′)). The variables p * P and q * P take any value from 0.01 to 0.99. Also in these examples, when the difference p * P − q * P tends to 1, the area shown in the graphs converges towards a single value, since only the values p * P = 1 and q * P = 0 can give p * P − q * P = 1. The highlighted red lines are for p * P set as 0.5 and show the negative correlation between coherence and confirmation.

Figure 6: The x-axis and the y-axis in the two pictures are, respectively, the different probability assignments for b and the degree of coherence of S′ (the first graph shows O(S′), the second S(S′)). The variable b takes any value from 0.01 to 0.99. Here, t F , a, p * F , p * P , q * F , and q * P are set as parameters. As in the previous examples, the value assigned to t F , a, p * F , and p * P is 0.5, while q * F and q * P are assigned 0.1.
Bovens and Hartmann, instead, focus on the role that coherence, as a property of an information set, should play whenever one wants to assess whether or not the content of the set is true. Coherence, according to this framework, boosts one's confidence that the set is true, ceteris paribus, once the information is received from independent and partially reliable sources. Due to the limited breadth of the article, I did not explain in greater detail the counter-intuitive results these measures may deliver in certain scenarios. Rather, I tried to show heuristically that, in the case of the GNS constructed in light of Tešic's suggestions, they do not report different outcomes. In the fourth section, I showed that they in fact yield stable results. The three theorems point out that a set containing T F and T P has a higher degree of coherence after T P is reduced to T F than before the reduction if and only if P 3 (T * F | T F ) > P 3 (T * F | ¬T F ) and P 3 (T P | T * P ) > P 3 (T P | ¬T * P ). These two conditions are likely met by the theories involved in the GNS. Thus, under the assumptions of the Bayesian analysis provided by Tešic and the one provided by Bovens and Hartmann, the coherence measures report that the GNS makes the two theories involved cohere with each other in light of their positive dependency. The two inequalities highlighted in the theorems have to hold in order to consider two theories involved in the reductive relation coherent. In the last section, I finally introduced six numerical examples aimed at better understanding the relation between coherence and confirmation, as well as their respective measures, in the context of intertheoretic reduction as designed by DFH and Tešic. Interestingly, while bridge laws and the confirmation flow between T F and T * F are positively related to the degree of coherence of S′, the confirmation flow between T * P and T P is not. Further projects can still be proposed at the intersection of coherence and the GNS.
First of all, regarding Tešic's probability assignment to the bridge law and, in general, every probability assignment in the Bayesian network (see Figure 3), it would be interesting to use tools borrowed from imprecise probability (IP) to better represent the credences scientists might have towards prior probabilities and likelihoods. In fact, scientists usually disagree about the value assignments of prior probabilities and likelihoods. IP is a generalisation of probability theory applicable to cases where it is hard to identify a unique probability distribution, because evidence might be scarce, vague, or conflicting. Thereby, the goal of IP is to represent the available knowledge more accurately, instead of focusing on a single precise outcome. Would IP be able to represent the situation before and after the reduction and assess whether or not the two post-reduction theories cohere with each other? Second, more coherence and confirmation measures should be used to check the results I showed above, and further discussion should focus on their relation. The three coherence measures might be extensionally equivalent with respect to the GNS because, even if their frameworks and their main epistemic notions of coherence differ, they all confirmed the same hypothesis and provided stable results. What would happen with other coherence measures? Would they report an increase or a decrease in coherence if considered as a function of the amount of confirmation? Third, the coherence relation between evidence and theories requires a proper investigation. One might want to start this investigation from the work of Meijs (2005) and then apply it to the case of the GNS. Will the evidence for a theory be coherent with the other theory, and vice versa? Fourth and finally, the third question opens up the debate between foundationalism and coherentism, and the question of the role coherence should play in assessing a scientific theory.
Throughout, I have worked under the assumption made by Bovens and Hartmann, of which some philosophers have actually been sceptical. This means that not only new measures, but also other assumptions regarding the sources reporting crucial information (i.e. the evidence) might be employed to construct novel coherence measures. Would considerations on coherence still play a role in deciding whether to accept a scientific theory if one dropped the conception that comparing the degrees of coherence of two (or more) information sets requires assuming that the sources reporting the propositions contained in the sets are equally and partially reliable, and independent? These four questions highlight the fact that philosophers of science and formal epistemologists should work closely with natural scientists.

Appendices
In what follows, I shall refer to Neapolitan (2003) in order to compute the probabilities of the values of random variables in Bayesian networks.

C Theorem 4.3
Recall Bovens and Hartmann's condition. In order to assess ξ 1 and ξ′ 1 , one needs the sum of the joint probabilities of one theory and the negation of the other for our two information sets before and after the reduction. First, let me show ξ 1 , which is the pre-reduction parameter:

ξ 1 = P 1 (T F ∧ ¬T P ) + P 1 (¬T F ∧ T P ) = t F (1 − t P ) + (1 − t F ) t P ,

which holds in virtue of the probabilistic independence of T F and T P . Second, I compute ξ′ 1 :

ξ′ 1 = P 3 (T F ∧ ¬T P ) + P 3 (¬T F ∧ T P ).

Then, I substitute the following values in this equation.