The debate between Einstein and Bohr about the foundations of quantum mechanics resulted in a Gedankenexperiment suggested by Einstein-Podolsky-Rosen (EPR) that was later modified by Bohm (EPRB). The schematic of this experiment is shown in Figure 1; it involves two wings and two measurement stations. EPR used the quantum mechanical predictions for the possible outcomes of this experiment to show that quantum mechanics was incomplete.
Many years later, John S. Bell derived an inequality for the possible outcomes of EPRB experiments that he perceived to be based only on the physics of Einstein’s relativity. Bell’s inequality, as it was henceforth called, seemed to contradict the quantum predictions for EPRB experimental outcomes altogether.
Experimental investigations following Bell’s theoretical suggestions provided a large amount of data violating Bell-type inequalities and culminated in the suspicion of a failure of Einstein’s physics and of his basic understanding of space and time [3, 12, 15, 57, 89].
Central to these discussions and questions are the correlations of space-like separated detection events, some of which are interpreted as the observation of a pair of entities such as photons. The problem of classifying events as the observation of a “photon” or of something else is not as simple as in the case of, say, billiard balls. The particle identification problem is, in fact, key to understanding the epistemology of correlations between events.
What do we know about such correlations of space-like separated events? Popular presentations of Bell’s work typically involve two isolated agents (Alice, Bob) at separated measurement stations (Tenerife, La Palma), who just collect data of local measurements. But how does Alice know that she is dealing with one particle of a pair of which Bob investigates the other? She is supposed to be totally isolated from Bob’s wing of the experiment in order to fulfill Einstein’s separation and locality principle! The answer is that neither Alice nor Bob knows that they are dealing with correlated pairs if their stations are completely separated from each other and never have any space-time knowledge of the other wing.
In his papers on the EPRB experiment, Bell did not address this fundamental question but considered correlated pairs as given, without any trace of the tools of measurement and of the space-time concepts that are both necessary to accomplish the identification of events. He then claimed to have discovered a conflict between his theoretical description and the quantum theoretical description of the EPRB thought experiment. As a consequence of this discovery, much research was devoted to
the actual derivation of Bell-type inequalities from Einstein’s framework of physics (particularly his separation principle, which derives from the speed of light in vacuum, c, being the limit of all speeds) and from Kolmogorov’s probability theory,
designing and performing laboratory experiments that provide data that are in conflict with the Bell-type inequalities,
constructing mathematical-physical models, at times supported by computer simulations, that entirely comply with Einstein’s physics and his separation principle, that do not rely on concepts of quantum theory, and that are nevertheless in conflict with Bell’s theorem.
Bell’s theory, and the theories of all his followers, including Wigner, do not deal with the identification of the correlated particles and assume that they are known automatically, so to speak per fiat. But the knowledge of pairing requires additional data or additional channels of information. These additional data may be measurement times, certain thresholds for detection, and many other elements of the physical reality of the experiments in both wings. It is important to note that these data must involve measurements in both stations and necessarily influence the possible knowledge of the correlations of the single measurements in these stations. In the case of atomic or subatomic measurements, not only does the measurement equipment influence the single outcomes, as Bohr has taught us, but correlated measurement equipment (such as synchronized clocks or instruments that determine thresholds) also influences the knowledge of correlations of these single outcomes. It is this extension of the Copenhagen view that leads to a loophole in Bell’s theorem, the photon identification loophole. The violations of Bell-type inequalities described in this paper are based on this loophole.
Bell and followers envisage that the correlations may be measured in the laboratory in complete separation and, therefore, physical models of Bell’s opponents must only use the measurements of two completely separated wings operated by Alice and Bob, who know nothing of each other. As just explained, correlations of spatially separated events can only be conceived by involving the human-invented space-time system in order to demonstrate the pairing, the knowledge that measurements of particles belong together. Therefore, correlations can only be determined in a consistent way under the umbrella of a given space-time system that encompasses the two or more measurement stations. This space-time system, which all experimenters subscribe to, enables us to banish spooky influences from science, particularly for the EPR experiments that Einstein constructed precisely to show this fact.
We maintain that it is highly non-trivial to identify (correlated) photons by experimental methods and that this identification involves, at least in some way, a space-time system, a system developed by the human mind and agreed upon by all experimenters and evaluators of the Bell-type experiments. In fact, the identification of particle pairs requires certain knowledge of the space-time properties of all the experimental equipment involved. This knowledge must necessarily extend to measurement stations far from each other and is, therefore, “non-local”. Of course, this non-local knowledge does not imply that there are non-local physical influences on the data measured in the two wings. In EPRB experiments [34, 38, 85, 89], great care is taken to rule out the possibility that the observed correlations are due to physical influences that travel with velocities not exceeding the speed of light in vacuum. But without that non-local knowledge, only spooky influences are left as a possibility for connecting events in the two experimental wings. A space-time knowledge of all involved equipment is nevertheless required to apply the scientific method.
2 Aim and structure of the paper
As explained above, it is essential that the identification of photons be included in any meaningful theoretical model of an EPRB experiment, because otherwise the model is too simple to describe this experiment. If this is not done, only spooky influences can explain the observed pair correlations. Specifically, omitting the data which select the photons and/or pairs opens an Einstein-local loophole, which we henceforth call the photon identification loophole. By design, Bell-type models for the recent experiments that claim to be loophole-free [34, 85] suffer from this loophole.
The main aim of this paper is to show that by exploiting this loophole, or, formulated more positively, by constructing a model that captures the essence of these recent laboratory experiments [34, 85], a manifestly non-quantum computer simulation of an Einstein-local model, employing the same local photon identification method in each wing as the recent EPRB experiments [34, 85], yields the pair correlation of two photons in the singlet state (of their polarizations), in blatant contradiction with Bell’s theorem.
The paper is structured as follows. Section 3 argues that a simulation on a digital computer is a perfect laboratory experiment with a physically existing device and can therefore be used as a metaphor for other laboratory experiments. The material in this section forms the conceptual basis for developing computer simulation models of the recent EPRB experiments [34, 85].
Section 4 discusses the relevance of counterfactual definiteness (CFD) in relation to the derivation of Bell-type inequalities and hence also for computer simulation models that yield data that are in conflict with these inequalities. In Section 5, we introduce a CFD-compliant computer simulation model of the recent EPRB experiments [34, 85]. We discuss the correspondence of the essential elements of the latter with those of the simulation model but refrain from plunging into the details of the algorithm itself.
Section 6 gives a simple, rigorous proof that, operationally though not conceptually, photon identification in each wing by a local voltage threshold, photon identification in each wing by a local time window, and photon identification by time-coincidence counting are all mathematically equivalent. In Section 7, we discuss the consequences of excluding from a model at least one feature that is essential for an experiment to yield useful data. We argue that Bell-type models, which are believed to be relevant for the description of recent EPRB experiments [34, 85], suffer from the photon identification loophole in a dramatic manner.
Section 8 is devoted to a simple proof that the simulation model introduced in Section 5 is CFD-compliant. In Section 9, we give a simple derivation of the Bell, Clauser-Horne-Shimony-Holt (CHSH), Eberhard, and Clauser-Horne (CH) inequalities for real data, not for the imagined data produced by probabilistic models (which are discussed in the Appendix), and in Section 10 we present a more general inequality that accounts for the local photon identification procedure employed in the laboratory experiments.
In section 11, we specify the computer simulation algorithm and the simulation procedure in full detail. A representative collection of simulation results is presented in section 12. The main conclusion from these simulations is that a non-quantum model that employs the same photon identification method in each wing as the one used in recent EPRB experiments [34, 85] reproduces the results of the quantum theory of the EPRB thought experiment.
In section 13, we argue that all EPRB laboratory experiments with photons can be viewed as a tool to characterize the response of the observation stations. This leads to the conclusion that this response, in particular the local photon identification rate, depends on the settings of these stations, consistent with the assumptions made in constructing the simulation model and debunking the hypothesis that the observed pair correlations can be explained by non-local influences only. The paper ends with section 14, which contains our conclusions.
3 Metaphor for a perfect laboratory experiment
An important, characteristic feature of digital computers is that their logical operation does not depend on the technology that is used to construct the machine. These days, the first thing that comes to mind when talking about digital computers is the electronic machines based on semiconductor technology, but it is a fact that, although neither cost-effective nor particularly useful in practice, digital computers can also be built from mechanical parts, e.g. Lego™ elements.
In former times, one could read off the state of the computer’s internal registers from an LED display. Although not practical at all, in principle one could use a huge LED display to show the internal state of the whole computer. This is only to say that there is a one-to-one mapping from the state of the computer to sense impressions (e.g. light on/off). Therefore, the metaphor also offers unique possibilities to confront man-made concepts and theories with actual facts, i.e. real, perfect experiments, because it guarantees that we have a well-defined, precise representation of the concepts and algorithms involved (both in terms of bits) that translates directly into sense impressions.
In the analysis of laboratory EPRB experiments, it is essential that all the important degrees of freedom that affect the data analysis are identified and included; otherwise, the conclusions drawn from an incomplete analysis may be wrong. Computer simulation puts us in the position to perform experiments under the same mathematical conditions for which e.g. Bell-type inequalities can be derived, simply because we can carry out real, perfect experiments that are void of any unknown elements that may affect the results and analysis.
A digital computer is a physical (electronic or mechanical) device that changes its physical state (by flipping bits) according to well-defined rules (the algorithm). Therefore, assuming that the machine operates flawlessly for the time period of interest (a very reasonable assumption these days), executing an algorithm on a digital computer is a physics experiment in which there are no unknown elements of physical reality that might affect the outcome. In this sense, the “digital computer + algorithm” can be viewed as a metaphor for a perfected laboratory experiment, a discrete-event simulation that represents the so-called “loophole-free” EPRB experiments [34, 85].
Starting with Bell’s work, most theoretical work on the subject matter is based on probability theory. This mathematical framework contains conceptual elements (probability measures and infinitesimals) that are outside the domain of our sensory experiences and have no counterpart in our physical world. Therefore, to avoid pitfalls, we first devise an algorithm that simulates the perfect laboratory experiment and then construct a probabilistic model of this algorithm. A graphical representation of the modeling philosophy that we adopt in this paper is shown in Figure 2. Note that our approach starts from the experiments and results in a theory, which we believe is the only direction one should go. In contrast, Bell’s approach was to design an experiment starting from his theoretical point of view.
4 Counterfactual definiteness
Counterfactual reasoning plays a significant role in the literature related to Bell’s work and is seen by many as a conditio sine qua non for deriving Bell-type inequalities [18, 40, 56, 67]. However, as explained below, the actual EPRB experiments do not permit any proof of CFD compliance. This fact demonstrates an unexpected conceptual advantage of computer experiments: we can turn CFD compliance on and off at will in our algorithm, simulate the consequences, and thus pinpoint the precise conditions that may or may not lead to violations of Bell-type inequalities. This is our reason to dedicate significant sections of this paper to counterfactual definiteness (CFD), as defined below, and to include CFD-compliant models alongside more faithful models of the actual EPRB experiments.
So-called counterfactual “measurements” yield values that have been derived by means other than direct observation or actual measurement, such as by calculation on the basis of a well-substantiated theory. If one knows an equation that permits reliably deriving output values from a list of inputs to the system under investigation, then one has “counterfactual definiteness” (CFD) in the knowledge of that system.
The word “counterfactual” is a misnomer but is well established. It is therefore helpful to have a clear-cut operational definition of what is meant by CFD. In essence, CFD means that the output state of a system, represented by a vector of values y, can be calculated using an explicit formula, e.g. y = f(x), where f(.) is a known vector-valued function of its argument x. If x denotes a vector of values, the elements of this vector must be independent variables for the mathematical model to be CFD-compliant [23, 40].
In laboratory EPRB experiments, every trial takes place under different conditions, different settings, etc., which may or may not affect the outcome of a single trial. Mathematically, we may express this dependence as y = F(x, C, U), where now F(.) is a vector-valued function of its arguments and C and U represent the known conditions/settings and unknown influences, respectively. In general, there is no reason why U should be constant from trial to trial. In other words, F(.) is unknown. Therefore, data produced by laboratory EPRB experiments (or any other laboratory experiments) cannot, as a matter of principle, be cast in the form of data generated by a CFD-compliant model. On the other hand, performing computer experiments in a CFD-compliant manner is not difficult, nor is it much work to change a CFD-compliant algorithm into one that does not meet all the requirements of CFD simulation. In other words, computer experiments can be carried out in both CFD and non-CFD mode, providing quantitative information about the differences between these two modes of modeling.
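The distinction can be made concrete in a few lines of code. The functions and numbers below are purely illustrative stand-ins (not part of the simulation model developed later): a CFD-compliant map y = f(x) returns the same output whenever it is fed the same independent inputs, whereas a laboratory-style map y = F(x, C, U) also depends on an unknown influence U that varies from trial to trial.

```python
import random

def cfd_output(x):
    # CFD-compliant: y = f(x), fully determined by the known inputs x,
    # so counterfactual outcomes for other inputs are well defined.
    return tuple(1 if v > 0.5 else -1 for v in x)

def lab_output(x, conditions):
    # Laboratory-style trial: y = F(x, C, U). The unknown influence U
    # varies from trial to trial, so F cannot be reconstructed from data.
    u = random.random()  # stand-in for the unknown influence U
    return tuple(1 if (v + 0.1 * u) % 1.0 > 0.5 else -1 for v in x)

x = (0.3, 0.7)
assert cfd_output(x) == cfd_output(x)  # reproducible, hence CFD-compliant
# lab_output(x, conditions) need not reproduce, even for identical known inputs
```

Calling `cfd_output` twice with the same argument necessarily yields the same result; `lab_output` does not, which is precisely why laboratory data cannot be proven to be CFD-compliant.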
In the realm of finite sets of two-valued data, a strict derivation of Bell-type inequalities [5, 6], such as the Bell-CHSH [5, 14] and Eberhard inequalities, requires that these data are generated in a CFD-compliant manner [23, 40]. In other words, CFD is a prerequisite for deriving Bell-type inequalities. Therefore, to test whether or not a simulation model produces data that do not satisfy such inequalities, it is necessary to perform a CFD-compliant simulation. Otherwise, there is no mathematical justification for the hypothesis that these data should satisfy Bell-type inequalities in the first place. Of course, we can always revert to the non-CFD algorithm and check if e.g. averages exhibit the same features as the averages obtained from the CFD-compliant algorithm (see section 12).
In an earlier paper, we adopted this strategy to demonstrate that in the case of EPRB experiments,
CFD-compliant simulations can reproduce the averages and correlations of two particles in the singlet state,
CFD does not distinguish classical from quantum physics because our computer models do not contain any quantum concepts, yet yield results that lead to conclusions (e.g. entanglement) that are commonly regarded as signatures of quantum physics.
In this paper, we adopt the same strategy. We construct a CFD-compliant simulation model of the laboratory experiments [34, 85], meaning that we simulate a perfected, idealized realization of these laboratory experiments. Of course, this does not mean that we omit essential features of the laboratory experiments. These features have to be included, otherwise the simulation model is not applicable to these laboratory experiments; see section 7 for a general discussion of this aspect.
5 Simulation model
In this section, we introduce the essential elements of the simulation model of the laboratory experiments reported in Refs. [34, 85]. The details of the simulation algorithm are given in section 11. For concreteness, we adopt the terminology that is used in Ref.  when we connect the elements of the simulation model to those of the laboratory experiments.
As shown in Figure 1, in a typical EPRB experiment there are three different components. There is a source and there are two observation stations. The algorithm that simulates the source is described in full detail in section 11. In this section, we focus on the observation stations which, because we are performing “perfected” experiments, are assumed to be identical.
5.1 Observation station
In Figure 3, we show a graphical representation of the function of an observation station. Input to an observation station is the setting a (representing the orientation of the polarizer), two numbers 0 ≤ r, r′ < 1 taken from a list of uniform random numbers (see section 11 for further details), and an angle 0 ≤ ϕ < 2π (representing the initial polarization of the photon). Output of the observation station is a two-valued variable x = ±1 and a detector-related variable vmin ≤ v ≤ vmax.
The correspondence between the data produced by the experimental realization of an observation station and those generated by the computational model is as follows. The variable x encodes which detector (D+,i or D−,i in station i = 1, 2) fired. In the laboratory experiments [34, 85] there is only one detector in each station, but in the computer experiment we can easily simulate the complete EPRB experiment (see Figure 1); hence we consider both the “+” and “−” events. The variable v represents the voltage signal produced by the electronics that amplifies the transition-edge detector current (see the description in section IV of the supplementary material to Ref. ).
If necessary, we label different events by attaching the subscript i = 1, 2 of the observation station and/or the subscript k, where k = 1, …, N and N denotes the total number of input events to a station. In full detail, for the kth input at station i, the observation station generates the output values xi = xi(ai, ϕi,k, ri,k) and vi = vi(ai, ϕi,k, r′i,k) according to the rules which will be specified in full detail in section 11. Occasionally, we write xi(ai) = xi(ai, ϕi,k, ri,k) and vi(ai) = vi(ai, ϕi,k, r′i,k) to simplify the notation while still emphasizing that the x’s and v’s only depend on variables that are local to the respective station.
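To fix ideas, the sketch below shows what such a station could look like in code. It is a purely illustrative stand-in for the actual rules of section 11: the Malus-law-type rule for x, the functional form of v, and the names `observation_station` and `r_prime` are our own assumptions, not those of the paper’s algorithm.

```python
import math

def observation_station(a, phi, r, r_prime, v_min=-1.0, v_max=0.0):
    """Illustrative observation station: setting a, photon polarization phi,
    and two uniform random numbers in; a two-valued x and a detector-related
    voltage v out. All quantities are local to this station."""
    # Hypothetical Malus-law-type rule for the two-valued output x = +/-1
    x = 1 if r < math.cos(phi - a) ** 2 else -1
    # Hypothetical detector voltage, v_min <= v <= v_max (negative, as in
    # the experiments); note that it may depend on the local setting a
    v = v_min + (v_max - v_min) * r_prime * math.sin(phi - a) ** 2
    return x, v

x, v = observation_station(a=0.0, phi=math.pi / 6, r=0.5, r_prime=0.5)
assert x == 1 and -1.0 <= v <= 0.0
```

The voltage in this sketch deliberately depends on ϕ − a, mirroring the assumption, discussed in section 7, that the voltage signal may depend on the local analyzer setting.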
5.2 Photon identification
In the following, we use the term detection event whenever the negative voltage signal produced by the electronics that amplifies the transition-edge detector current is smaller (we are dealing with negative voltages) than the “trigger threshold” (terminology from the supplementary material to Ref. ), and we speak of the observation of a photon whenever the same negative voltage signal is smaller than the “photon identification threshold” (about 4/3 times the “trigger threshold”; terminology from the same source).
From the description of the laboratory EPRB experiments under scrutiny, it follows immediately that not every detection event is regarded as the observation of a photon [34, 85]. Indeed, after all the voltage traces of an experimental run have been recorded, a part of the collected trace is analyzed by software, the photon identification thresholds are “calibrated”, and, assuming that the relevant properties of the whole set of traces are stationary in real time, the remaining set of traces is analyzed. In Ref.  there is no specification of the cost function that is being minimized by the calibration procedure, whereas Ref.  (supplementary material) explicitly states that “Because the experiment was calibrated to maximize violation of the CH inequality…”. This seems to suggest that the software is designed to adjust the photon identification thresholds such that the desired result, namely a violation of a Bell-type inequality, is obtained.
In our simulation approach, we may assume that all units are identical. Therefore, unlike in Ref. , one and the same value of the photon identification threshold, denoted by 𝓥, can be used to identify photons. The effect of the photon identification threshold is captured by the function wi(ai) = Θ(𝓥 − vi(ai)), (1)
where Θ(x) is equal to one if x > 0 and zero otherwise. Recall that, as in Ref. , 𝓥 is negative. In the simulation, we do not “calibrate” 𝓥 but simply generate the data and analyze the results as a function of 𝓥.
The correspondence with the data collected in the laboratory experiment is as follows: a detection event is represented by xi(ai) = +1 and wi(ai) = 0, and the observation of a photon in station i = 1, 2 is represented by xi(ai) = +1 (because there is only one transition-edge detector, not two, at each station) and wi(ai) = +1 (implemented in software), both exactly as in the simulation model. Recall, and this is new and important, that in the simulations, too, the photon identification is performed locally, i.e. without communication between the observation stations.
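Operationally, this local photon identification reduces to a one-line comparison per station. The threshold values below are arbitrary illustrations, not calibrated quantities from the experiments.

```python
def photon_observed(v, v_threshold):
    # w = 1 if the (negative) voltage v satisfies v <= V, where V is the
    # photon identification threshold; w = 0 otherwise. Purely local decision:
    # no information from the other wing enters this function.
    return 1 if v <= v_threshold else 0

# Illustrative values: identification threshold at -0.8 (all voltages negative)
assert photon_observed(-0.9, -0.8) == 1   # detection event counted as a photon
assert photon_observed(-0.7, -0.8) == 0   # detection event, but not a photon
```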
6 Equivalence of local time-window and time-coincidence processing
In this section we show that, in spite of the conceptually very different setups, from an operational point of view employing local photon identification thresholds is equivalent to local time-window selection and also to the time-coincidence counting that is used in most EPRB experiments with photons [3, 15, 57, 89].
As explained above, in the laboratory experiments a detection event is classified as being a photon if the (negative) voltage signal, denoted by v, produced by the electronics that amplifies the transition-edge detector current (see the description in section IV of the supplementary material to Ref. ) is smaller than the photon identification threshold 𝓥. This rejection criterion is implemented through Eq. (1) from which it follows directly that the criterion to observe a photon in these laboratory experiments is v ≤ 𝓥. Recall that we adopted the convention of the laboratory experiments  in which 𝓥 takes negative values.
In practice, we have vmin ≤ vi ≤ vmax and vmin ≤ 𝓥 ≤ vmax with finite vmin and vmax, hence the condition for counting a detection event as a photon may be written as vmin ≤ vi ≤ 𝓥. (2)
Defining a dimensionless “time” ti ≡ (vi − vmin)/(vmax − vmin) and a dimensionless “time window” W = (𝓥 − vmin)/(vmax − vmin), Eq. (2) reads 0 ≤ ti ≤ W, (3)
which expresses the condition to observe a photon at station i = 1, 2 in terms of locally defined time slots of size W. From Eq. (3) we have −t2 ≤ t1 − t2 ≤ W − t2 and, using −W ≤ −t2 ≤ 0, we find |t1 − t2| ≤ W, (4)
which is exactly the same criterion as the one used in most EPRB experiments with photons [3, 15, 57, 89] (with t1, t2 and W representing physical times in that case) and in computer simulation models thereof [19, 20, 22, 23, 24, 25, 26, 78, 90].
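The chain of reformulations leading from the voltage criterion to Eq. (4) can be checked numerically. The voltages and threshold below are arbitrary illustrative values.

```python
def to_time(v, v_min, v_max):
    # Dimensionless "time" t = (v - v_min) / (v_max - v_min)
    return (v - v_min) / (v_max - v_min)

# Arbitrary illustrative values; V is the (negative) identification threshold
v_min, v_max, V = -1.0, 0.0, -0.75
W = to_time(V, v_min, v_max)  # dimensionless "time window"

for v1 in (-0.95, -0.8, -0.5):
    for v2 in (-0.9, -0.85, -0.3):
        both_photons = v1 <= V and v2 <= V              # voltage criterion, Eq. (2)
        t1 = to_time(v1, v_min, v_max)
        t2 = to_time(v2, v_min, v_max)
        in_windows = 0 <= t1 <= W and 0 <= t2 <= W      # time-window criterion, Eq. (3)
        assert both_photons == in_windows               # the two criteria coincide
        if both_photons:
            assert abs(t1 - t2) <= W                    # coincidence criterion, Eq. (4)
```

Because t is a monotone rescaling of v, the voltage-threshold and time-window tests select exactly the same events, and any pair passing both local tests automatically satisfies the coincidence condition.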
In summary: although physically very different, local voltage thresholds, local time windows, and time-coincidence counting are mathematically equivalent and all serve the same purpose, namely to give an operational meaning to the statement that “a single photon (pair) has been identified”.
7 Loopholes in experimental tests of Bell’s theorem
A useful model of an experiment needs to encompass all relevant parameters that affect the experimental outcomes and, of course, the most important elements of physical reality, namely the data itself. Specifically, a physical theory that describes pair correlations of space-like separated systems must account for and include all procedures that determine the detection of the particles and the knowledge of which pair of particles and data belong together. Therefore, any model which purports to describe the laboratory experiments [34, 85] that we consider in this paper must necessarily account for the photon identification threshold mechanism that is instrumental in the data-processing step of these experiments; see section 5.2. Likewise, the earlier generation of EPRB experiments that employ time coincidence to identify pairs [3, 13, 89] can only be faithfully described by models that incorporate the time-coincidence window selection process that is an essential component of this class of experiments [19, 20, 22, 23, 24, 25, 26, 78, 90].
Drawing a conclusion about a world view from models (such as those of Bell and his followers, see the Appendix) that do not properly account for the photon identification threshold mechanism which, in the laboratory experiments [34, 85], is essential for identifying the photons, requires a drastic departure from rational reasoning. If we allow for such a departure, we might equally well wonder what it means for our world view when we construct and analyze a model of an airplane that excludes the engines and then observe that a real airplane can take off by itself. Any reasonable person would rightfully question our ability to represent the airplane (or laboratory experiments) by such a model and regard the idea that we may have to change our world view because of the contradictions with such a model as unfounded. In other words, the only logically correct conclusion that one can draw from the failure of Bell-type models to describe the qualitative features of the experimental data is that these models are too simple.
The photon-identification loophole that we introduce in this paper accounts for
(i) the requirement that a relevant model of an experiment needs to encompass all elements that affect the experimental outcomes, and
(ii) the assumption that voltage signals produced by the detection equipment may depend on the analyzer setting (see Eq. (1)).
Regarding this latter assumption, it is of interest to recall that since the early days of the Bell-test experiments, it has been well known that the application of Bell-type models requires at least one extra assumption. We reproduce here the relevant passage from Ref.  (p. 1890): “The approach used by CHSH is to introduce an auxiliary assumption, that if a particle passes through a spin analyser, its probability of detection is independent of the analyser’s orientation. Unfortunately, this assumption is not contained in the hypotheses of locality, realism or determinism.”
There is a large body of theoretical work that considers all kinds of loopholes in experiments that must be closed before a definite conclusion about the consequences of Bell inequality tests for certain world views can be drawn. A detailed, comprehensive discussion of a large collection of loopholes is given in Ref. . Also in this respect, the digital computer – laboratory experiment metaphor offers unique possibilities, because we can open and close loopholes at will. As this and our earlier paper demonstrate, computer simulation models of EPRB experiments can easily be engineered to be free of e.g. the detection, coincidence, and memory loopholes and, in addition, include features such as CFD compliance that close the contextuality loophole [75, 76, 77].
Wrapping up: in this paper we construct a minimal model of the perfected version of the laboratory experiment . With the exception of the photon-identification loophole, this minimal model is free of the known loopholes and reproduces the quantum results of the EPRB thought experiment, from which violations of Bell-type inequalities follow automatically. This approach offers the unique possibility to confront all kinds of reasonings and assumptions, such as the assumption (ii) above, with actual facts.
8 CFD compliance
In section 5, the operation of the simulation model of an observation station has been defined such that for every input event (a, ϕ, r, r′), we know the values of all output variables x = x(a, ϕ, r) and v = v(a, ϕ, r′). Therefore, the input-output relation of this unit, represented by the diagram of Figure 3, satisfies the requirement of a CFD-compliant model.
The computational equivalent of the EPRB experiments [34, 85] is shown in Figure 4. Each time the source S is activated, it sends one entity carrying the data ϕ1 to station 1 and another entity carrying the data ϕ2 to station 2. The procedure for generating the ϕ’s, r’s, and r′’s is specified in section 11.
Upon arrival of the entities, the observation stations i = 1, 2 execute their internal algorithms (completely specified by Eqs. (19) and (20)) and produce output in the form of the pairs (xi, vi). The scheme represented by Figure 4 computes the vector-valued function (x1, v1, x2, v2) = F(a1, ϕ1, r1, r′1, a2, ϕ2, r2, r′2), (5)
which clearly defines a CFD-compliant model. Nevertheless, with this CFD-compliant model we cannot construct the quadruple (x1, x1′, x2, x2′) in a CFD-compliant manner. Indeed, by construction, there is no guarantee that the (ϕi,k, ri,k)’s that determine, say, the x1’s for the pair of settings (a1, a2) will be the same as the (ϕi,k, ri,k)’s that determine the values of the x1′’s for the pair of settings (a1′, a2). Of course, with a simulation on a digital computer being an ideal, fully controllable experiment, we could enforce CFD compliance by re-using the same (ϕi,k, ri,k, r′i,k)’s for every pair of settings. This would make the simulation CFD-compliant. However, in this paper we do not do so but instead generate new values of the (ϕi,k, ri,k, r′i,k)’s for every new instance of input.
The layout of a CFD-compliant computer model of the EPRB experiment is depicted in Figure 5. It uses the same units as the model shown in Figure 4, the only difference being that the input ϕi is now fed into an observation station with setting ai and into another one with setting ai′, something which, for obvious reasons, is impossible to realize in laboratory experiments with photons. As each of the four units operates according to the rules given by Eqs. (19) and (20), and as the arguments of the functions X and T are independent and may take any value out of their respective domains, the whole system represented by Figure 5 satisfies, by construction, the criterion of a CFD theory.
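A minimal sketch of this four-station layout may help. The station rule below is an illustrative stand-in (the actual rules are Eqs. (19) and (20)), and the relation between ϕ1 and ϕ2 is an assumption made only for the sake of the example; the point is structural: each side feeds the same ϕ and random number into both settings, so the scheme outputs quadruples by construction.

```python
import math
import random

def station_x(a, phi, r):
    # Illustrative stand-in for the station rule; not Eqs. (19)-(20)
    return 1 if r < math.cos(phi - a) ** 2 else -1

def cfd_quadruple(a1, a1p, a2, a2p, rng):
    # One event: phi1 feeds BOTH left stations (settings a1 and a1'),
    # phi2 feeds BOTH right stations (settings a2 and a2'), and the same
    # random number is reused per side, so the four outputs refer to one
    # and the same input event: a quadruple, by construction.
    phi1 = rng.uniform(0.0, 2.0 * math.pi)
    phi2 = math.fmod(phi1 + math.pi / 2.0, 2.0 * math.pi)  # assumed pair relation
    r1, r2 = rng.random(), rng.random()
    return (station_x(a1, phi1, r1), station_x(a1p, phi1, r1),
            station_x(a2, phi2, r2), station_x(a2p, phi2, r2))

rng = random.Random(1)
quad = cfd_quadruple(0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8, rng)
assert len(quad) == 4 and all(x in (-1, 1) for x in quad)
```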
Note that the actual EPRB experiments produce only pairs of data. The three pairs of data considered by Bell therefore involve six local measurements, and the four pairs of CHSH involve eight local measurements. Our CFD-compliant model considers only quadruple (= four local) measurements to simulate the eight possible measurement outcomes of a CHSH-type experiment.
9 Bell-type inequalities
It is evident from the formulation of his model that Bell and all his followers, including Wigner, do not deal with the issue of identifying particles and take for granted that the measured pairs correspond to the correlated particles. The common prejudice that additional variables cannot possibly defeat Bell-type inequalities is based on the assumption that all emitted correlated pairs, or a representative sample of them, are measured. This reasoning does not account for the photon identification or pair-modeling loophole: the necessary particle or pair identification may select events in a way that is not representative of all possible measurements of all possible pairs emanating from the source. In this section, we adopt Bell's viewpoint by ignoring the v-variables and demonstrate that CFD-compliance and the existence of Bell-type inequalities are mathematically equivalent.
Figure 5 shows the CFD-compliant arrangement of the computer experiment. The two stations on the left of the source S receive the same data ϕ1 from the source. The settings a1 and a′1 are fixed for the duration of the N repetitions of the experiment. The same holds for the two stations on the right of the source, with subscript 1 replaced by 2. Clearly, the algorithm represented by Figure 5 generates quadruples of output data (x1, x′1, x2, x′2) in a CFD-compliant manner.
For any such quadruple (x1, x′1, x2, x′2) in which the x's only take the values +1 and −1, it is straightforward to verify that the following equalities hold:

b1 = x1x′1 + x1x2 + x′1x2 = −1, +3, (6)
b2 = x1x′1 + x1x′2 + x′1x′2 = −1, +3, (7)
b3 = x1x2 + x1x′2 + x2x′2 = −1, +3, (8)
b4 = x′1x2 + x′1x′2 + x2x′2 = −1, +3, (9)
s = x1x2 − x1x′2 + x′1x2 + x′1x′2 = −2, +2. (10)
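These equalities can be confirmed by brute-force enumeration of the 16 quadruples. The sketch below assumes the standard three-correlation (Leggett-Garg-type) combinations over each triple of the four two-valued numbers, and the CHSH sign pattern that matches the Bell-CHSH function used later in the text:

```python
from itertools import product

s_values = set()
b_values = set()

for x1, x1p, x2, x2p in product((-1, 1), repeat=4):
    # CHSH combination with the sign pattern of
    # S = E(a1,a2) - E(a1,a2') + E(a1',a2) + E(a1',a2').
    s_values.add(x1 * x2 - x1 * x2p + x1p * x2 + x1p * x2p)
    # Sum of the three pairwise products of each triple of the four
    # two-valued numbers (the three-correlation combinations).
    for u, v, w in ((x1, x1p, x2), (x1, x1p, x2p),
                    (x1, x2, x2p), (x1p, x2, x2p)):
        b_values.add(u * v + u * w + v * w)
```

The enumeration confirms that the CHSH combination only ever takes the values ±2 and each triple sum only the values −1 and +3.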
In a non-CFD setting, the data are collected as four pairs which we may denote as (x1, x2), (x͠1, x′2), (x′1, x͠2), and (x͠′1, x͠′2), where the tilde is used to indicate that the value of e.g. x1 obtained with the setting pair (a1, a′2) may be different from the one obtained with the setting pair (a1, a2). Instead of Eq. (10), we now consider the expression s͠ = x1x2 − x͠1x′2 + x′1x͠2 + x͠′1x͠′2 = −4, −2, 0, +2, +4 and similar ones for b͠1, …, b͠4, each of them taking the values −3, −1, +1, +3. If we now impose that s͠ = −2, +2 and b͠1, …, b͠4 = −1, +3, simple enumeration of all possible values of the x's and the x͠'s shows that in order for all equalities to be satisfied simultaneously we must have x͠1 = x1, x͠2 = x2, x͠′1 = x′1, and x͠′2 = x′2. In other words, imposing the constraints s͠ = −2, +2 and b͠1 = −1, +3, …, b͠4 = −1, +3 on data obtained in a non-CFD setting forces these data to form quadruples, i.e. to be CFD-compliant. It then follows immediately that CFD is necessary and sufficient for the equalities Eqs. (6) – (10) to hold.
Attaching the subscript k (k = 1, …, N) to label the events, the algorithm generates the set of quadruples {(x1,k, x′1,k, x2,k, x′2,k) | k = 1, …, N}. Introducing the Bell-CHSH function

S ≡ (1/N)∑k sk = E(a1, a2) − E(a1, a′2) + E(a′1, a2) + E(a′1, a′2), (11)
it follows immediately from |sk| = 2 (see Eq. (10)) that |S| ≤ 2 for all N ≥ 1, that is, we obtain the Bell-CHSH inequality constraining four correlations of pairs of actual data. Put differently, if the output consists of quadruples of two-valued data generated by the setup shown in Figure 5 and we ignore the v-variables, then the Bell-CHSH inequality |S| ≤ 2 is always satisfied, independent of the number of events N ≥ 1 considered.
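That the bound holds for averaged correlations, for any N and for any rule that generates the quadruples, can be illustrated with random data (the index convention 0..3 for x1, x′1, x2, x′2 is illustrative):

```python
import random

# Quadruples of two-valued data; the averaged CHSH combination can
# never exceed 2 in magnitude, whatever generated the quadruples.
rng = random.Random(7)
N = 1000
quads = [tuple(rng.choice((-1, 1)) for _ in range(4)) for _ in range(N)]

def E(i, j):
    # Correlation of columns i and j over the N quadruples
    # (columns 0..3 stand for x1, x1', x2, x2').
    return sum(q[i] * q[j] for q in quads) / N

S = E(0, 2) - E(0, 3) + E(1, 2) + E(1, 3)
```

Since every per-event combination sk equals ±2, the average S is necessarily confined to [−2, 2].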
Similarly, from the fact that, for example, b1,k = −1, +3, we obtain the Leggett-Garg inequality [41, 69] for three correlations of pairs of actual data, and by combining b1,k = −1, +3 with the equalities obtained by substituting x1 → −x1 we obtain the Bell inequality involving three correlations of pairs of actual data. In other words, Bell-type inequalities follow directly from the fact that quadruples of data satisfy rather trivial arithmetic identities such as Eq. (10).
It then also follows immediately that CFD is a necessary and sufficient condition for the data (x1,k, x′1,k, x2,k, x′2,k), with k = 1, …, N, to satisfy simultaneously, for all N ≥ 1, all Bell-type inequalities involving three and four different correlations of pairs. We emphasize that this conclusion follows from elementary arithmetic only. Concepts such as “locality” or any other physical argument are irrelevant for establishing this result.
Similar reasoning yields Eberhard's inequality, which differs from the Bell-CHSH inequality in the sense that it can account for reduced detector efficiencies. For convenience of comparison with the original work, we temporarily adopt Eberhard's parlance and notation. Central to Eberhard's derivation is the so-called fate of a photon. This fate can be either detected in the ordinary beam (labeled o), detected in the extraordinary beam (labeled e), or undetected (labeled u). For counting purposes, we represent the fate of a photon by the symbol f, taking the values +1, 0, and −1 corresponding to o, u and e, respectively. Introducing the variables no = f(f+1)/2, ne = f(f−1)/2, and nu = 1 − f², it is clear that one of them takes the value 1 with the other two taking the value 0. For a given pair of settings, say (α1, β1), the number of pairs with both photons suffering fate o is then given by noo(α1, β1) = no(α1)no(β1) = f1,1(f1,1 + 1)f2,1(f2,1 + 1)/4, where f1,i = f1,i(αi) and f2,i = f2,i(βi) for i = 1, 2. There are similar expressions for neo(α1, β2), nuo(α1, β2), etc. Following Eberhard, we consider the expression

j = noe(α1, β2) + nou(α1, β2) + neo(α2, β1) + nuo(α2, β1) + noo(α2, β2) − noo(α1, β1). (12)
It is straightforward to enumerate all 81 possible values of the 4 different f-variables that appear in Eq. (12). This enumeration proves that j ≥ 0, independent of the values of the settings. Attaching the subscript k (k = 1, …, N) to label the events, as we did to derive the Bell-CHSH inequality, and introducing the Eberhard function

JEberhard ≡ (1/N)∑k jk, (13)
it follows immediately that JEberhard ≥ 0 for all N ≥ 1.
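The 81-case enumeration takes only a few lines. The combination below assumes Eq. (12) in the form j = noe(α1, β2) + nou(α1, β2) + neo(α2, β1) + nuo(α2, β1) + noo(α2, β2) − noo(α1, β1), reconstructed from Eberhard's inequality; the variable names are illustrative:

```python
from itertools import product

def n_o(f): return f * (f + 1) // 2   # 1 iff the fate is o (f = +1)
def n_e(f): return f * (f - 1) // 2   # 1 iff the fate is e (f = -1)
def n_u(f): return 1 - f * f          # 1 iff the fate is u (f = 0)

j_values = []
# fa1 = f1(alpha1), fa2 = f1(alpha2), fb1 = f2(beta1), fb2 = f2(beta2)
for fa1, fa2, fb1, fb2 in product((-1, 0, 1), repeat=4):
    j = (n_o(fa1) * n_e(fb2) + n_o(fa1) * n_u(fb2)
         + n_e(fa2) * n_o(fb1) + n_u(fa2) * n_o(fb1)
         + n_o(fa2) * n_o(fb2) - n_o(fa1) * n_o(fb1))
    j_values.append(j)
```

All 81 cases give j ≥ 0, and the bound 0 is attained (for example when every fate is o), so the inequality is tight.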
In the laboratory experiments [34, 85] there is only one detector per observation station. Hence it also makes sense to regard, say, the e photons as undetected. In terms of the “fate” variables f introduced above, this amounts to letting f take the values +1 and 0, corresponding to o and u, respectively. Instead of Eq. (12), we now consider the expression

jCH = nou(α1, β2) + nuo(α2, β1) + noo(α2, β2) − noo(α1, β1). (14)
Enumerating all 16 possible values of the 4 different f-variables that appear in Eq. (14) proves that jCH ≥ 0, independent of the values of the settings. Attaching the subscript k (k = 1, …, N) to label the events as before and introducing the CH function

JCH ≡ (1/N)∑k jCH,k, (15)
it follows immediately that the CH inequality  JCH ≥ 0 holds for all N ≥ 1.
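The corresponding 16-case check, with only the two fates o and u so that no(f) = f and nu(f) = 1 − f, is equally short (the form of jCH is the two-fate reduction assumed above):

```python
from itertools import product

jch_values = []
# With only two fates (o = 1, u = 0), n_o(f) = f and n_u(f) = 1 - f.
for fa1, fa2, fb1, fb2 in product((0, 1), repeat=4):
    j = (fa1 * (1 - fb2)      # n_ou(alpha1, beta2)
         + (1 - fa2) * fb1    # n_uo(alpha2, beta1)
         + fa2 * fb2          # n_oo(alpha2, beta2)
         - fa1 * fb1)         # n_oo(alpha1, beta1)
    jch_values.append(j)
```

In the only potentially negative case, fa1 = fb1 = 1, the sum reduces to (1 − fa2)(1 − fb2) ≥ 0, which the enumeration confirms.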
In short: if, for all N, the x's (f's) are generated according to a CFD-compliant procedure, the Bell-CHSH (the Eberhard and CH) inequality is (are) satisfied. In essence, this result is embodied in the work of George Boole, see also Refs. [21, 83]. Moreover, as CFD implies that all Bell-type inequalities hold for all N ≥ 1, there is no room for speculating, without violating at least one of the rules of Aristotelian logic, that something “spooky” is going on if we encounter data that violate a Bell-type inequality. The logically correct conclusion that one can draw from such a violation is that these data have not been generated in a CFD-compliant manner.
10 An inequality accounting for photon identification
In this section, we address the modifications to the inequality |S| ≤ 2 that ensue when we take into account the fact that laboratory experiments employ the photon identification threshold to decide whether or not a detection event corresponds to the observation of a photon.
The average detection event counts and detection event pair correlation are given by

E1(a1, a2) = (1/N)∑k x1,k, E2(a1, a2) = (1/N)∑k x2,k, E(a1, a2) = (1/N)∑k x1,k x2,k, (16)
respectively, and we have similar expressions for the other choices of settings. In Eq. (16) and in the equations that follow, it is understood that ∑ means ∑k=1,…,N, i.e. the sum over all input events, represented by the values of the ϕ's. As shown in section 9, if the x's that enter Eq. (16) have been obtained by a CFD-compliant procedure, the correlations E(a1, a2), … satisfy Bell-type inequalities.
In contrast to Eq. (16), the average photon counts and photon pair correlation for the settings (a1, a2) are given by

E1(a1, a2) = ∑k w1,k w2,k x1,k / ∑k w1,k w2,k, E2(a1, a2) = ∑k w1,k w2,k x2,k / ∑k w1,k w2,k, E(a1, a2) = ∑k w1,k w2,k x1,k x2,k / ∑k w1,k w2,k, (17)
where, as explained in section 5.2, the w's in Eq. (17) account for the effect of the photon identification thresholds and take the values 0 or 1. Clearly, Eq. (17) is very different from Eq. (16) unless all the w's that appear in Eq. (17) are equal to 1, in which case the photon identification threshold mechanism is superfluous and, unlike in the laboratory experiment, the number of photon events and the number of detection events are the same.
In the analysis of the experimental data, the photon identification threshold is chosen such that many of the w's are zero. Hence, from the discussion in section 9, it follows immediately that with some w's zero, it is impossible to prove that the Bell-CHSH function S = S(a1, a′1, a2, a′2) ≡ E(a1, a2) − E(a1, a′2) + E(a′1, a2) + E(a′1, a′2) satisfies the inequality |S| ≤ 2.
However, it directly follows from the proof given in our earlier paper that if the x's and w's have been generated by a CFD-compliant procedure, the Bell-CHSH function S can never violate the inequality

|S| ≤ 4 − 2δ. (18)
The term 2δ in Eq. (18) measures, relative to the number of emitted pairs, how many paired events survive the rejection step: the bound exceeds the Bell-CHSH bound 2 by 2(1 − δ), which grows with the number of rejected pairs. In detail, 0 ≤ δ ≡ N′/Nmax ≤ 1, where N′ denotes the number of input events for which the negative voltage signals of all the photons are smaller than the photon identification threshold 𝓥 and Nmax is the maximum number of contributing pairs per setting. If all paired events were regarded as photon pairs, then δ = 1 and then, and only then, do we recover the Bell-CHSH inequality |S| ≤ 2. If the x's and w's have not been generated by a CFD-compliant procedure, there is only the trivial bound |S| ≤ 4.
The inequality Eq. (18) is a rigorous mathematical fact that holds if, for all N ≥ 1, the x's and w's are generated in a CFD-compliant manner and none of the denominators in Eq. (17) is identically zero (in which case no photon pairs have been detected). Conversely, if we find a set of x's and w's that yields a value of |S| that exceeds 4 − 2δ, we can only conclude that these data have not been obtained from a CFD-compliant procedure. Any other conclusion would not be logically justified.
In analogy with the derivation of Eq. (18), one may derive an Eberhard-type or CH-type inequality that accounts for the w’s but as such inequalities do not add anything to the discussion that follows, we do not discuss them any further.
11 Discrete-event simulation algorithm
In this section, we specify the algorithm and the simulation procedure in full detail. The algorithm that mimics the operation of the particle source is very simple. For each event k = 1, …, N, a uniform random generator is used to generate a floating-point number 0 ≤ ϕ1,k < 2π. This number is input to the stations with settings (a1, a′1) and another number ϕ2,k = ϕ1,k + π/2 is input to the stations with settings (a2, a′2). Because ϕ2,k = ϕ1,k + π/2, the kth event simulates the emission of a photon pair with maximally correlated, orthogonal polarizations. In this respect, we deviate from what is done in the laboratory experiments [34, 85] in the following sense. Unlike in the computer simulation, the detectors used in these laboratory experiments are not perfect. As already mentioned, Eberhard's inequality can account for reduced detector efficiencies, and this feature can be put to good use by minimizing the value of JEberhard with respect to the degree of correlation of the pair. This is what is done in the laboratory experiments [34, 85]. However, in our simulation model, the detectors are perfect. Hence the minimum value of JEberhard will be obtained by choosing maximally correlated, orthogonal polarizations. Recall that our aim here is to simulate the most ideal, perfect experiment that accounts for all the essential features of the laboratory experiments, not to simulate a real laboratory experiment including trams passing by etc.
Upon receiving the input ϕ, an observation station (see Figure 3) executes the following two steps. First it retrieves two uniform random numbers r and r′ from a list of such numbers (or, more conveniently, generates these numbers on the fly) and then computes

x = sign[cos²(a − ϕ) − r], (19)
v = −Vmax + (Vmax − Vmin) r′ |sin[2(a − ϕ)]|^d, (20)
where d is an adjustable parameter to be discussed later and 0 ≤ Vmin ≤ Vmax set the range of the voltage signal. From Eq. (20) it follows that −Vmax ≤ v, 𝓥 ≤ −Vmin, as in the laboratory experiment.
Our choice of the specific functional forms of x = x(a, ϕ, r) and v = v(a, ϕ, r′) is inspired by previous work in which it was shown that a similar model, which employs time-coincidence to identify pairs, exactly reproduces the single-particle averages and two-particle correlations of the singlet state if the number of events becomes very large [26, 72].
For any fixed value of ϕ and uniformly distributed random numbers r, the unit generates a sequence of randomly distributed x’s such that the average of the x’s agrees (within statistical fluctuations) with Malus’ law, i.e. the normalized frequencies to observe x = +1 and x = −1 are given by cos2(a−ϕ) and sin2(a−ϕ), respectively.
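This Malus-law behavior is easy to verify numerically. The threshold form below is an assumption matching the stated frequencies (x = +1 with probability cos²(a − ϕ)); the parameter values are illustrative:

```python
import math
import random

def station_x(a, phi, r):
    # Assumed threshold form matching the stated frequencies:
    # x = +1 with probability cos^2(a - phi), else x = -1.
    return 1 if r < math.cos(a - phi) ** 2 else -1

rng = random.Random(123)
a, phi, N = 0.3, 1.1, 200_000
avg = sum(station_x(a, phi, rng.random()) for _ in range(N)) / N
expected = math.cos(2 * (a - phi))   # cos^2 - sin^2 = cos 2(a - phi)
```

For 2 × 10^5 samples the empirical average agrees with cos 2(a − ϕ) to well within the statistical error of about 1/√N.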
The output variable v serves to mimic the detector traces recorded in the laboratory experiments. Note that the explicit expression of v = v(a, ϕ, r′) shows a dependence on the local setting a of the station. Such a dependence cannot be ruled out a priori and finds a post-factum justification in the fact that the simulations reproduce the results for two particles in a singlet state, see section 13.
The use of the random numbers 0 ≤ r, r′ ≤ 1 mimics the uncertainties about the outcomes, as observed in experiments. Thereby, it is implicitly understood that for every instance of new input, new values of the uniform random numbers r and r′ are generated.
By construction, the algorithm is a metaphor for Einstein-local experiments: changing a1 (a2) does not affect the present, past or future values of x2 (x1) or v2 (v1). In plain words, the output of one particular unit depends on the input to that particular unit only.
For the settings of the observation stations, we take a1 = θ + π/8, a′1 = a1 + π/4, a2 = π/8, a′2 = 3π/8 and let θ vary from 0 to π. For this choice of settings, quantum theory for a system in the singlet state predicts E(a1, a2) = E(a′1, a′2) = −cos 2θ, E(a1, a′2) = −sin 2θ, E(a′1, a2) = sin 2θ and S(a1, a′1, a2, a′2) = −2√2 cos(2θ + π/4), the latter reaching its maximum 2√2 at θ = 3π/8. When we operate the computer model in non-CFD mode, random numbers are used to make a choice between the settings ai and a′i, for i = 1, 2, exactly as in the experiments [34, 85].
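A short numerical check of this quantum prediction, starting from the singlet correlation E(a, b) = −cos 2(a − b) (note the factor √2: the maximum of |S| for the singlet state is 2√2 ≈ 2.83, cf. the discussion of Figure 8(b)):

```python
import math

def E_singlet(a, b):
    # Quantum singlet-state prediction for the pair correlation.
    return -math.cos(2 * (a - b))

def S(theta):
    # Bell-CHSH function for the settings used in the simulations:
    # a1 = theta + pi/8, a1' = a1 + pi/4, a2 = pi/8, a2' = 3*pi/8.
    a1 = theta + math.pi / 8
    a1p = a1 + math.pi / 4
    a2, a2p = math.pi / 8, 3 * math.pi / 8
    return (E_singlet(a1, a2) - E_singlet(a1, a2p)
            + E_singlet(a1p, a2) + E_singlet(a1p, a2p))
```

Over a grid of θ values, S(θ) coincides with −2√2 cos(2θ + π/4) to machine precision and reaches 2√2 at θ = 3π/8.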
The simulation procedure is quite simple. We choose a fixed photon identification threshold 𝓥, generate input pairs k = 1, …, N, collect the corresponding outputs in terms of x's and w's, and compute the single- and two-particle averages according to Eq. (17), the Bell-CHSH function S(a1, a′1, a2, a′2) = E(a1, a2) − E(a1, a′2) + E(a′1, a2) + E(a′1, a′2), and the Eberhard function JEberhard given by Eq. (13).
Because the computer experiment is “perfect”, it differs from the laboratory experiment in the sense that all pairs are created “on demand” and all emitted pairs create one detection event in each station (there are no “false” detection events) but exactly as in the laboratory experiment, the local photon identification threshold at each observation station serves to decide whether a photon has arrived or not.
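The whole procedure can be sketched end-to-end. The functional forms of x and v below are a hedged reconstruction of Eqs. (19) and (20) (consistent with the Malus-law frequencies, the voltage range, and the d = 0 behavior reported in section 13), not necessarily the exact formulas; the d = 0 check is an analytically solvable special case:

```python
import math
import random

VMIN, VMAX = 0.5, 1.0   # voltage range used in section 12

def station(a, phi, rng, d):
    """One observation station; assumed forms:
       x = sign[cos^2(a - phi) - r],
       v = -Vmax + (Vmax - Vmin) * r' * |sin 2(a - phi)|^d."""
    r, rp = rng.random(), rng.random()
    x = 1 if r < math.cos(a - phi) ** 2 else -1
    v = -VMAX + (VMAX - VMIN) * rp * abs(math.sin(2 * (a - phi))) ** d
    return x, v

def correlation(a1, a2, N, d, threshold, seed=0):
    """Photon pair correlation E(a1, a2) in the spirit of Eq. (17):
    only events with both voltages below the identification
    threshold contribute."""
    rng = random.Random(seed)
    num = den = 0
    for _ in range(N):
        phi1 = rng.uniform(0.0, 2.0 * math.pi)
        phi2 = phi1 + math.pi / 2.0          # orthogonal polarizations
        x1, v1 = station(a1, phi1, rng, d)
        x2, v2 = station(a2, phi2, rng, d)
        w1, w2 = int(v1 <= threshold), int(v2 <= threshold)
        num += w1 * w2 * x1 * x2
        den += w1 * w2
    return num / den if den else 0.0

# With d = 0 every event passes a threshold of -VMIN and the model is
# analytically solvable: E(a1, a2) = -(1/2) cos 2(a1 - a2).
E0 = correlation(0.4, 0.1, 200_000, d=0, threshold=-VMIN)
```

Lowering the threshold toward −Vmax with d = 4 then selects the subset of events whose statistics approach the singlet correlation, which is the effect studied in section 12.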
12 Computer simulation results
This section reports the results of simulations with N = 10^5 events for the CFD-compliant model and N = 10^5 events per setting for the non-CFD model, with Vmin = 1/2 and Vmax = 1. Note that the “time-tag threshold” and the “trigger threshold” (terminology from Ref.  (supplementary material)) are important to the laboratory implementation but are superfluous here, meaning that they do not affect the results of our computer experiments in any way. Indeed, in our perfect experiments, there is no ambiguity in determining when a particle arrives at the observation station. Nevertheless, to counter possible (pointless) critique that we have not incorporated into our simulation model the two thresholds that are essential to the laboratory implementation, we have chosen Vmin = 1/2 in order to leave room for introducing these thresholds.
We limit the discussion to the case d = 4 because we know from earlier work [20, 72, 90], which uses time-coincidence selection, that for d = 4 the computer model reproduces the quantum theoretical result of the correlation of two particles in the singlet state, Malus’ law for the single-particle averages etc. if N → ∞ followed by 𝓥 → − Vmax.
It is not difficult to see that single-particle averages E1(a1, a2), E2(a1, a2) etc. are expected to be zero, up to fluctuations. The reason is that ϕ → ϕ + π/2 changes the sign of the x’s but has no effect on the values of the v’s (see Eq. (20)). Therefore, if the ϕ’s uniformly cover [0, 2π[, the number of times that x = +1 and x = −1 appear is about the same. All our simulation results are in concert with this prediction.
In Figure 6, we present the simulation data of the correlation E(a1, a2) (◯), the single-particle averages E1(a1, a2) (△) and E2(a1, a2) (▽) as a function of θ=a1 − a2, as obtained from a CFD-compliant (Figure 6(a)) and non-CFD (Figure 6(b)) simulation. All the simulation data are in excellent agreement with the quantum theoretical description of a two-particle system in the singlet state which predicts E1(a1, a2) = E2(a1, a2) = 0 and E(a1, a2) = −cos 2θ. Within statistical fluctuations, it is difficult to distinguish between CFD-compliant and non-CFD simulation data, in concert with our earlier work .
In Figure 7 we show the data of the Bell-CHSH function S(θ + π/8, θ + 3π/8, π/8, 3π/8) as a function of θ. Clearly, the simulation results are in excellent agreement with the quantum theoretical prediction S(θ + π/8, θ + 3π/8, π/8, 3π/8) = −2√2 cos(2θ + π/4). In both Figures 6 and 7, there are deviations from the quantum theoretical prediction which are not due to statistical fluctuations. These deviations can be reduced systematically and eventually vanish by letting 𝓥 → −Vmax (with 𝓥 > −Vmax), a fact that can be proven rigorously for the probabilistic version of the simulation model [20, 72, 90].
We note in passing that the observation that the frequency distribution of many events agrees with the probability distribution of a singlet state is a post-factum characterization of the repeated preparation and measurement process only, not a demonstration that at the end of the preparation stage, each pair of particles actually is in an entangled state. The latter describes the statistics, not a property of a particular pair of particles .
Results for the Eberhard function Eq. (13) as a function of θ are given in Figure 8(a). The correspondence between the symbols used in Eberhard's paper and in this paper is as follows: o ⇔ +1, e ⇔ −1, α1 ⇔ a′1, α2 ⇔ a1, β1 ⇔ a2, and β2 ⇔ a′2. As expected from the requirements needed to derive Eq. (13) (see section 9), the CFD-compliant simulation without the photon identification threshold satisfies JEberhard ≥ 0 for all θ, whereas processing the data as in the laboratory experiment, i.e. by employing a photon identification threshold, yields JEberhard < 0 for a non-zero interval of θ's. The reason for this apparent violation is that the data obtained through the application of the photon identification mechanism do not satisfy the mathematical requirements for deriving Eq. (13). As is clear from Figure 8(a), the results for the Eberhard function Eq. (13) do not change significantly if we replace the CFD-compliant simulation model by its non-CFD version.
For completeness, in Figure 8(b) we present results for the function δ which determines the upper bound on the Bell-CHSH function in the case that the photon identification threshold is being used to discard detection events (see Eq. (18)). From Figure 8(b), it follows that δ < 0.3. Hence Eq. (18) predicts an upper bound that is not smaller than 3.4, large enough to include the maximum value 2√2 ≈ 2.83 predicted by the quantum theory of the polarizations of two photons (or, equivalently, two spin-1/2 particles).
Finally, it is instructive to compare the fraction of detection events that the photon identification threshold accepts as photons. In the laboratory experiment, the number of trials is about 3.5 × 10^9 and the total number of so-called “relevant counts”, i.e. the number of times that at least one photon was identified by means of the photon identification thresholds (applied in software), is about 1.8 × 10^5. Thus, in this experiment the number of events considered to be relevant for the physics, relative to the number of detection events, is about 0.005%. For comparison, in the simulations, a photon identification threshold 𝓥 = −0.995 identifies about 23% of the detection events as photons, several orders of magnitude more than in the laboratory experiment. Clearly, the quality of the data collected in the laboratory experiments is not on par with the quality of the data produced by the computer experiments but, obviously, the latter are much easier to realize and use than the former.
13 Post-factum justification of the simulation model
We have already drawn attention to the fact that the explicit expression of the voltage v = v(a, ϕ, r′) shows a dependence on the local setting of the station through the factor |sin[2(a − ϕ)]|^d (see Eqs. (19) and (20)) and mentioned that such a dependence cannot be ruled out a priori. In this section, we examine the consequences of removing this dependence.
In Figure 9, we show results for d = 0, in which case the random variations of the voltage signals v1, v′1, v2, and v′2 do not depend on a1, a′1, a2, and a′2, respectively. Instead of E(a1, a2) ≈ −cos 2θ for d = 4, the simulation for d = 0 yields E(a1, a2) ≈ −(1/2) cos 2θ and, as Figure 9(b) shows, |S(θ + π/8, θ + 3π/8, π/8, 3π/8)| ≤ 2. Therefore, the only way to have simulation models of the laboratory experiment reproduce the quantum theoretical prediction of the polarizations of two photons (or, equivalently, two spin-1/2 particles) is to assume that v1, v′1, v2, and v′2 depend on a1, a′1, a2, and a′2, respectively. Of course, there is no good argument why, in a particular experiment, this dependence should be of the form Eq. (20). We repeat that we have chosen the form Eq. (20) because our main goal is to reproduce, by a CFD-compliant, manifestly non-quantum model, the quantum theoretical predictions of the polarizations of two photons (or, equivalently, two spin-1/2 particles), which, as a by-product, yields |S(θ + π/8, θ + 3π/8, π/8, 3π/8)| > 2.
Disregarding the original motivation to perform the Bell-test experiments, the experimental setup shown in Figure 1 can be regarded as a tool to characterize the response of the observation stations to the incoming signals. In the case at hand, what is under scrutiny is the response of the observation station, i.e. of its optical components, the transition-edge detector and the electronics that amplifies its current, under the condition that the incident light is extremely feeble. Viewed from this perspective, our simulations support the hypothesis that the laboratory experiments [34, 85] convincingly demonstrate that the statistics of the observed photons, as defined by the photon detection threshold, depends on the settings (and hence on the polarizations assigned to the photons) of the observation stations.
It is of interest to mention here that, since the early days of the Bell-test experiments, it has been well known that application of a Bell-type model requires at least one extra assumption. We reproduce here a passage from Ref.  (p. 1890): “The approach used by CHSH is to introduce an auxiliary assumption, that if a particle passes through a spin analyser, its probability of detection is independent of the analyser’s orientation. Unfortunately, this assumption is not contained in the hypotheses of locality, realism or determinism.” It is stunning that although there is at least one auxiliary assumption involved in testing e.g. the CHSH inequality with Bell-test data, the possibility that this assumption is not valid is, to the best of our knowledge, ignored in the experimental studies. As a matter of fact, as we have argued above, all Bell-test experiments with photons performed up to this day can be regarded as direct experimental proof that this auxiliary assumption is invalid. In view of the intricate atomic-scale processes that are involved when light passes through a material, this conclusion seems very reasonable but is, of course, far less spectacular than the conclusion that Bell-type experiments can be used to rule out certain world views.
The general message of this paper is that a model that purports to describe the data produced by an experiment should account for all the data that are relevant for the analysis of the experimental results. In the case at hand, the situation is as follows:
(i) the experimental data are interpreted in terms of a Bell-type model that uses only half of the variables (the x's),
(ii) in the actual experiments, the other half of the variables (the v's) is essential for the identification of the photons but is ignored in Bell-type models,
(iii) the failure of Bell-type models to describe the experimental data is taken as proof that “local realism” (local in Einstein's sense) is incompatible with quantum theory and is therefore declared dead.
We believe that it requires an exotic form of logic to reconcile the last statement (iii) with the second one (ii).
To head off possible misunderstandings: the authors of this paper do not necessarily subscribe to all or any forms of what is called local realism, CFD theories, or … We are of the opinion that arguments based on Bell's theorem in conjunction with Bell-type experiments suffer from what we earlier called the photon identification loophole. One simply cannot blame a model that only accounts for part of the data for not describing all of them. In this regard, Albert Einstein's dictum “make it as simple as possible, but not simpler” is more pertinent than ever.
The challenge for the Bell-experiments community is, therefore, to construct an EPRB-type experiment with a photon (pair) identification procedure that cannot, from the perspective of the simple Bell models, be turned into a “loophole”. Our general proofs of the derivation of Bell-type inequalities for actual data (see section 9) indicate that this challenge cannot be met.
We would like to thank D. Willsch, F. Jin, and M. Nocon for useful comments and discussions.
A Probabilistic models
Traditionally, mathematical models of the EPRB experiments are formulated in terms of probabilistic models [1, 2, 5, 9, 10, 14, 16, 17, 21, 29, 30, 31, 32, 33, 35, 39, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 58, 60, 61, 62, 63, 64, 65, 66, 67, 70, 71, 73, 74, 75, 76, 77, 80, 83, 84, 86], often without explicitly mentioning Kolmogorov’s axiomatic framework of probability theory [37, 59]. However, there is a considerable, conceptual gap between a laboratory EPRB experiment and a probabilistic model thereof. The (over)simplifications required to come up with a tractable, proper probabilistic model of a laboratory EPRB experiment are key to the understanding of the phenomena involved.
When we use the “digital computer – laboratory experiment” metaphor, both the simplification and replacement are made during the formulation of the computer model. A computer simulation algorithm entails a complete specification of how the data are generated. In this respect, all “physically relevant” processes are well-defined (by construction) and known explicitly in full detail. There are no uncertainties or unknown influences. Note that the reverse operation, i.e. to construct an algorithm for a digital computer out of a probabilistic model is, as a matter of principle, impossible. At most, a probabilistic model can serve as a guide to construct an algorithm, see also .
In the particular case where the observed phenomena take the form of data generated by simulations of EPRB experiments on a digital computer, the transition from the observed phenomena to suitable mathematical models does not suffer from the many “uncertain” factors that may or may not play an essential role in the laboratory experiments and is, therefore, a fairly simple transition. In this section, we start from a computer simulation model and construct the probabilistic model thereof. We start by showing that the simplest of these models are identical to those proposed and analyzed by Bell  and then move on to the construction of a probabilistic model for the computer model of the recent Bell-type experiments [34, 85] that we use in our simulation work presented in section 12.
A.1 Bell-type models
Consider the CFD simulation model in which we explicitly ignore the v-variables. For a given input to the observation stations, the outcome is one of the so-called elementary events [37, 59], in this case one of the 16 different quadruples (x1, x′1, x2, x′2) with x1, x′1, x2, x′2 = ±1. In the language of probability theory, the set of these 16 different quadruples is called the sample space Ω [37, 59], the set of elementary events, from which we construct the so-called σ-field 𝓕 of subsets of Ω, containing the impossible (null) event and all the (compound) events in whose occurrences we may be interested [37, 59]. In modeling the computer experiments, we only need to consider finite sets, hence we do not have to worry about the mathematical subtleties that arise when dealing with infinite sets [37, 59].
The next step is to assign a real number between zero and one, a probability measure, that expresses the likelihood that an element of the set Ω occurs [37, 59]. We denote this (conditional) probability measure by P(x1, x′1, x2, x′2|a1, a′1, a2, a′2, Z), the conditioning on a1, a′1, a2, a′2 and Z indicating that the settings and all other conditions, denoted collectively by Z, do not change during the imaginary probabilistic experiment. By definition, the probability measure satisfies ∑Ω P(x1, x′1, x2, x′2|a1, a′1, a2, a′2, Z) = 1 [37, 59].
Note that a probability measure is a purely mental construct. If it were not, we could interchange the experiment/computer simulation, the results of which directly connect to our senses, with the imaginary world of mathematical models and prove theorems not only about the mathematical description but also about our sensory experiences, a tantalizing possibility. One such example that exploits intricate features of set theory is given in Ref. , in which it is explicitly stated that there does not exist an algorithm to actually calculate the relevant functions. In other words, this example cannot be realized on a physical device such as a digital computer, not even approximately. Moreover, unlike the simulation algorithm executing on a digital computer, the probabilistic description does not contain a specification of the process that actually produces an event: we have to call upon Tyche to do this for us. In other words, a probabilistic model is incomplete in that it only describes the outcomes of the simulation procedure, not the procedure itself. However, this incompleteness is partially compensated for by the fact that the calculation of averages and correlations no longer involves the number of events N. We have for instance

E͠12 = E͠(a1, a2) = ∑Ω x1 x2 P(x1, x′1, x2, x′2|a1, a′1, a2, a′2, Z), (21)
and we have similar expressions for the other two-particle averages and also for the single-particle averages. Here and in the following, we use the shorthand notation ∑Ω = ∑x1=±1 ∑x′1=±1 ∑x2=±1 ∑x′2=±1. We have written E͠ instead of E to emphasize that the former has been calculated within a probabilistic model whereas the latter involves calculations with actual data. From Eq. (21), we have

S͠ = S͠(a1, a′1, a2, a′2) ≡ E͠(a1, a2) − E͠(a1, a′2) + E͠(a′1, a2) + E͠(a′1, a′2), (22)
and because the elementary events are quadruples, it follows directly from Eq. (10) that |S͠| ≤ 2. Thus, in the probabilistic realm, not in the world of the observed two-valued data, the existence of the Bell-CHSH inequality follows from the existence of a probability measure for the elementary events, the quadruples (x1, x′1, x2, x′2). Moreover, it can be shown that, with some additional requirements on its marginals, the existence of such a probability measure is necessary and sufficient for Bell-type inequalities to hold [11, 30, 82, 87, 88].
It is clear from Figure 5 that, for i = 1, 2, the outcomes in each wing depend only on the corresponding local setting ai. However, the probability measure for quadruples does not express this basic property of the computer model, nor does it explicitly express the dependence on the ϕ’s.
A simple way to incorporate all these features of the simulation model in a probabilistic description is to define a new joint probability measure for quadruples by (23)
where P(x1|a1, ϕ1, Z) etc. are the “local” probabilities to observe x1 etc., the integration is over the whole domain of ϕ1 and ϕ2, and μ(ϕ1, ϕ2) is a non-negative, normalized density. With the new probability measure Eq. (23), Eq. (21) simplifies considerably. For instance, for the detection events we have E͠12 = E͠(a1, a2) where

E͠(a1, a2) = ∑x1,x2=±1 x1 x2 ∫∫ P(x1|a1, ϕ1, Z) P(x2|a2, ϕ2, Z) μ(ϕ1, ϕ2) dϕ1 dϕ2 . (24)
Instead of Eq. (22), we now have (25)
From Eq. (23), it follows directly that

P′(x1, x2|a1, a2, Z) = ∫∫ P(x1|a1, ϕ1, Z) P(x2|a2, ϕ2, Z) μ(ϕ1, ϕ2) dϕ1 dϕ2 , (26)

which expresses the probability measure P′(x1, x2|a1, a2, Z) in terms of the single-variable probability measures P(x1|a1, ϕ1, Z) and P(x2|a2, ϕ2, Z) and the density μ(ϕ1, ϕ2) of the variables ϕ1 and ϕ2.
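A factorized model of this kind is easy to evaluate numerically. The sketch below estimates E͠(a1, a2) by Monte Carlo sampling for a hypothetical choice of the ingredients: deterministic sign outcomes, perfectly anti-correlated polarizations, and a uniform density μ. None of these choices are prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def local_outcome(a, phi):
    # A deterministic "local" rule: x = sign(cos 2(phi - a)),
    # depending only on the local setting a and the shared phi.
    return np.sign(np.cos(2 * (phi - a)))

def E(a1, a2):
    # phi1 = phi2 = phi drawn from a uniform density mu(phi1, phi2);
    # the second photon is polarized at phi + pi/2, flipping its sign.
    phi = rng.uniform(0, 2 * np.pi, N)
    x1 = local_outcome(a1, phi)
    x2 = -local_outcome(a2, phi)
    return np.mean(x1 * x2)

# This model yields the sawtooth -(1 - 4|a1-a2|/pi) for 0 <= a1-a2 <= pi/2,
# not the singlet result -cos 2(a1-a2).
print(round(E(0.0, np.pi / 8), 2))  # -0.5, instead of -cos(pi/4) = -0.71
```

Like every model of the factorized form Eq. (26), this one produces pair averages that satisfy the Bell-CHSH inequality.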
The factorized form Eq. (26) is the landmark of the so-called “local hidden-variable models” [5, 13]. Although “local” is often used to express the notion that physical influences do not travel faster than the speed of light, in the context of a probabilistic model (computer model) it expresses statistical (arithmetic) independence only. Bell’s theorem uses the factorized form Eq. (26) to state that quantum mechanics is incompatible with local realism, the world view in which physical properties of objects exist independently of measurement and in which physical influences cannot travel faster than the speed of light.
In one respect, Eq. (26) is grossly deceptive: it does not reflect the elementary fact that the parent probability measure Eq. (23), from which Eq. (26) follows, concerns quadruples, not pairs. Without the knowledge that Eq. (26) is in fact a marginal distribution of the probability measure Eq. (23) for quadruples, one is inclined to think, as Bell did and his followers still seem to do, that there are “physical” assumptions involved in justifying the factorized form Eq. (26). However, this is not the case because the Bell-type inequalities hold if and only if there exists a joint probability measure for the quadruples. This mathematical statement is void of any physical meaning.
In summary: in this subsection we have shown that a probabilistic model of the CFD computer simulation model that does not account for the photon identification mechanism of the EPRB laboratory experiment automatically leads to the models introduced by Bell. Within this framework, the existence of Bell-type inequalities and the existence of the corresponding joint probability measures are mathematically equivalent. The latter statement, which relates to imaginary data only, corresponds to the statement that, in the realm of actual two-valued data, the existence of Bell-type inequalities and the CFD-compliant generation of all the quadruples are mathematically equivalent, see section 9.
A.2 Incorporating the photon identification threshold
Referring to Eq. (5), the extension of the construction outlined in section A.1 to incorporate the local photon identification mechanism is of a purely technical nature. Instead of Eq. (23), we now introduce (27)
where P(v1|a1, ϕ1, Z) etc. are the “local” probability densities to pick v1 etc., and all other symbols have the same meaning as in Eq. (23).
It is now straightforward to write down the probabilistic expressions that incorporate the effect of the photon identification threshold in exactly the same manner as in the analysis of the laboratory experiment data. For instance, we have for the photon counts
and, as before, we have similar expressions for the other expectations in Eq. (21) and for the single-particle averages.
The expressions for the single- and two-particle averages that derive from the probabilistic model Eq. (27) all have the form that is characteristic of a genuine “local hidden-variable model”, as exemplified by Eq. (28). Only “local” detection and photon identification are involved. The values of the variables local to one observation station do not depend on variables that are local to the other observation station. The only form of “communication” between the stations is through the “hidden” variables ϕ1 and ϕ2.
It directly follows from the general discussion of section 6 that A(a1, a2) and B(a1, a2) can be expressed in terms of both the local time-window and time-coincidence selection. In detail, for the local time-window selection we have
and for the time-coincidence selection we have
From earlier work based on the representation Eq. (30) [20, 26, 72] and from the simulation results presented in section 12, it follows directly that the probabilistic model defined by Eq. (27) is capable of reproducing the predictions of quantum theory for the single- and two-particle averages of two photon polarizations in the singlet state. This should put an end to the spreading of the misconception that Bell has proven quantum theory to be incompatible with all “local hidden-variable models”. Of course, “local hidden-variable models” that do not include the mechanism, essential for the laboratory experiment, to identify single photons or pairs of photons are incapable of describing the salient features of the experimental data, but as explained in section 7, that is hardly more than a platitude.
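The effect of the time-coincidence selection can be made concrete with a small event-by-event simulation in the spirit of the models of Refs. [20, 26, 72]. All model details below (the time-tag distribution t = r sin⁴ 2θ with r uniform in [0, 1], the window W, and the parameter values) are illustrative assumptions rather than the exact model of section 12:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
W = 0.005                  # coincidence window, in units of the maximum delay
a1, a2 = 0.0, np.pi / 8    # polarizer settings in the two wings

# Each pair: photon 1 polarized at phi, photon 2 at phi + pi/2.
phi = rng.uniform(0, 2 * np.pi, N)
theta1 = phi - a1
theta2 = phi + np.pi / 2 - a2

# Local outcomes, depending only on the local setting and the shared phi.
x1 = np.sign(np.cos(2 * theta1))
x2 = np.sign(np.cos(2 * theta2))

# Local time tags: t = r * sin^4(2 theta), r uniform in [0, 1].
t1 = rng.uniform(0, 1, N) * np.sin(2 * theta1) ** 4
t2 = rng.uniform(0, 1, N) * np.sin(2 * theta2) ** 4

# Without selection: the sawtooth correlation, about -0.5 at these settings.
E_all = np.mean(x1 * x2)

# Time-coincidence selection: keep only pairs with |t1 - t2| < W.
coincident = np.abs(t1 - t2) < W
E_coin = np.mean(x1[coincident] * x2[coincident])

# The coincident subset is markedly more negative, approaching the
# singlet value -cos 2(a1 - a2) as W shrinks.
print(E_all, E_coin, int(coincident.sum()))
```

Each outcome and time tag is computed from the local setting and the shared ϕ only, yet the coincident subset yields a correlation close to the singlet value −cos 2(a1 − a2) ≈ −0.71, while the unselected data give only the sawtooth value −0.5.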
In summary: in this appendix we have shown how to construct probabilistic descriptions of computer simulation models of EPRB experiments that rely on photon identification thresholds to decide whether or not a photon has been detected. The resulting probabilistic models conform to the requirements of genuine “local hidden-variable models” and, if they account for a local mechanism to identify photons, are also capable of producing results that are in full agreement with quantum theory.
Ballentine L.E., Quantum Mechanics: A Modern Development, World Scientific, Singapore, 2003
Bell J.S., Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, Cambridge, 1993
Bell J.S., On the Foundations of Quantum Mechanics, World Scientific, Singapore, 2001
Bohm D., Quantum Theory, Prentice-Hall, New York, 1951
Brans C., Bell’s theorem does not eliminate fully causal hidden variables, Int. J. Theor. Phys., 1987, 27, 219–226
Brody T., The Philosophy Behind Physics, Springer, Berlin, 1993
Brody T.A., The Suppes-Zanotti theorem and the Bell inequalities, Revista Mexicana de Física, 1989, 35, 170–187
Christensen B., McCusker K., Altepeter J., Calkins B., Lim C., Gisin N., Kwiat P., Detection-loophole-free test of quantum non-locality, and applications, Phys. Rev. Lett., 2013, 111, 130406
de Muynck W.M., De Baere W., Martens H., Interpretations of quantum mechanics, joint measurement of incompatible observables and counterfactual definiteness, Found. Phys., 1994, 24, 1589–1664
De Raedt H., De Raedt K., Michielsen K., Keimpema K., Miyashita S., Event-based computer simulation model of Aspect-type experiments strictly satisfying Einstein’s locality conditions, J. Phys. Soc. Jpn., 2007, 76, 104005
De Raedt H., De Raedt K., Michielsen K., Keimpema K., Miyashita S., Event-by-event simulation of quantum phenomena: Application to Einstein-Podolsky-Rosen-Bohm experiments, J. Comput. Theor. Nanosci., 2007, 4, 957–991
De Raedt H., Jin F., Michielsen K., Data analysis of Einstein-Podolsky-Rosen-Bohm laboratory experiments, Proc. SPIE, 2013, 8832, 88321N1–11
De Raedt H., Michielsen K., Hess K., The digital computer as a metaphor for the perfect laboratory experiment: Loophole-free Bell experiments, Comp. Phys. Comm., 2016, 209, 42–47
De Raedt H., Michielsen K., Jin F., Einstein-Podolsky-Rosen-Bohm laboratory experiments: Data analysis and simulation, AIP Conf. Proc., 2012, 1424, 55–66
Fine A., The Shaky Game: Einstein, Realism and the Quantum Theory, University of Chicago Press, Chicago, 1996
Giustina M., Versteegh M.A.M., Wengerowsky S., Handsteiner J., Hochrainer A., Phelan K., Steinlechner F., Kofler J., Larsson J.A., Abellán C., Amaya W., Pruneri V., Mitchell M.W., Beyer J., Gerrits T., Lita A.E., Shalm L.K., Nam S.W., Scheidl T., Ursin R., Wittmann B., Zeilinger A., Significant-loophole-free test of Bell’s theorem with entangled photons, Phys. Rev. Lett., 2015, 115, 250401
Grimmett G.R., Stirzaker D.R., Probability and Random Processes, Clarendon Press, Oxford, 1995
Hensen B., Bernien H., Dreau A.E., Reiserer A., Kalb N., Blok M.S., Ruitenberg J., Vermeulen R.F.L., Schouten R.N., Abellan C., Amaya W., Pruneri V., Mitchell M.W., Markham M., Twitchen D.J., Elkouss D., Wehner S., Taminiau T.H., Hanson R., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature, 2015, 526, 682–686
Hess K., Einstein Was Right!, Pan Stanford Publishing, Singapore, 2015
Hess K., De Raedt H., Michielsen K., From Boole to Leggett-Garg: Epistemology of Bell-type inequalities, Adv. Math. Phys., 2016, 2016, 4623040
Jaynes E.T., Clearing up mysteries – the original goal, In: Skilling J. (Ed.), Maximum Entropy and Bayesian Methods, Kluwer Academic Publishers, Dordrecht, 1989, 36, 1–27
Khrennikov A., Nonlocality as well as rejection of realism are only sufficient (but non-necessary!) conditions for violation of Bell’s inequality, Inf. Sciences, 2009, 179, 492–504
Khrennikov A.Y., Interpretations of Probability, VSP Int. Sc. Publishers, Utrecht, 1999
Khrennikov A.Y., Contextual Approach to Quantum Formalism, Springer, Berlin, 2009
Khrennikov A.Y., Violation of Bell’s inequality and non-Kolmogorovness, AIP Conf. Proc., 2009, 1001, 86
Khrennikov A.Y., On the role of probabilistic models in quantum physics: Bell’s inequality and probabilistic incompatibility, J. Comput. Theor. Nanosci., 2011, 8, 1006–1010
Kolmogorov A., Foundations of the Theory of Probability, Chelsea Publishing Co., New York, 1956
Kupczynski M., Entanglement and quantum nonlocality demystified, AIP Conf. Proc., 2012, 1508(1), 253–264, 10.1063/1.3567465
Kupczynski M., Causality and local determinism versus quantum nonlocality, J. Phys.: Conference Series, 2014, 504(1), 012015
Kupczynski M., Entanglement and quantum nonlocality demystified, Found. Phys., 2015, 45, 735–753
Kupczynski M., EPR paradox, quantum nonlocality and physical reality, Journal of Physics: Conference Series, 2016, 701(1), 012021
Leggett A.J., Garg A., Quantum mechanics versus macroscopic realism: Is the flux there when nobody looks?, Phys. Rev. Lett., 1985, 54, 857–860
Matzkin A., Is Bell’s theorem relevant to quantum mechanics? On locality and non-commuting observables, AIP Conf. Proc., 2009, 1101, 339–348
Nieuwenhuizen T., Kupczynski M., The contextuality loophole is fatal for the derivation of Bell inequalities: Reply to a comment by I. Schmelzer, Found. Phys., 2017, 47, 316–319
Nieuwenhuizen T.M., Where Bell went wrong, AIP Conf. Proc., 2009, 1101, 127–133
Pearl J., Causality: Models, Reasoning, and Inference, Cambridge University Press, Cambridge, 2000
Shalm L.K., Meyer-Scott E., Christensen B.G., Bierhorst P., Wayne M.A., Stevens M.J., Gerrits T., Glancy S., Hamel D.R., Allman M.S., Coakley K.J., Dyer S.D., Hodge C., Lita A.E., Verma V.B., Lambrocco C., Tortorici E., Migdall A.L., Zhang Y., Kumor D., Farr W.H., Marsili F., Shaw M.D., Stern J.A., Abellán C., Amaya W., Pruneri V., Jennewein T., Mitchell M.W., Kwiat P.G., Bienfang J.C., Mirin R.P., Knill E., Nam S.W., Strong loophole-free test of local realism, Phys. Rev. Lett., 2015, 115, 250402
Sica L., Bell’s inequalities I: An explanation for their experimental violation, Opt. Comm., 1999, 170, 55–60
Published Online: 2017-11-22