1.1 The issue in dispute
Frequently in human affairs, be they world wars or parlor games, the participants become so fixated on the tactics of the moment that fundamental strategic considerations are overlooked. This writer holds that this is exactly the current situation with regard to the theoretical and experimental study of the nature and consequences of Bell’s Theorem. What has become known as “closing loopholes in the experimental verification of Bell’s Theorem” is a systematic attack on aspects of the experiments arising from possible ancillary technicalities of the experimental setups. It is thought that peculiarities pertaining to the experimental equipment, or overlooked physical effects, may introduce erroneous data supporting misleading conclusions. Very few of those concerned with the general validity of the experimental verification of Bell’s Theorem also concern themselves with the more fundamental question: is the theorem itself, aside from practical laboratory realities and exotic hypothetical effects, as a statement within the ambit of Quantum Theory, valid? Is it self-consistent? And is it rationally related to entities in the natural world? In short, is Bell’s Theorem logically correct in its ideal form, ignoring practical subsidiary laboratory complications?
In principle, if any statement is conceptually false, then rigorous, logical analysis can identify the offending assumption or deduction in the reasoning chain taken in the attempt to “prove” the conclusion. In mathematics, this process is denoted disproving a theorem. This is a formal matter. Formal logic, however, provides a simpler means to indisputably reject a theorem: display a single counterexample.
Informally, it is likewise instinctively understood that any statement or idea afflicted with a fatal error, frequently due to complexity or the nonavailability of essential information, can nevertheless be seen to be false because some consequence of the statement, or a derived idea which should be valid if the statement is true, is in fact invalid. Such a situation can sometimes obtain even when there is no obvious connection to a formal proof or disproof of the statement itself. These “secondary” or derivative falsehoods or inconsistencies can be called “clues.” Often there is no obvious connection of such clues to a formal statement, and very often they are disregarded as irrelevant.
Now, the critical literature negating the consequences of Bell’s analysis, known as his “Theorem,”1 to the best knowledge of this writer, contains three long-term schools of analysis criticizing the formal proofs of Bell’s Theorem; in addition, many single publications proffer clues. The latter are mostly accidental discoveries made in investigations, not always of Bell’s Theorem itself, but of some phenomenon used for many purposes, only one of which is involved in an experimental proof of Bell’s Theorem. Some of these “clues” may also be denoted by custom as “loopholes,” which can be distinguished from pure clues by their relevance, not to the core validity of “Bell’s Theorem,” but just to its empirical verification.
Herein, a line of critical analysis of “Bell’s Theorem” based on the observation that Bell mistook the use of coincident probabilities is described. Bell’s analysis deduced an inequality that, he asserted, must be respected by all theories that are local (i.e., conventionally causal: the causes of all effects lie within the effects’ past light cones) and realistic (i.e., all material entities exist independently of human intervention or observation).
1.2 Clues and counterexamples
The current state of the art regarding proofs of Bell’s Theorem is that experimental realizations of the structure of the inequalities deduced in abstract proofs of Bell’s analysis find that Bell’s inequality is violated for auspiciously chosen parameters. Bell’s analysis states, in short, that without some contribution of irreality (wave function collapse induced by human observation) and/or nonlocality (superluminal interaction), the observed results, i.e., the inequality violation, could not have been obtained. This conclusion is meant to be a logical deduction; in other words, it is a necessary condition for the validity of assertions to the effect that “experiments prove Bell’s Theorem.” If it is not valid, then all results from empirical tests of Bell’s analysis are ambiguous, insofar as they may have a conventional explanation; i.e., the experimental ‘proofs’ as such fail to satisfy the proclaimed theoretical deductions.
However, there exist relatively numerous examples of classical phenomena, manifestly lacking any hint of irreality or nonlocality, that violate the very same inequalities. These ‘counterexamples’ usually are based on some macroscopic, classical realization of the microscopic phenomena exploited for Bell-test experiments. Results from these experiments, to the degree practical, violate Bell inequalities numerically in exactly the same way as do Bell verification experiments. As argued above, this should be impossible if Bell’s analysis is logically fault-free. In other words, the conclusion that a Bell Inequality cannot be violated without irreality or nonlocality is baseless.
Herein, first, pioneering studies presenting examples of such nonquantum phenomena tending to disprove Bell’s core conclusion by means of counterexample are briefly reviewed. These include those by A. O. Barut & collaborators, by Perdijon, and by Mizrahi and Moussa. Thereafter, the reason these obviously classical models coincide with the otherwise considered “quantum result” is discussed.
2 The Vanguard
2.1 A. O. Barut & collaborators
In a series of papers beginning about 1984, A. O. Barut and various collaborators advanced the contention that, for spin-1/2 particles, the average over a classical model of an ensemble of similar particles yields the same correlations as does Quantum Mechanics. They based their analysis on the known fact that Quantum Mechanics addresses only the expectation values of measurable parameters while having nothing to say about individual measurements. This then allows for reasonable physical assumptions regarding individual systems, which they take to be that an ensemble of such entities with spin can have a random distribution of spin orientations over a sphere. These hypothetical inputs imply that, for their model, the singlet state is not a representation of an individual entity with spin, but rather a formalized expression for calculating expectation values for a randomly oriented ensemble of such entities.
In 1986 Barut and his student M. Božić extended their study to the triplet state; and Barut reported explicit examples of hidden-variable renditions of Bell Inequality tests, thereby claiming to have found a counterexample to the widely accepted assertion that Quantum Mechanics cannot accept such variables.2
Finally, in 1991 Barut published a considerably streamlined analysis of his central assertion regarding spin. Here the spin of an entity is taken to be specified by a vector S(θ, φ) giving its direction in space, so that the expectation of the correlation E(A, B) is the average over the randomly distributed angles (θ, φ) for all the elements of the ensemble, namely:

E(A, B) = (1/4π) ∫ A(a, S(θ, φ)) B(b, S(θ, φ)) sin θ dθ dφ = −a · b,
where a and b are the orientations of the magnetic field interacting with spin, or the axes of polarizers filtering light pulses. Insofar as this classical result is identical to the quantum version, Barut’s model constitutes a counterexample to claims that the observed correlations can be obtained only under the effects of either or both irreality and nonlocality.
2.2 Mizrahi and Moussa
These authors independently extended the analysis of the basic Bell test by means of a simulation of the classical rendition of the experiment. They proposed an actual mechanical and optical setup to realize the conditions envisioned for the basic, two-wing Bell Inequality test. It consists of a randomly flashing light in a rotating tube, the ends of which are equipped with two polarizer filters oriented such that their axes have a fixed angular displacement. The light pulses from the flashes then exit the tube on both sides and pass through polarizer filters with axes a, b fixed in the laboratory frame, after which pulse intensity is measured and recorded for each. The randomness of the flashes with respect to the rotation of the tube ensures that the polarization orientation of any single pulse is random. The fixed displacement of the axes of the polarizers mounted on the ends of the tube ensures that the relationship between the polarization axes of the two pulses is nevertheless fixed. This structure constitutes the essence of the natural phenomena under study. (See Figure 1.)
The laboratory setup to acquire the data for computing a Bell Inequality consists of two fixed polarizers with photodetectors, one set at each end with axis a or b. The photodetectors then register the intensity of the macroscopic light pulses (obviously not single photons), which in the simulation is deduced according to Malus’ Law.
The simulation results parallel those obtained in actual optical realizations of the basic Bell test. The fact that this manifestly classical arrangement leads to a violation of a Bell Inequality must mean either that classical optics also is irreal or nonlocal, or that the significance of Bell’s analysis is misinterpreted, even invalid.
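The essence of such a simulation can be sketched in a few lines. The following is a minimal illustration, not the authors’ actual code: it assumes Malus-law intensities at the two fixed analyzers, a uniformly random tube orientation, and the Pearson correlation coefficient of the recorded intensities as the coincidence measure. The analyzer angles chosen here are the standard CHSH-optimal ones.

```python
import math
import random

def correlation(a, b, delta=0.0, trials=100_000, rng=random.Random(1)):
    """Pearson correlation of Malus-law intensities for analyzer axes a, b.
    The 'tube' polarization theta is uniformly random; the two exiting
    pulses carry polarizations theta and theta + delta (fixed offset)."""
    ia, ib = [], []
    for _ in range(trials):
        theta = rng.uniform(0.0, math.pi)
        ia.append(math.cos(theta - a) ** 2)           # Malus' Law at station A
        ib.append(math.cos(theta + delta - b) ** 2)   # Malus' Law at station B
    ma, mb = sum(ia) / trials, sum(ib) / trials
    cov = sum((x - ma) * (y - mb) for x, y in zip(ia, ib)) / trials
    va = sum((x - ma) ** 2 for x in ia) / trials
    vb = sum((y - mb) ** 2 for y in ib) / trials
    return cov / math.sqrt(va * vb)

# CHSH combination at the angles that maximize the quantum prediction.
a, ap = 0.0, math.pi / 4
b, bp = math.pi / 8, 3 * math.pi / 8
S = (abs(correlation(a, b) - correlation(a, bp))
     + abs(correlation(ap, b) + correlation(ap, bp)))
print(S)  # close to 2*sqrt(2) ~ 2.83: a "violation" from classical optics
```

Analytically, the correlation of the two intensity streams is cos(2(a − b)), so the CHSH sum approaches 2√2, exactly the quantum prediction, with nothing but classical light and a common cause in play.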
3 The underlying defect
3.1 Edwin Jaynes
The models described above contradict the conclusion of Bell’s analysis. The natural question is: how can a seemingly rigorous deduction be challenged? What, if any, error is involved?
Historically, it seems that the otherworldly consequences of Bell’s analysis were in too great a conflict with otherwise intuitively logical and, without other exception, empirically verified principles to be accepted by everybody, so that in spite of sociological forces of conformity, some researchers sought non-fantastical explanations. Perhaps the first to do so, or at least to publish his opinion, was Edwin Jaynes. In the 1980s Jaynes was engaged in an extensive study of Bayesian methods within the whole of Physics and was likely highly sensitized to the intricacies of probabilistic reasoning. With this competence he quickly spotted the fundamental mistake in Bell’s argumentation and made it an example of the misapplication of probability theory in the preface to the proceedings of a conference held in 1988 on Bayesian methods. Therein, without any elaboration or even a single formula, he simply pointed out that Bell had misapplied the concept of a conditional probability. The missing elaboration was subsequently published by Perdijon in 1991.
3.2 J. Perdijon

In 1991 the French mining engineer J. Perdijon independently proposed the model described above, but applied to the optical version concerning the relationship between different states of polarization. His analysis is based clearly and explicitly on the observation that Bell’s expression for joint expectations, i.e.:

P(a, b) = ∫ dλ ρ(λ) ρA(λ, a) ρB(λ, b),
silently presumes that the detections in the two output channels, i.e., “photon detections,” are statistically independent or uncorrelated—contrary to a fundamental, hypothetical input into the analysis. Perdijon notes that for correlated events this formula should be expressed by

P(a, b) = ∫ dλ ρ(λ) ρA(λ, a|b) ρB(λ, b),
where ρA(λ, a|b) is the conditional probability that a detection is made at station A given that a detection was already made at station B. Such conditional probabilities do not imply, as mistakenly taken by Bell, that there is a causative interrelationship between the polarizers with settings a and b, but rather that the input signals differ in their characteristics as instilled at a “common cause” on the intersection of the past light cones of the signals.3 Thereafter, as the signals pass through detection stations, they are registered or “seen” if their characteristics correspond to the preset parameters a, b of the detection apparatus. This is in accord with the conventional understanding of the application of probability theory to correlated events.
When this consideration for the most elementary optical version of experimental tests of Bell’s analysis is correctly taken into account, the derivation of a Bell Inequality does not go through. Thus, conclusions drawn from the empirical violation of a Bell Inequality are rendered invalid. These results can be extended straightforwardly to more complex coincidence experiments involving more than two channels [11, 12].
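The point can be illustrated numerically with a toy common-cause model (an assumption for illustration, not Perdijon’s own calculation): each pair of signals shares a polarization λ fixed at the source, and each station registers a detection with a probability that depends only on λ and its local setting. The joint detection rate then differs from the product of the marginal rates, so ρA(λ, a|b) ≠ ρA(λ, a), even though the stations never interact.

```python
import math
import random

rng = random.Random(7)

def joint_stats(a, b, trials=200_000):
    """Each pair shares a common-cause polarization lambda set at the source.
    Detection at each station is an independent Bernoulli trial whose
    probability is fixed by lambda and the local setting alone (Malus' Law)."""
    n_a = n_b = n_ab = 0
    for _ in range(trials):
        lam = rng.uniform(0.0, math.pi)                  # common cause
        det_a = rng.random() < math.cos(lam - a) ** 2    # local decision at A
        det_b = rng.random() < math.cos(lam - b) ** 2    # local decision at B
        n_a += det_a
        n_b += det_b
        n_ab += det_a and det_b
    return n_a / trials, n_b / trials, n_ab / trials

p_a, p_b, p_ab = joint_stats(0.0, 0.0)
print(p_a * p_b)   # ~0.25: what statistical independence would predict
print(p_ab)        # ~0.375: the actual joint rate
print(p_ab / p_b)  # ~0.75: conditional probability P(A|B), not equal to P(A)
```

No signal passes between the stations; the correlation is instilled entirely at the source, yet the factored (independent) formula underestimates the coincidence rate.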
3.3 An explicit demonstration
The foundation of a Bell Inequality is the definition of a coincidence probability (or wave function) for correlated events. The version of this expression used by Bell is the following:

P(a, b) = ∫ dλ ρ(λ) A(a, λ) B(b, λ).   (1)
Consider now the difference of two such coincident probabilities:

P(a, b) − P(a, b′) = ∫ dλ ρ(λ) [A(a, λ)B(b, λ) − A(a, λ)B(b′, λ)].   (2)
Here, zero in the form:

∫ dλ ρ(λ) [A(a, λ)B(b, λ)A(a′, λ)B(b′, λ) − A(a, λ)B(b′, λ)A(a′, λ)B(b, λ)] = 0   (3)

is added to Eq. (2) above, to get:

P(a, b) − P(a, b′) = ∫ dλ ρ(λ) A(a, λ)B(b, λ)[1 ± A(a′, λ)B(b′, λ)] − ∫ dλ ρ(λ) A(a, λ)B(b′, λ)[1 ± A(a′, λ)B(b, λ)].   (4)
Then, using |A(a, λ)B(b, λ)| ≤ 1, one can write:

|P(a, b) − P(a, b′)| + |P(a′, b′) + P(a′, b)| ≤ 2,   (5)
which is one form of the celebrated “Bell inequalities.”
Now let us repeat these manipulations, however, starting from the explicit form of Eq. (1), namely:

P(a, b) = ∫ dλ ρ(λ) A(a|λ) B(b|a, λ).   (8)
Again, the difference of two correlated probabilities:

P(a, b) − P(a, b′) = ∫ dλ ρ(λ) [A(a|λ)B(b|a, λ) − A(a|λ)B(b′|a, λ)],   (9)
and now a more explicit form for what should be an expression equaling zero:

∫ dλ ρ(λ) [A(a|λ)B(b|a, λ)A(a′|λ)B(b′|a, λ) − A(a|λ)B(b′|a, λ)A(a′|λ)B(b|a, λ)] = 0;   (10)
using, as above, |A(·)B(·)| ≤ 1, gives:

|P(a, b) − P(a, b′)| ≤ 2 ± [∫ dλ ρ(λ) A(a′|λ)B(b′|a, λ) + ∫ dλ ρ(λ) A(a′|λ)B(b|a, λ)].   (12)
So, here we arrive at the crux of the matter, insofar as the analog of Eq. (5) cannot follow, because the term ∫ dλ ρ(λ) A(a′|λ)B(b|a, λ) does not equal P(a′, b). In fact, it is undefined, or nonsense, as it is the product of the absolute probability A(a′|λ) times the conditional probability B(b|a, λ), which is conditioned not on a′ but on a, thereby rendering the product meaningless.
The final, general conclusion is that this Bell inequality is invalid; deductions from it are void.4
Exceptionally, of course, when the two detections are uncorrelated, then B(b|a, λ) = B(b|a′, λ), and Bell’s result is valid.
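This dichotomy can be checked numerically. The sketch below is a hypothetical check with arbitrarily chosen outcome tables: outcome functions A(a, λ), B(b, λ) = ±1 that do not depend on the distant pairing never drive the CHSH combination above 2, whereas correlations whose B-outcomes are conditioned on the paired A-setting are not constrained by that bound.

```python
import random

rng = random.Random(42)
SETTINGS_A = ("a", "a'")
SETTINGS_B = ("b", "b'")

def chsh(E):
    """CHSH combination built from four correlations E[(setting_A, setting_B)]."""
    return abs(E[("a", "b")] - E[("a", "b'")]) + abs(E[("a'", "b")] + E[("a'", "b'")])

# Case 1: outcomes A(a, lam), B(b, lam) = +/-1, independent of the distant
# setting.  However the random outcome tables are chosen, the CHSH
# combination of the resulting correlations never exceeds 2.
max_s = 0.0
for _ in range(2000):
    lambdas = range(8)  # a small discrete set of hidden-variable values
    A = {(sa, l): rng.choice((-1, 1)) for sa in SETTINGS_A for l in lambdas}
    B = {(sb, l): rng.choice((-1, 1)) for sb in SETTINGS_B for l in lambdas}
    E = {(sa, sb): sum(A[(sa, l)] * B[(sb, l)] for l in lambdas) / len(lambdas)
         for sa in SETTINGS_A for sb in SETTINGS_B}
    max_s = max(max_s, chsh(E))
print(max_s)  # never exceeds 2

# Case 2: if the B-outcomes may depend on which A-setting they are paired
# with (the conditional B(b|a, lam) discussed above), the four correlations
# are no longer jointly constrained; e.g., perfect correlation in three
# pairings and perfect anti-correlation in the fourth:
E_cond = {("a", "b"): 1.0, ("a", "b'"): -1.0, ("a'", "b"): 1.0, ("a'", "b'"): 1.0}
print(chsh(E_cond))  # 4.0: no bound of 2 follows without independence
```

The design point is that the bound of 2 is a property of the factored, setting-independent form alone; once the factorization is dropped, nothing in probability theory forbids any value up to 4.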
4 Mathematical technicalities
4.1 Quantized and non quantized spaces
There is an intrinsic characteristic related to spin and electromagnetic polarization that is often overlooked but of fundamental significance. It is that phenomena for which the mathematical rendition yields an orbital solution manifold with group structure captured by SU(2) are fundamentally non quantum. This follows inexorably from several viewpoints. One is the fact that SU(2) is homomorphic to SO(3), i.e., the group of rotations in longitude and latitude on a sphere. The non-commutativity of the generators of SO(3) is obviously geometric in nature. It has nothing to do with quantum mechanical structure, because it is not a consequence of Heisenberg Uncertainty. This is true even though factors of ħ appear; that factor merely scales the radius of the sphere upon which the displacements take place. SU(2) is the group of bi-vector transformations of the 2-D planes in 3-D space orthogonal to the vectors (generators) associated with displacements in longitude and latitude. While it is less amenable to visualization than its homomorph, it is clear that the non-commutative geometric structure of the planes or bi-vectors, like the great-circle orbits on a spherical surface, is just a matter of geometry. Quantized spaces, where the non-commutativity results from Heisenberg Uncertainty, comprise just two cases, namely phase space (q, p) and quadrature space (the phase and amplitude of wave complexes, (ϕ, A)). An obvious consequence of these facts is that experiments conducted on the polarization of electromagnetic signals (a structure first introduced by Stokes 70 years before Quantum Theory was envisioned, and having no relationship whatsoever to Heisenberg Uncertainty) cannot be employed for the exploration of implicit consequences of Quantum Mechanics.
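The geometric, non-quantum character of this non-commutativity is easy to exhibit: composing two ordinary spatial rotations in opposite orders gives different results, with no ħ anywhere in sight. A minimal sketch:

```python
import math

def rot(axis, t):
    """3x3 rotation matrix about the x- or z-axis: pure geometry, no hbar."""
    c, s = math.cos(t), math.sin(t)
    if axis == "x":
        return [[1, 0, 0], [0, c, -s], [0, s, c]]
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # axis == "z"

def matmul(m, n):
    """Plain 3x3 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

q = math.pi / 2
xz = matmul(rot("x", q), rot("z", q))  # rotate about z first, then x
zx = matmul(rot("z", q), rot("x", q))  # rotate about x first, then z

# The two orderings disagree: SO(3) is non-commutative for purely
# geometric reasons, exactly as for great-circle displacements on a sphere.
diff = max(abs(xz[i][j] - zx[i][j]) for i in range(3) for j in range(3))
print(diff)  # 1.0 (up to rounding): the order of rotations matters
```

Nothing in this computation invokes quantization; the non-commutativity is a property of orientation in ordinary 3-D space.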
This understanding of the fundamental character of this (topological) space, i.e., its non quantum status, is in full accord with all experimental realizations of investigations using Bell Inequalities to plumb Nature as revealed by Quantum Mechanics. In all optical experiments the a’s and b’s are experimenter-chosen polarizer axes, which makes the whole setup sensible only if the λ’s are the polarization states of the photons (or electromagnetic pulses) passing through the measuring stations. This obviates the oft-encountered theoretical discussion in which it is disputed whether the λ’s are correlated with the a’s and b’s. The whole point of measurement is to exploit a correlation between some property that is not accessible to human perception (because it is too small, outside the ambit of human senses, etc.), here λ, and some quantity that is accessible, a meter reading, say. Since all the physical processes in the selected venue, i.e., those governed by SU(2), are non quantum in the first place, all implications for the existence of preternatural “quantum” phenomena are moot; the physical character of all involved variables as prequantum entities is determined by the relevant physics.
These mathematical considerations are substantially reinforced by the fact that the instruments and devices employed in optical experiments are in fact capable only of making polarization determinations on electromagnetic pulses, whether such pulses correspond to single photons as imagined or not. The only means to introduce nonlocality or irreality is by hypothesizing that the polarization state of the pulses (photons) is determined by von Neumann’s “Projection,” or collapse of the wave function upon observation, in this case by interaction with the polarizers in the measuring stations. However, there is no inexorable reason to reject the non quantum account of the relevant phenomena, specifically, prior causes.
4.2 Representative vice ontological states
Beyond the inadequate employment of formulas involving conditional probabilities that is the focus of this line of critical analysis of Bell’s Theorem, there is an ancillary issue introduced by the singlet state:

|ψ⟩ = (1/√2)(|+⟩A|−⟩B − |−⟩A|+⟩B).
If this set of symbols is understood to represent a single ontological entity, then, as a composition of mutually exclusive components, it is a logical abomination. Nevertheless, in the literature explicating Quantum Mechanics, it is often represented as pertaining to a single system or entity. This combination of symbols, however, turns out to be coincidentally and vitally convenient for the calculation of a coincidence coefficient as applicable to the experiment in Figure 1, i.e.:

E(a, b) = (⟨IA(a) IB(b)⟩ − ⟨IA(a)⟩⟨IB(b)⟩) / (σ(IA) σ(IB));
one sees from this formula that the data streams are to be normalized and have zero mean. In the quantum formalism, both the normalization and the zero mean are built into the definition of the singlet state, so that the calculation of the correlation coefficient conforms to the calculation of an expectation as prescribed by the Born interpretation of wave functions. Thus, in this respect and for that structure governed by SU(2), the quantum formalism merely redresses non quantum notation.5
In any case, the identification of the expression for the singlet state (and many other similar “quantum” expressions) cannot irrefutably be associated with single ontological entities. Both theory and experiment pertain to ensembles of similar entities; in the case of particles with spin, for example, the ensemble may be distributed randomly over the surface of a sphere.
All of the components of the critical analysis presented above are fundamental principles known to virtually any competent practitioner in optics. Thus, the question arises: just how could what has been called “la plus grande méprise de l’histoire de la physique” (the greatest misunderstanding in the history of physics) persist for over 50 years and become ensconced as professional dogma? The response draws on yet another feature of Quantum Theory of an equally mystical character: the “Projection Hypothesis,” according to which all material entities before measurement are completely described at their core by a wave function consisting of a “superposition” of multiple, mutually exclusive states, one of which is held to be precipitated ultimately by the act of observation. Although von Neumann is credited with this idea by virtue of having presented it in his book on the mathematical foundations of Quantum Theory, less rigorous discussions and disputations on the interpretation of Schrödinger’s wave functions involving similar notions can be found in the historical record. In any case, Bell himself in all his presentations clearly considered that the wave functions of the “entangled” daughter particles in two-wing variations of envisioned tests were to be ‘realized’ (i.e., converted to observable or “real” non-entangled entities) by the act of measurement at the detection stations A and B. Nonlocality (superluminal intercourse of some sort) should occur, he took it, in accord with von Neumann’s Projection Hypothesis applied to separated but formerly entangled subsystems, as a consequence of measurement (which implies intervention by sentient beings). “Projection” is considered to entail ‘realizing’ all space-like separated subsystems instantaneously, even when only one is materially engaged; in this regard it violates Einstein’s Principle of Causality, that no effect can have a cause outside its past light cone.
It was the attempt to accommodate the Projection Hypothesis that induced in Bell the erroneous notion that Quantum Mechanics imposes some kind of instantaneous ‘realization’ of an unambiguous state (rather than the superposition of mutually exclusive options) of the spin direction of individual electrons passing through a Stern-Gerlach setup. The fact is, however, that probability theory, in particular the use of conditional probabilities correctly employed, has nothing to say about the origin of correlations. The mathematical structure itself would accommodate instantaneous, nonlocal phenomena, were they to exist, without alteration. The source of the issue is not one of Probability Theory, but strictly one of interpreting Quantum Theory.
From commentary accompanying early research, one can get the impression that the Projection Hypothesis was introduced in order to accommodate the fact that wave functions, even though interpreted as probability densities, seem also to have physical substance, as they are seen to diffract at physical slits. Strictly abstract expressions of knowledge (i.e., epistemological entities) do not also interact with concrete material (i.e., ontological) entities. Nevertheless, wave functions for single entities cannot be taken as empirically verified, so that imputing individual (vice ensemble) physical identities to them is not fully justified either. In turn, this complexity led to the introduction of yet another “spooky” notion: complementarity. Here again, weirdness is not the objection, but logical contradiction.6
Arguably, the tolerance of, as well as the public appetite for, mystical or preternatural “scientific” theorizing is best explained by the Forman Thesis. According to Forman’s historical analysis, the psychological consequence of the unexpected and sudden defeat in WWI on the post-war German Weimar Republic, where the center of the development of Quantum Mechanics lay, was such as to foster a general, widespread loss of confidence in rationality and in the sober consideration of life’s experiences. Nowadays, in retrospect, it seems that this thesis has great merit, even if it cannot be taken as the predominant factor. At a minimum, it accounts for the psycho-social environment within which Bell’s generation of physicists (certainly its mentors) was educated, and possibly was also predisposed to “open minded” tolerance of ideas actually deserving deep skepticism.
In conclusion, the analysis presented herein supports the assertion that Bell’s analysis does not support the contention that Quantum Theory cannot in principle be extended by means of additional local, real variables. Einstein’s life-long conviction that an interpretation of quantum theory free of the ambiguities he criticized would ultimately be found is thus seen to deserve respect, and to serve as guidance for the continued development of the understanding of the material world.
Clauser J.F., Early History of Bell’s Theorem, In: Bertlmann R.A., Zeilinger A. (eds.), Quantum [Un]speakables: From Bell to Quantum Information, Springer, Berlin, 2002.
Christian J., Disproof of Bell’s Theorem, Brown Walker Press, Boca Raton, 2014.
Barut A.O., Meystre P., A Classical Model of EPR Experiments with Quantum Mechanical Correlations and Bell Inequalities, Phys. Lett., 1984, 105A(9), 458.
Barut A.O., Classical and Quantum EPR-spin Correlations in the Triplet State, IAEA (Internal Report), 1986, IC/86/367.
Barut A.O., Explicit Calculations with a Hidden-Variables Spin Model, In: Selleri F. (ed.), Quantum Mechanics Versus Local Realism, Plenum Press, New York, 1988, 433.
Jaynes E.T., Clearing up Mysteries – The Original Goal, In: Skilling J. (ed.), Maximum Entropy and Bayesian Methods, Kluwer Academic, Dordrecht, 1989, 1-27.
Perdijon J., Localité des comptages et dépendance des coïncidences dans les tests E.P.R. avec polariseurs, Ann. Fond. Louis de Broglie, 1991, 16(3), 281-286.
Kracklauer A.F., Is Entanglement Always Entangled?, J. Opt. B (Semiclassical and Quantum), 2002, 4, S121-S126.
Sica L., Bell’s Inequalities I: An Explanation for their Experimental Violation, Opt. Comm., 1999, 170, 55-60; Bell’s Inequalities II: Logical Loophole in their Interpretation, Opt. Comm., 1999, 170, 60-66.
Canal-Frau D., La «théorie» de Bell, est-elle la plus grande méprise de l’histoire de la Physique?, Ann. Fond. Louis de Broglie, 1991, 16(2), 231-238.
Kracklauer A.F., Pilot Wave Theory: A Mechanism and Test, Found. Phys. Lett., 1999, 12(5), 442-453.
Forman P., Weimar Culture, Causality and Quantum Theory, Historical Studies in the Physical Sciences, 1971, 3, 1-115.
Einstein A., Einleitende Bemerkungen über Grundbegriffe, In: George A. (ed.), Louis de Broglie: physicien et penseur, Éd. Albin Michel, Paris, 1953.
Currently the term “theorem” most often denotes a valid syllogism in a mathematical, logical structure. Such a structure comprises a select set of primitive elements and a set of self-consistent axioms pertaining to the interrelationships of the primitive elements. With these inputs in hand, further truths regarding interrelationships among the primitive elements can be proven as theorems. Physical sciences are, in this light, the reverse of (mathematical) logic in that they are, essentially by working backwards, an effort to identify the primitive elements (particles, whatever) and axioms (basic theories). Given such an understanding, no statement in a Physics Theory can be denoted a (logical) theorem. Calling it such anyway facilitates unjustifiably imputing certainty to its rectitude. In fact it was John Clauser, not John Bell, who coined this designation. See footnote 14 in .
Barut’s claim here is based on the tacit assumption that, per current conventions, the quantum mechanical description of spin phenomena is in fact nonclassical or fundamentally quantum in nature. As discussed below, this claim is disputable.
This mistake may have been facilitated in Bell’s mind by thinking of these probability densities as expressed in Quantum Theory, where they tacitly have been given an ontological interpretation. Nevertheless, the Born Interpretation for wave functions should be considered to imply that these densities have precisely the same formulation with regard to dependent variables as do expressions used in Probability and Statistics.
There are at least two other lines of critical analysis of Bell’s reasoning reaching the same conclusion. One is based on the observation that the symbols A and B in the above formulas, in actual application to experimental data, do not stand for single terms, but for sequences. This raises a difficulty in factoring these terms, insofar as such manipulations remain consistent only when the sequences are identical and not just statistically indistinguishable. Since these sequences result from four separate experiments, but are being treated as if from a single experiment, they cannot be factored and remultiplied self-consistently, thereby rendering the final result, a Bell Inequality, self-contradictory [13, 14]. Likewise, criticism based on the suspicion that Bell inadvertently erred by failing to consider time-variable correlations leads to the conclusion that Bell Inequalities are fundamentally misunderstood. Based on the above considerations, this suspicion is correct, as time-dependent correlations are a subset of all correlations.
In connection with data collection and reduction for Bell test experiments there is a very serious issue that arises with regard to detection efficiency. The above formula for a correlation coefficient is most directly computed when the individual data points, here the I(·)’s, are current intensities resulting in different values in the two channels. However, in low-intensity experiments, at the so-called “single photon” level, the raw datum is either a detection or a non-detection. To convert such data to intensities as needed to calculate correlation coefficients, multiple repetitions at each setting combination must be made to enable the computation of ratios of detections to detections+non-detections, etc. Clearly, if in an experiment the detectors have low efficiency, such ratios cease to be representative of the population of detected photons (or pulses), rendering possible deductions from the experiments inconclusive. This is a serious issue for those concerned with loopholes, but not so for considerations on the validity of the conceptual basis for the derivation of Bell Inequalities. If experiments with perfect detectors are conceptually defective, then all such experiments, regardless of detector efficiency, are pointless.
Wave-particle duality is another preternatural ‘quantum’ property, and is logically independent of entanglement. For a possible resolution of this conundrum, see .
Published Online: 2017-12-29