On the Method of Theoretical Physics
Albert Einstein, 1933
The Herbert Spencer Lecture 
Pure logical thought cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by purely logical means are completely empty as regards reality.
It is now 100 years since Sir Arthur Stanley Eddington, the first relativistic astrophysicist, led the 1919 expedition to photograph the sun during a total solar eclipse in order to find out whether photons from distant stars were deflected (bent) when passing the sun, as predicted by Einstein’s theory of general relativity (GR). The theory was confirmed (within the accepted experimental accuracy of the time), and as a result, Einstein became an instant celebrity in the popular press. Since then, GR has passed all tests with flying colours, ruling out alternative theories and demonstrating, up to today, perfect agreement with all experimental evidence (cf. string theory).
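The deflection Eddington’s expedition tested follows from GR’s formula δ = 4GM/(c²R) for light grazing the solar limb. A minimal numerical sketch, using standard solar values:

```python
# Sketch: GR's predicted deflection of starlight grazing the sun,
# delta = 4 G M / (c^2 R) -- the value tested by the 1919 eclipse expedition.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
R_sun = 6.957e8     # solar radius, m

delta_rad = 4 * G * M_sun / (c**2 * R_sun)
delta_arcsec = delta_rad * 206265  # radians -> arcseconds

print(f"{delta_arcsec:.2f} arcsec")  # ~1.75 arcsec, twice the Newtonian value
```

The factor-of-two excess over the purely Newtonian prediction is what the eclipse photographs confirmed.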
We deliberately start this article with the above citation from Einstein, which serves as a roadmap for the novel physical concepts presented in this article. Einstein’s remarks also convey a stern warning. Expressed in simpler terms, Einstein reminds us that the description of physical reality is the ultimate goal of physics, which in the end is a purely empirical science. Theoretical constructs are certainly necessary but are subordinate to experiment and have no life of their own unless confirmed by proper data. Approval by experimental data is the yardstick for any theoretical model. If this cannot be achieved, then it is not a physical theory, and those ideas should be transferred to the mathematics or philosophy department. Regarding the present situation in theoretical physics, as discussed in Part I, the scientific community seems to have forgotten – at least to some extent – this admonition of Einstein.1 A recent, highly informative and expertly written account (from a physics point of view) of Large Hadron Collider (LHC) data and the state of Grand Unification Theories (GUT) is presented by S. Hossenfelder in her book Lost in Math, whose conclusion can be summarised in one word: failure. In Chapter 8 of her book, a brief account of string theory is given, along with a long list of its problems and attempts to escape from experimental evidence. Eventually, the author concludes: “It (string theory), does not, however, describe our universe.” Hardly a success story after more than four decades.
In Section 2, the importance and challenge of numerous experiments from particle physics and astrophysical observations are presented, as well as measurements of gravitational phenomena. The LHC data are also presented in Section 3 because of their importance. Based on these empirical findings, alternative physical concepts termed matter flavour and hypercomplex gravity are introduced in nonmathematical form to emphasise their physical meaning.
In Section 3, the latest LHC data, with emphasis on the Conseil Européen pour la Recherche Nucléaire (CERN) exotic search program, both from Run 1 (8 TeV) and Run 2 (13 TeV), will be employed to evaluate their impact on the validity of physical theories based on the existence of extra space dimensions. The key finding is that these data impose extremely tight constraints, which may be employed to question the whole theoretical approach, in particular when combined with other experimental results discussed in Section 2. It seems, therefore, justified to search for alternative explanations.
In the remaining part of this article, the novel concept of extra number systems [subsumed under the name extended Heim theory (EHT2); see Part I] is introduced, and its consequences regarding the group structure of particle physics will be considered. Of prime importance is the prediction of additional gravitational bosons that would allow the generation of extreme gravity fields outside GR, as will be outlined in detail in Section 5. With regard to cosmology, a tentative explanation of the origin of dark energy (DE) is given, where the picture of a hot Big Bang is questioned by the idea of a Quantised Bang (Section 6).
In Section 4, we start with Einstein’s advice by presenting in detail an assembly of fundamental physical principles, termed cosmic physical principles, meant to govern all physical processes. This set of 12 principles, of which the duality principle is the most important, will be presented one by one, and their impact on the various physical phenomena will be discussed as the basis for a different (but not too different) model of particle physics and cosmology. It turns out that the effects stemming from these simple-sounding principles are amazing, leading to the formulation of nine so-called no-go theorems. Moreover, these principles require the introduction of novel physical concepts. The most far-reaching consequence is the replacement of extra spatial dimensions by extra number systems. The physical reality of the postulated extra spatial dimensions appears no longer tenable, because the range of Newton’s law of gravitation was recently found experimentally to extend down to the atomic scale (10−10 m; details in the next section). A recent discussion of the modification of Newton’s law in accordance with GR for cosmological distances, but not for small distances, is given by Eingorn et al., demonstrating a Yukawa-type exponential screening of the gravitational potential at distances >λ [λ being due to the existence of cold dark matter (DM)], termed cosmic screening by the authors, who found the cosmic background to be responsible for this type of screening. Their results are consistent with the largest known structure in the universe, the Great GRB Wall (or Hercules-Corona Borealis Great Wall), with a size of the order of 3000 Mpc. It should be mentioned that these results have no impact on modified Newtonian dynamics (MOND) (see below).
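The Yukawa-type screening mentioned above replaces the bare Newtonian 1/r potential by one with an exponential cutoff beyond the screening length λ. A minimal sketch of the suppression it produces (λ is an arbitrary unit here, not a fitted cosmological value):

```python
import math

def newtonian_potential(r, GM=1.0):
    # Magnitude of the unscreened 1/r potential
    return GM / r

def screened_potential(r, lam, GM=1.0):
    # Yukawa-type "cosmic screening": exponential cutoff beyond lambda
    return GM * math.exp(-r / lam) / r

lam = 1.0  # screening length (arbitrary units)
for r in (0.1, 1.0, 10.0):
    ratio = screened_potential(r, lam) / newtonian_potential(r)
    print(f"r/lambda = {r:5.1f}  suppression factor = {ratio:.3e}")
```

Inside the screening length the two potentials nearly coincide, which is why such screening leaves laboratory and galactic-scale tests of Newton’s law untouched while modifying the potential at cosmological distances.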
In Section 5, the most striking consequences of the novel concept of an extra system of numbers with regard to allowable physical symmetries are presented, displacing the concept of extra dimensions, which has most likely been experimentally disproved. A novel group, termed the cosmic group, is introduced to cover all physical phenomena, and its effects on both particle physics and spacetime are described. The most significant outcome is the prediction of two different types of gravity, represented by the groups SU(2) × SU(2), resulting in the postulation of a total of six gravitational bosons. The first SU(2) group gives a description of Einstein’s GR enhanced by the additional interaction of matter with the DE field. The associated gravitational fields are termed cosmological gravitational fields and are of purely geometrical origin. The effect, influenced by the presence of DE, is very weak and results from the action of the distribution of matter on spacetime curvature. GR also describes the feedback of spacetime curvature on the distribution of matter. The observed weakness of cosmological gravitational fields demonstrates the rigidity of the space lattice, allowing the universe to assume an enormous spatial extension.
The gravitational bosons of the second SU(2) group mediate much stronger gravitational fields that are proposed to result from the interaction with electromagnetism. According to the duality principle, the three cosmological and the three hypercomplex-gravity fields are caused by two different sources, namely, pure geometry (spacetime curvature) and charge (in the form of electric charge and mass, represented by particles of hypercomplex mass), and thus should not be unifiable. A total of four groups derived from hypercomplex numbers (quaternions) q ∈ ℍ can be found that describe the physical properties of matter in general, i.e. both bosons and fermions, as well as the external spacetime. The external spacetime is complemented by an (internal) gauge space, termed Heim space. An important outcome is that there should exist four families of leptons and quarks (Fig. 1), where the fourth family of particles possesses negative masses and is assumed to represent DM, living in dual spacetime (see below), and thus cannot be observed directly in our Minkowski spacetime. A new concept arises, denoted matter flavour (analogous to quark flavour), which is derived from the hypercomplex group structure that incorporates both dark and ordinary matter (OM) as well as the hypercomplex masses of the three bosons representing the postulated hypercomplex-gravity fields.
In Section 6, several of the so-called cosmological riddles are addressed (to be revisited in Part III), reconsidering the role of the Einstein field equations in the formation and evolution of the universe and critically dealing with the presently favoured idea of a Big Bang as well as discussing the origin of DE and DM, as they result from the concept of extra number systems.
In Section 8, we present a preliminary discussion of propellantless propulsion (propulsion without fuel), where the proposed gravitational spin 1 bosons are introduced. It will be argued that the associated hypercomplex-gravity fields can be generated in order to provide the enabling acceleration mechanism for propellantless propulsion – provided that a suitable material composition is selected. A brief description of other currently considered propellantless propulsion concepts is given, namely, the electromagnetic (EM) drive, the Woodward effect, Mach’s principle, and Becker’s electrodynamics, demonstrating that these systems/concepts cannot function in practice.
In Section 10, the novel physical concepts of EHT are summarised and discussed.
The final section provides an outlook on the repercussions of the novel physical concepts with regard to particle physics, cosmology, and novel gravitational technology as well as novel schemes for energy generation based on the existence of hypercomplex-gravity fields.
The above discussion should have made clear that current physics is far from complete; instead, there are severe challenges that are still to be resolved. The so-called advanced physical theories, developed over the last five decades, have not provided the tools to successfully tackle these puzzles. Therefore, in Part I and also in this article, alternative ideas are presented in an attempt to contribute to an explanation, at least to some extent, of the basic contradictions posed by recent experiments and astrophysical observations.
…behind all the discernible laws and connections,
there remains something subtle, intangible and inexplicable.
2 Mysteries in Physics and the Universe
In the following, we present an attempt to construct an alternative physical picture to resolve the riddles posed by recent experiments that are either contradictory or still unexplained. In particular, this concerns the null results of the LHC in seeing any supersymmetric particles, the different lifetimes of the neutron, the varying size of the proton, the discrepancies in the measured magnitude of the gravitational constant, the enigma of missing DM particles, the unexplained existence of DE, and a possible spatial variation of the fine-structure constant α. In cosmology, there are fundamental questions concerning the Big Bang describing the origin of the universe, the long-standing problem of the deviation from Newton’s gravitational law for the rotational velocities of stars in orbit about their galactic centre, the measured deviations of Newton’s gravitational constant GN, and, most recently, small differences observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the propagation speeds of gravitational waves and photons. These problems are severe, demonstrating a lack of understanding at the most fundamental level of physics.
The predictions of presently favoured advanced physical concepts, i.e. supersymmetry (SUSY) and superstring theories, when compared to the most recent experimental results, are even more in conflict (no superpartners, no unification, no naturalness,3 no DM particles) with observation than they were in early 2017, when Part I of this article was published (Fig. 2). Supersymmetry predictions are in contradiction not only to experiments from atomic and particle physics, but also to astrophysics as well as gravitational measurements. The most recent results (Winter 2018) from the LHC ATLAS collaboration have not found any excess above the expected standard model (SM) background after running for about 3 years at 13 TeV.4 That is, none of the flurry of predicted elementary particles (neither from the ATLAS nor from the CMS experiment) has been revealed up to particle masses of 1.62 and 2.4 TeV/c2 for spin-0 resonances.5 These findings are confirmed by the most recent ACME data (October 22, 2018) (the table-top experiment has been operational since 2014; more recent results can be found at https://www.nature.com/articles/s41586-018-0599-8) that constrain the electron dipole moment, EDM, to be smaller than about 1.1 × 10−29 e·cm, where e denotes the electron charge. As stated in pp. 33–46, the electron seems to be perfectly round, which means that the new types of particles, assumed to cause a deviation from the spherical electron orbits about atomic nuclei, as postulated by numerous theories, do not seem to exist. A direct consequence of these measurements is the quasi-confirmation of the SM of particle physics, which predicts an EDM of about 10−38 e·cm at the four-loop level, and the ruling out of the predictions of supersymmetric theories (see, for instance, figure 4, p. 39 in ), meaning the mass of the undiscovered heavy particles has shifted above the 10 TeV/c2 level; thus, SUSY can no longer contribute to the solution of the hierarchy problem and again had to be moved out of experimental reach, a process going on for more than four decades, resulting in a substantial loss of scientific credibility. Nevertheless, theoreticians have been quick to construct a substantial number of models consistent with these data, predicting particle masses in the range between 3 TeV/c2 and 109 TeV/c2, hardly an informative result. However, the SM of particle physics cannot be the final answer, and there is a need to go beyond the SM in order to address three mysteries, namely, the extreme fine-tuning of the Higgs boson mass, the long-standing problem of the matter-antimatter asymmetry, and the existence of DM; the third problem is addressed in this article. A perfectly round electron does not exhibit any asymmetry, and thus, its electric dipole moment must be zero, i.e. the centre of mass and the centre of charge of the electron (almost) coincide. The SM prediction of an EDM of about 10−38 e·cm means that the centre of mass and centre of charge must be separated by a distance of about 10−40 m, far below the Planck length, and thus, their distance is practically zero. This distance is nonphysical if there exists a spacetime lattice with a grid spacing of the Planck length (≈1.6 × 10−35 m). The result of the SM is not surprising, as it is based on a continuous and flat spacetime. The ACME measurements already are at 10−31 m, and dipole moments corresponding to separations of less than 10−35 m cannot be distinguished anymore if the Planck length is the limit, and not the ESA Integral satellite value (https://www.esa.int/Our_Activities/Space_Science/Integral_challenges_physics_beyond_Einstein).
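The charge-separation argument above is a simple unit conversion: an EDM of d (in units of e·cm) corresponds to displacing the electron’s centre of charge by d cm from its centre of mass. A back-of-envelope sketch, where the SM-scale value is an assumed order of magnitude rather than a precise prediction:

```python
# Back-of-envelope check of the charge-separation distances discussed above.
acme_bound_e_cm = 1.1e-29      # ACME 2018 upper bound on the electron EDM, e*cm
sm_scale_e_cm = 1e-38          # rough SM (four-loop) order of magnitude, e*cm (assumed)
planck_length_m = 1.616e-35    # m

# An EDM of d e*cm means a centre-of-charge shift of d cm = d * 1e-2 m.
acme_separation_m = acme_bound_e_cm * 1e-2
sm_separation_m = sm_scale_e_cm * 1e-2

print(f"ACME separation bound : {acme_separation_m:.1e} m")
print(f"SM-scale separation   : {sm_separation_m:.1e} m")
print(f"SM separation / Planck: {sm_separation_m / planck_length_m:.1e}")
```

The SM-scale separation lands several orders of magnitude below the Planck length, which is the point of the spacetime-lattice objection raised in the text.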
The recent constraints on the EDM also have an important consequence for the high-energy scale, termed grand unification, which assumes the equality of the couplings of the three subgroups (unification of couplings does not occur in the SM). To go beyond the SM, the hypothesis was made that there exists a simple group G at this energy scale that embeds the three subgroups of the SM. This GUT group is assumed to represent the complete particle content of the SM. The smallest group possible is SU(5), without considering right-handed neutrinos, or SO(10) if right-handed neutrinos are accounted for. At the MGUT mass scale, this group spontaneously breaks down into the three subgroups of the SM. However, based on the recent measurements of the EDM, this is no longer possible, for a group SO(10) at the GUT high scale is ruled out; see figure 4 on p. 39 of . This fact also supports our statement (Section 4) that a unification of the four fundamental interactions should not be possible because it would contradict the principle of duality. The concept of technicolour also appears to be invalid. In particular, the CMS collaboration started an extensive search for the neutralino and top squark in 2016 based on proton-proton collisions at the centre-of-mass energy of 13 TeV, but so far no significant excess of events could be observed above the expectation values of the SM, and most likely none will be found.
Moreover, no DM particles have been found by the LHC, confirming the futile search of the three dozen experiments performed over the last 35 years (with zero results that, according to S. Hossenfelder, Chapter 9, are dressed up as interesting bounds). The LUX experiment (ongoing since 2013) has provided zero evidence for DM particles, thus independently supporting both the LHC and ACME measurements. In particular, the findings of the DAMA collaboration of a statistically significant annual modulation in the rate of nuclear interaction events were ruled out by the COSINE-100 collaboration in their recent publication in Nature on December 5, 2018. No evidence for an excess of events above the expected background was found, and hence, there are no WIMPs. An upper bound for the WIMP-sodium cross-section was set for WIMPs of mass 10 GeV/c2 at the 90 % confidence level. Annual modulation is expected because the velocity of the Earth relative to the galaxy’s dark-matter halo varies owing to the orbital motion of the Earth around the sun. According to the so-called standard dark-matter halo model, this result rules out WIMP-nucleon interactions, which thus cannot be the cause of the annual modulation possibly observed by the DAMA collaboration. As a direct consequence, supersymmetric particles have become increasingly unlikely to exist in nature.
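The origin of the expected annual modulation can be sketched in a few lines: in the standard halo model, the Earth’s speed through the DM halo peaks around early June, when its orbital velocity adds to the sun’s motion. The velocities and inclination below are commonly quoted round numbers, not collaboration values:

```python
import math

# Why annual modulation is expected (standard halo model):
# the Earth's speed through the galactic DM halo varies over the year.
V_SUN = 232.0        # km/s, sun's speed through the halo (assumed round value)
V_ORBIT = 29.8       # km/s, Earth's orbital speed
INCLINATION = 60.0   # deg, angle of Earth's orbit to the sun's motion

def earth_halo_speed(day):
    """Earth's speed relative to the DM halo on a given day of the year.

    The peak falls around day ~152 (early June), when the projected
    orbital velocity adds to the sun's motion through the halo.
    """
    phase = 2 * math.pi * (day - 152) / 365.25
    return V_SUN + V_ORBIT * math.cos(math.radians(INCLINATION)) * math.cos(phase)

v_max = earth_halo_speed(152)                 # ~June
v_min = earth_halo_speed(152 + 365.25 / 2)    # ~December
print(f"max {v_max:.1f} km/s, min {v_min:.1f} km/s, "
      f"modulation ~{100 * (v_max - v_min) / (v_max + v_min):.1f}%")
```

The resulting few-percent modulation of the expected event rate is exactly the signature DAMA claimed and COSINE-100 failed to confirm.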
Confusion reigns, as demonstrated by controversial experimental findings and by about 20 articles written by well-known physicists, published in the book edited by J. Brockman entitled This Idea Must Die,6 portraying a highly controversial picture (of the physics of string theory and SUSY). Therefore, present experimental findings may suggest that it is these two theoretical concepts that may have to be retired despite their interesting mathematical features. Similar confusion is visible with respect to key concepts of cosmology, i.e. the Big Bang, DM, inflation, the multiverse idea, as well as predictions concerning the ultimate fate of the universe.
Furthermore, in Part I, it was shown that Newton’s law has been verified down to the length scale of about 1 μm (upper Fig. 2). As recently as December 2017, the validity of Newton’s law was extended by four orders of magnitude, down to 10−10 m or 0.1 nm, by Haddock et al. It appears that Newton’s law holds in the atomic range, as indicated by the scattering of neutrons. As the electron mass is about 2000 times smaller than the nucleon mass, gravity must result from the nucleons (protons and neutrons), whose size is about 10−15 m; hence, gravity must be governed by the subatomic length scale (lower Fig. 2). Newton’s law must therefore be working on the subatomic scale as well; thus, no energy could have escaped into the postulated higher dimensions at this length scale. So far, higher space dimensions seem to be in basic conflict with all direct measurements (in particular, no evidence for extra spatial dimensions in the universe based on gravitational wave data was found). It is now clear from the recent LIGO data that long-wavelength gravitational waves and short-wavelength photons experience the same number of spacetime dimensions, and noncompact space dimensions do not exist, a result that is more or less obvious, because these dimensions should have been detected a long time ago. No deviation of the gravitational amplitude from the inverse luminosity distance relationship in accordance with GR has been observed. In other words, there is no leakage of energy into (noncompact) extra space dimensions, and that concept, according to Figure 2, appears to be no longer tenable, as it seems to have been excluded by nature. This has far-reaching consequences, not only for particle physics, but also for cosmology (e.g. the existence of multiverses). The experimental results cited in Part I already ruled out to a large extent any modification of the classical gravitational law proportional to r−2.
This includes any modified Newtonian law operating in D = d + 3 dimensions (d denotes the number of extra space dimensions), that is, a gravitational force varying as r−(2+d) at distances below the compactification scale, instead of the classical gravitational law being proportional to r−2. With the new data from the experiment of Haddock et al. (Fig. 2, lower picture), the idea of a cosmos in the form of a brane world, where gravitons freely roam the bulk space [i.e. the higher dimensional embedding space for the spatial three-dimensional (3D) brane world], has become untenable. Advocates of a brane world hoped that if an extra spatial dimension of extension 10−4 m existed, the strength of the gravitational interaction would resemble the strength of the electroweak interaction. In this case, the so-called hierarchy problem (see Part I), namely the relationship of the Planck mass, mPl, the GUT mass, mGUT, and the mass of the vector boson, W±, with mPl ≫ mGUT ≫ mW±, requiring a ratio mPl/mW± of some 17 orders of magnitude, could have been interpreted as a so-called holographic effect. It was proposed that a Planck length in our 3D brane would then be much smaller than the (large) compactified dimension, leading to the holographic effect. This scenario is definitely ruled out by experiment.
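The power-law argument above can be made concrete: with d compactified extra dimensions of size R, the force falls as r−(2+d) inside R and reverts to r−2 beyond it. A sketch assuming d = 2 and the once-hoped-for 10−4 m compactification size mentioned in the text:

```python
def gravity_exponent(r, R_compact, d):
    """Power-law exponent of the gravitational force at separation r.

    In a brane-world picture with d compactified extra dimensions of size
    R_compact, the force falls as r^-(2+d) for r < R_compact and reverts
    to the Newtonian r^-2 beyond it.
    """
    return 2 + d if r < R_compact else 2

R = 1e-4  # m, assumed extra-dimension size
for r in (1e-10, 1e-6, 1e-2):
    print(f"r = {r:.0e} m -> force ~ r^-{gravity_exponent(r, R, d=2)}")
```

The neutron-scattering data cited above see the pure r−2 behaviour down to 10−10 m, which is why any R larger than the atomic scale is excluded.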
Most recent results from experimental particle physics seem to tell us that DM particles do not exist, whereas astrophysical observations (starting about 85 years ago with Zwicky in 1933) have provided irrefutable experimental evidence for their existence, e.g. based on gravitational lensing. However, the original assumption was that DM occurred inside galaxies, but the recent astronomical observations by Bidin (ESA 2010, 2012, confirmed in December 2014) seem to prove the absence of DM inside galaxies, confining DM to the galactic halo. Despite the criticism by Bovy and Tremaine in 2012 (see Part I), this leads to a scenario requiring an entirely new physical mechanism for the explanation of the deviating galactic orbital velocities. Dark matter has been elusive for more than eight decades, and the recent community report with more than 100 authors (suggesting numerous novel small experiments, as the large experiments have turned up empty-handed) sounds like a desperate attempt to finally detect the missing DM.
Also unexplained is that, based on observations of hundreds of galaxies, it is evident that the velocities of stars orbiting the galactic centre deviate from Newton’s gravitational law at small accelerations, assuming the total gravitational galactic mass is based on the amount of luminous matter. The MOND (Section 5.4) hypothesis gives the correct numerical values but lacks any physical explanation.
An important idea coming from the novel concept of hypercomplex groups, as will be discussed in Section 5, is the closely associated idea of matter flavour – as an analogy to quark flavour or colour flavour. Different types of matter, positive, negative, and hypercomplex matter (including imaginary matter), result from the introduction of extra number systems that make the idea of extra space dimensions superfluous.
Another mystery is that cosmology has no explanation for about 68 % of the energy in the universe,7 which comes in the form of DE, as confirmed by Planck satellite data in 2013. In Section 6, a novel idea is presented, attributing the existence of DE to the evolving structure of spacetime; therefore, according to EHT, DE cannot be produced in accelerators. Such a form of energy exists neither in the SM of particle physics nor in the proposed supertheories. That is, DE would be the direct result of the formation of the smallest units of space.8
Regarding novel aspects of gravity outside Einstein’s GR, three different types of experiments (2006–2011) are mentioned that may have generated extreme gravity-like fields at cryogenic temperatures. In Section 5, additional gravitational bosons and different types of matter are introduced that may resolve the apparently conflicting data obtained from recent experiments and astrophysical observations, as well as elucidate the nature of DM and DE. In particular, it will be argued that the interaction between electromagnetism and gravity, as already surmised by Einstein in 1916, may exist owing to the phenomenon of symmetry breaking in combination with the generation of virtual particles of hypercomplex mass. However, this requires a different type of gravity, dubbed hypercomplex gravity, outside Einstein’s GR but not inconsistent with GR.
As reported in Part I of this article, recent data from atomic physics and particle physics, as well as astrophysical observations, appear to have invalidated the so-called supertheories developed over the last four decades and meant to replace the SM of particle physics developed in the early 1970s. In Section 3, a review of the current search for novel particles based on the latest LHC data will be presented, along with the repercussions on the so-called supertheories. According to particle physics experiments, a DM particle does not exist, i.e. nothing was ever observed. On the other hand, for astrophysicists the existence of DM is beyond scientific doubt – it is an empirical fact. Without DM there would be no galaxies. Even worse, astrophysical measurements (according to recent Planck satellite data) have determined 68 % of the total energy in the universe to be DE. Such a form of energy exists neither in the SM of particle physics nor in the proposed supertheories. As a consequence, it may be concluded that the universe should not exist. It is obvious that the present extension of the symmetry of the SM has led to a major confrontation with physical reality. Already in 1967, B. Pontecorvo postulated a new particle, termed the sterile neutrino, with a mass of about +1 eV/c2, generally interpreted as some kind of fourth neutrino. Such a particle may indeed exist, as indicated by the most recent experimental data from the MiniBooNE experiment. However, according to EHT, its physical properties need to be completely different from those of the three known lepton families, which is not matched by the postulated sterile neutrino, as will be discussed in further detail in Section 7.3.2.
A new era of astronomical observation began with the advent of the Hubble Space Telescope in the 1990s. Over the last few years, numerous additional satellites and telescopes were sent into space, and, in combination with sophisticated, highly powerful ground-based telescopes, e.g. the Very Large Telescope (VLT) of the European Southern Observatory (ESO) in Chile, the new research field of astroparticle physics was initiated. It is possible that astroparticle research may not just supplement earthbound accelerator research, but actually compete with it. As discussed in Part I, the future of the next-generation accelerator with a circumference of about 100 km is uncertain, and Fermi’s 1954 proposition for the Globatron, an accelerator spanning the Earth, is no longer a realistic alternative (because of space debris, apart from technical issues). Instead, a high-luminosity upgrade of the LHC, termed HL-LHC, to 60 fb−1 is planned for 2026 (five times the present number of collisions per unit time and area at the collision point of the two colliding beams), with a further increase to 3000 fb−1 in 2035. In any case, the particle energy provided by the cosmic accelerator cannot be matched, but perhaps improved recording equipment may enable us to make use of it. Already when CERN was established in 1953, the study of cosmic rays was formulated as one of its major scientific goals.
The recent results of the H.E.S.S. collaboration (finished 2015) starkly question another cornerstone of today’s astrophysics, namely, the occurrence of the Big Bang. From the proposed hot Big Bang nucleosynthesis comes the main evidence for DM as a type of exotic, nonbaryonic matter. Supersymmetry, based on extra dimensions, provides the framework for a particle species that fits the observed properties of DM. The lightest of the four neutralinos, which is stable, is considered to be the perfect WIMP. WIMPs (weakly interacting massive particles) can interact only weakly with OM, e.g. nucleons. Nevertheless, several experiments (XENON1T, DAMA, CDMS, Edelweiss, PandaX, see below) have been operating for decades to directly detect collisions of WIMPs and OM, but to no avail. For instance, in 2012, the XENON100 experiment at the Gran Sasso Laboratories produced a WIMP cross-section limit of 2.0 × 10−49 m2 for a WIMP mass of 55 GeV/c2 at the 90 % confidence level. On May 18, 2017, XENON1T, the successor experiment, reduced this limit (for spin-independent WIMP-nucleon elastic scattering) to 7.7 × 10−51 m2 for a 35 GeV/c2 WIMP mass at the 90 % confidence level. Hence, together with recent ATLAS and CMS data (Section 3), evidence is mounting that WIMPs do not exist.
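The tightening between the two XENON limits quoted above is a simple ratio (the two limits apply to somewhat different WIMP masses, so this is only an order-of-magnitude comparison):

```python
# How much the WIMP exclusion limit tightened between XENON100 and XENON1T.
xenon100_limit_m2 = 2.0e-49   # 2012 limit, 55 GeV/c^2 WIMP, 90 % CL
xenon1t_limit_m2 = 7.7e-51    # 2017 limit, 35 GeV/c^2 WIMP, 90 % CL

improvement = xenon100_limit_m2 / xenon1t_limit_m2
print(f"XENON1T tightened the cross-section limit by a factor of ~{improvement:.0f}")
```

Each such step shrinks the remaining parameter space in which a WIMP could still hide, which is the sense in which "evidence is mounting".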
This futile search also questions the hypothesis of a hot Big Bang (Section 6). If the universe was filled with a hot plasma immediately after the Big Bang, the relativistic WIMPs would have collided with each other, and ordinary particles would have been produced by WIMP annihilation. This process would have slowed with the cooling of the expanding universe. Given the strength of the weak force, it can be calculated that today there must be five times more cold relic WIMPs (DM) than ordinary particles. Yet not a single WIMP was ever detected. Hence, the hot Big Bang picture may not be correct.
In addition, many SUSY theories predict their lightest superpartner to be stable, in the form of a neutral, weakly interacting particle – a WIMP. This ghostly particle is searched for by the LHC – as will be discussed in Section 3 – so far in vain.
According to the ideas of EHT, to be discussed below, the concept of a Big Bang may have to be replaced by a Quantised Bang (Section 7.2). In addition, these nil experimental results also speak against two key concepts in physics, namely, SUSY and extra spatial dimensions, the cornerstones in all advanced particle theories.
Gravity and electromagnetism are the two long-range interactions known in current physics. In 1911, Heike Kamerlingh Onnes in Leiden reported on the phenomenon of superconductivity in mercury below a critical temperature TC, showing that the electrical resistance of a conducting material could effectively be zero. In analogy, the EHT model predicts the existence of a similar effect for gravity, for which the name hypercomplex gravity was coined. That effect, however, has to be outside GR, which is based on the curvature of spacetime. By contrast, hypercomplex-gravity fields arise from the interaction with electromagnetic fields in spacetime through (additional) gravity-mediating bosons of spin 1, and not by acting on spacetime. These fields represent a new, second type of gravitational interaction that is of the same type as the electromagnetic, weak, and strong interactions. In that sense, it would be correct to state the existence of four fundamental interactions, whereas GR is to be considered as the interaction of the spacetime lattice with any kind of matter, i.e. affecting the motion of all physical entities that carry energy. Einsteinian gravitation does not exhibit the classical property of being a force mediated by bosons. Of course, a spin 2 boson, termed the graviton, can be postulated in order to comply with the general picture of a physical force, but it is more a mathematical convenience than a physical necessity. Moreover, such a boson was never observed. Einstein’s equivalence principle predicts the equality of inertial mass and gravitational mass, but it is not clear that this idea holds at the quantum level, and the latest experiment (July 2018) did not find any hint of the existence of a graviton particle. It also seems that a gravitational Casimir effect does not exist.
Quantum theory allows the superposition of states, which means that a massive particle may be in two different states at the same time, and because different states have different energy levels, they necessarily (remember Einstein’s E = mc²) represent different masses. The total mass then also becomes fuzzy and thus may be in conflict with Einstein’s equivalence principle (so far no deviation has been found). It seems that if a particle obeys Heisenberg’s uncertainty principle (meaning that the system has two noncommuting observables), it may necessarily be in conflict with Einstein’s equivalence principle (see the recent article by Zych and Brukner ). However, a lively discussion on the quantum nature of gravity already took place in 1957 (see the final chapter in , p. 260), including the eminent physicists R. P. Feynman, Bondi, Rosenfeld, Bargmann, de Witt, Belinfante, and J. Wheeler. This question has not been decided, as the recent article by Marletto and Vedral  shows. They suggest that the gravitational field be probed by two masses, extending a proposal by R. P. Feynman of 1957 discussed in . The first mass, being in a superposition of two locations, becomes entangled with the field (similar to diphoton entanglement), while the second mass, also in a superposition, is used to report the entanglement. If two quantum systems (i.e. two masses) can be spatially superposed and become entangled through the interaction of a gravitational field, then that gravitational field itself must be quantum, capable of possessing two noncommuting observables.
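To get a feel for how small the mass "fuzziness" implied by E = mc² actually is, the following minimal sketch converts an energy-level splitting into a mass difference; the 1 eV splitting is a hypothetical example, not a value from the text.

```python
# Sketch: mass difference implied by an energy-level splitting, via Δm = ΔE/c².
# Illustrates why a superposition of energy eigenstates is also, in principle,
# a superposition of slightly different masses.

C = 299_792_458.0          # speed of light, m/s
EV = 1.602_176_634e-19     # one electronvolt in joules

def mass_equivalent(delta_e_joule: float) -> float:
    """Mass difference Δm = ΔE / c² for an energy splitting ΔE."""
    return delta_e_joule / C**2

delta_m = mass_equivalent(1.0 * EV)   # hypothetical 1 eV splitting
print(f"Δm for a 1 eV splitting: {delta_m:.3e} kg")
# → about 1.8e-36 kg, more than five orders of magnitude below the electron mass
```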
This prediction allowed the pair to propose tests that would tease out the quantum behaviour of gravitational acceleration. So far, the major difference between the two classical long-range forces is that electromagnetic fields can be generated in the laboratory, while gravitational fields (so long as gravity is considered to be of geometric nature) cannot be engineered. By contrast, hypercomplex-gravity fields should be similar in strength to electromagnetic fields for they do not originate from geometry.
According to GR, gravitational fields can only be produced by large static or moving masses, e.g. planets or stars. In Einstein’s time (1915), EM was the only other known interaction, and Einstein devoted the rest of his career to trying to unify these two forces, but he also searched for a direct interaction between electromagnetism and gravity, as Faraday had already surmised. The gravitomagnetic fields predicted by GR are far too small to be of technical interest. This situation will not change, because recent observations and simulations by Parsa et al.  confirm GR also in the nonlinear range. In other words, GR in the form of Einstein’s field equations encodes the geometric nature of spacetime, at least as long as it can be represented by a manifold. Owing to the extreme rigidity of the underlying spacetime lattice, manifested by the exceedingly small size of the grid spacing (see next section), the feedback of matter on spacetime is generally extraordinarily weak (including that of black holes). Furthermore, if spacetime ultimately is a lattice, that is, discrete, the example of a metric
that describes the so-called Poincaré half-plane (see A. Zee, Chapters I.5 and IX.6 ) is not possible. This metric requires that the length of a ruler → 0 for y → 0, while the edge of this plane, at y = 0, is infinitely far away from any finite point (x, y). This behaviour is due to the assumed continuity of spacetime, which is taken to be a manifold at any length scale. A discrete spacetime would invalidate the recent advance in string theory and quantum gravity known as AdS/CFT (anti-de Sitter/conformal field theory).
The supermassive black hole at the centre of our galaxy, known as Sagittarius A∗, has a mass of about 4.5 × 10⁶ M⊙ (solar masses) and is about 8.2 kpc from Earth. It is hidden behind a cloud of gas and dust; thus, observations are restricted to radio waves or the infrared range. By measuring four relativistic star orbits close to the supermassive black hole, Parsa et al. provide strong support that GR is most likely correct. Their observations will be repeated in 2018 with interferometric studies of stellar orbits close to Sgr A∗, in order to accurately determine the relativistic parameters of additional stars as well as the impact of the drift motion of Sgr A∗. In mid-2018, there is the opportunity to measure the highly elliptical orbit of S2, the star closest to Sgr A∗. Owing to previous results, GR is expected to be fully vindicated. Although no direct connection to the MOND hypothesis exists, we consider it highly improbable that GR will fail at small accelerations.
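As a rough consistency check on the anticipated S2 measurement, the GR periapsis shift per orbit can be estimated from the standard Schwarzschild precession formula, Δφ = 6πGM/(c²a(1 − e²)). The orbital elements used below (a ≈ 970 AU, e ≈ 0.88) are illustrative literature values, not taken from this text; the black-hole mass is the one quoted above.

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                # speed of light, m/s
M_SUN = 1.989e30           # solar mass, kg
AU = 1.496e11              # astronomical unit, m

M = 4.5e6 * M_SUN          # Sgr A* mass as quoted in the text
a = 970.0 * AU             # semi-major axis of S2 (assumed value)
e = 0.88                   # eccentricity of S2 (assumed value)

# Schwarzschild periapsis precession per orbit, in radians
dphi = 6.0 * math.pi * G * M / (C**2 * a * (1.0 - e**2))
arcmin = math.degrees(dphi) * 60.0
print(f"precession per orbit: {arcmin:.1f} arcmin")
```

With these inputs the prograde shift comes out at roughly a dozen arcminutes per ~16-year orbit, i.e. small but within reach of precision astrometry.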
As the above discussion should have revealed, a unification of Maxwell’s equations and Einstein’s field equations should not be possible because their physical roots are different. EM originates from spin 1 bosons, while GR is due to the geometric property of the spacetime manifold itself. Second, Einstein and Faraday may have been correct in their search for an interaction between EM and gravity, but this would require a new type of gravity, termed hypercomplex gravity as mentioned above, mediated by spin 1 bosons and thus outside of GR. In order to incorporate such a phenomenon, the group structure of the SM of particle physics needs to be properly extended (Section 5).
However, during the last two decades, several experiments have been performed that may have generated/measured extreme gravitomagnetic or gravity-like (i.e. acceleration) fields at low temperatures, in the laboratory as well as in space, as reported in Part I. If correct, this would be an indication that GR is not the complete description of gravitation. At present, there are three different possible experimental sources for extreme gravitomagnetic fields, as discussed in Part I (although their existence is not conclusive). These are the experiments of Tajmar et al. and Graham et al., carried out in the laboratory, as well as the Gravity Probe B (GP-B) experiment, launched into a 640 km LEO (low Earth orbit) in 2004. GP-B was not devised for the detection of extreme gravitomagnetic or gravity-like fields but might have inadvertently generated these fields in space.
In addition, the fundamental physical principles, laid out in Section 4, severely constrain the number and types of particles in the universe and establish a fairly restrictive framework of admissible cosmological models that exclude/modify several physical principles of contemporary physics such as unification, higher dimensions, Big Bang, and multiverses.9
Into the universe, and why not knowing,
Nor whence, like Water willy-nilly flowing.
The Rubaiyat of Omar Khayyam,
Verse XXIX, 1995
3 Verdict of the LHC Data
The Rubaiyat of the Persian astronomer-poet Omar Khayyam has been an enigma for western interpreters for centuries. Only recently, P. Yogananda has given a logical and consistent interpretation  of this text.
Our cosmic environment is still a major riddle for us (why not knowing), nor do we currently know its origin (nor whence). Moreover, we are mere spectators, not able to change the course of cosmic phenomena (water willy-nilly flowing).
The LHC is the most powerful scientific tool meant to reduce our state of ignorance with regard to particle physics and cosmology. The CERN exotic search program is believed to provide the necessary data to discriminate between the various advanced theories that are all based on extra space dimensions.
Of course, the CERN program is not the only one searching for new particles and DM in the form of WIMPs, which may have any mass ranging from 1 MeV to 10 TeV, a clear sign that theory is at a loss. Numerous experiments in the search for WIMPs have been going on for decades and are being constantly refined (Section 7.3.2). A comprehensive view of the ongoing experimental research programs for DM until the end of 2011 is given in Chapters 6 and 8 of  by N. Prakash. All experiments up to that time reported a null result. Hence, only experimental results published after that date will be cited.
The other exotic candidate for DM, the axion, was postulated independently by S. Weinberg and F. Wilczek in 1978. Initially, a mass around 0.01 eV/c² was calculated, whereas the latest search efforts by the recent high-sensitivity ADMX experiment  focus on smaller masses in the μeV/c² range, simply because no axions were found in the former range and because the experimental sensitivity has improved substantially. The ADMX collaboration uses a so-called haloscope, that is, a cryogenic microwave resonator embedded in a strong static magnetic induction field of about 7 T. It is assumed that a DM axion field generates a current density within the resonator volume, oscillating with a frequency ν = E/h, where E is the axion energy, coming mainly from its rest mass. If the tunable resonator frequency matches the axion frequency, the axion current source delivers power to the resonator in the form of microwave photons, owing to a small but measurable axion–photon cross section. No axions have been detected so far, but the mass range between 2.66 and 2.81 μeV/c² predicted by several models was excluded by ADMX (April 2018).
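The haloscope tuning condition can be illustrated numerically. The sketch below converts an axion rest mass into the resonator frequency ν = E/h; the 2.7 μeV/c² test mass is simply a value inside the window excluded by ADMX, chosen for illustration.

```python
# Sketch: resonator frequency targeted by a haloscope for a given axion mass,
# ν = E/h with E ≈ m_a c² (rest-mass energy dominates for cold halo axions).

H = 6.626_070_15e-34       # Planck constant, J s
EV = 1.602_176_634e-19     # electronvolt, J

def axion_frequency_hz(mass_ev: float) -> float:
    """Photon frequency corresponding to an axion of rest energy mass_ev (eV)."""
    return mass_ev * EV / H

nu = axion_frequency_hz(2.7e-6)    # a 2.7 μeV/c² axion (illustrative)
print(f"resonance near {nu/1e6:.0f} MHz")
```

A μeV-scale axion thus corresponds to a resonance in the hundreds-of-MHz range, which is why a tunable cryogenic microwave cavity is the instrument of choice.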
Astrophysical observations suggest that the luminous matter in galaxies and the much more massive halo of DM surrounding galaxies are gravitationally bound. It must not be forgotten that DM was not found inside galaxies, according to Bidin et al. (discussed in Part I). However, the solar system in its galactic motion would be moving through its halo, and thus, WIMP–nucleon elastic scattering might be detectable.
3.1 LHC Data and New Particles
The Higgs particle, proposed in 1964, has been the only new elementary particle found by the LHC in Geneva (as of winter 2018). On December 3, 2018, the LHC was shut down for an upgrade to increase its luminosity and is planned to resume operation in early 2021 . In the meantime, the large amount of data the LHC experiments have already produced will be analysed in order to extend our knowledge of fundamental physics. In particular, no elementary particle has been identified that could serve as a DE or DM particle, despite a recent extensive experimental search program at the LHC  confirming the nil results of many other experiments, some started 40 years ago.
As was expressly stated by string theorists prior to the inauguration of the LHC, the LHC would be used to produce DM particles and observe their properties. Thus, the LHC combined with SUSY was supposed to be the key to understanding DM. The lack of DM particles suggests that they likely cannot be produced at the LHC. However, the gravitational impact of DM has clearly been measured (e.g. weak gravitational lensing), which means that either DM is not composed of particles or DM particles do not exist in our spacetime (see below). In agreement with the measured range of validity of Newton’s law , the upgraded LHC did not find any energy leaks at length scales of ≈ 10⁻¹⁹ m, further rendering improbable the concept of extra space dimensions, which is fundamental to all types of string theories.
Most recently (2017), an intensive search for WIMPs has taken place, following the upgrade of the LHC to 13 TeV (centre-of-mass energy) in 2015. In particle models beyond the SM, it is assumed that WIMPs may be pair-produced in collider experiments by means of hitherto unknown fields (bosons) coupling the DM particle(s) to the baryonic matter of the SM. If any WIMPs were produced in a collision, then by their very nature they would escape detection. However, a transverse momentum deficit should be detectable, yet the CMS collaboration  has been searching for this type of event for several years without finding anything. The results of this search will be discussed further in Part III, where a comparison with self-interacting DM models is made.
The intellect has a sharp eye for methods and tools, but it is blind to goals and values.
4 Cosmic Principles
Einstein’s words as of 1933 (from the Introduction) proved to be prophetic for our time, because the mathematics in almost all areas of theoretical physics has soared to dizzying heights,10 and physical models are marked by extreme complexity, as in string field theory, conformal field theory, Calabi–Yau phenomenology, or the newer concepts of membranes of various dimensions. Nevertheless, it is a fact that despite all these advanced mathematical tools, progress in theoretical physics and cosmology leaves much to be desired . An enormous rift between these complex models and actual physical reality has become apparent over the past 50 years, and all attempts to bridge this gap by newer (and more complex) modelling have ended in failure. This may be the time for a paradigm shift. Strings should be handed over from the physics to the mathematics department in order to free them from experimental constraints. Their name, M string, may remain the same; it now stands for mathematical string. In a recent article titled Theoretical Physics Is Pointless without Experimental Tests , A. Loeb issued a stern warning against the acceptance of purely theoretical models without any shred of experimental evidence. There seem to be theoreticians who believe that the beauty of a model overrides the final experimental verdict. It should be kept in mind that even Einstein was wrong on quantum mechanics and on gravitational waves, both of which he rejected. A recent critical discussion of novel noncommutative properties of spacetime by T. P. Singh can be found in .
Let us now consider what Einstein said about the fundamental principles in physics when he addressed his audience at Oxford University in his 1933 Spencer Lecture:
These fundamental concepts and postulates, which cannot be further reduced logically, form the essential part of a theory, which reason cannot touch. It is the grand object of all theory to make these irreducible elements as simple and as few in number as possible, without having to renounce the adequate representation of any empirical content whatever.
Heeding Einstein’s advice, before any unified physical theory (if such a theory is possible at all11) can be formulated, a set of foundational principles, believed12 to represent the essential physical features of the cosmos, needs to be established. These guidelines, called cosmological principles or (shorter) cosmic principles because they are assumed to provide the basis for the construction of the cosmos, shall guide the description of the physical world as well as the subsequent mathematical formulation of all physical phenomena.
This means that imposing the symmetry properties of groups SU(5) or SO(10) (now ruled out by recent ACME data) probably will not meet with success because the physical evidence contained in the cosmic principles is not adequately considered.
Instead, guidelines for a possible group structure of both spacetime and matter should be extracted from these cosmic principles (see Sections 5.4 and 5.5 in ). The result is an approach that leads to major extensions/modifications of the SM of particle physics with respect to the nature of gravitation, as well as the group structure for particles and the types of matter. The SM of cosmology is questioned, and the concepts of spacetime, DE, DM, and the current hot Big Bang model are reformulated, leading to a closed but very large universe of finite existence time.
The fundamental physical principles that are supposed to govern the physical cosmos (the universal order)13 have to be identified from generally acceptable conclusions consistent with the most basic observations (for more details, see the initial chapter of ). It is most astonishing to perceive the far-reaching physical consequences of these simple sounding principles and the constraints they impose on the mathematical formulation of any physical theory. Of course, their validity must be demonstrated by their predictive power as well as their compliance with present and future experimental data.
This process naturally involves a new view of the world (Weltbild), and it is this underlying Weltbild that determines the features of the fundamental theory. Ultimately, the test of a theory is provided by confronting its predictions with existing experiments in general and new experiments in particular. Any fundamental theory must be able to extend the current Weltbild by predicting novel phenomena; i.e. it must be testable by experiment.
The core ideas that are behind the Weltbild cannot be derived from physics itself. The foundations of the Weltbild are outside of physics, but once they have been formulated and the physical model based on these principles is developed, no (basic) further adjustments should be allowed.
Furthermore, once these principles are in place, one could begin formulating the proper Lagrangians and their respective symmetries by observing the constraints resulting from these principles.
As we shall see in the following section, those fundamental principles lead to a drastically enhanced understanding of the basic features of the cosmos, despite their relatively straightforward formulation. They also set up several stringent guidelines for theory.
Measurements of the ESA Integral satellite (Part I) seem to suggest a discrete length scale of space far below the Planck length, as the grid structure would become visible only at less than 10⁻⁵⁰ m14. Thus, it appears that the Planck parameters do not apply to the properties of the spacetime grid but characterise the absolute limit of material or energetic processes. There appear to be two different fundamental length scales. The first is the cosmic grid spacing, a length scale of about 10⁻⁵⁴ m (the corresponding Compton wavelength would require a particle of enormous mass!), which is deemed responsible for the stability of the spacetime lattice. The second is the Planck length, at about 10⁻³⁵ m, indicating the length scale at which quantum fluctuations cease to exist because they are of energetic nature. Hence, the stability of the spacetime grid is never compromised by even the strongest quantum fluctuations. This seems to be further vindicated by the CDT simulations of Ambjorn et al. , , , , because in the vicinity of the Planck length the dimension of spacetime appears to change from four to two. Hence, at or above the Planck length, the spacetime grid can be treated as a manifold, that is, by a continuous theory, as in Einstein’s GR, possibly with quantum fluctuations, that is, the cosmological parameter Λ. This suggests an intriguing possibility: perhaps there is no need for quantum gravity at or below the Planck length. Obviously, at the length scale at which space becomes discrete, matter does not exist anymore.
It remains an open question whether or not cosmic grid spacings denoting the discrete space and time intervals of the spacetime lattice, represented by Δs and Δt, are time dependent, i.e. have changed in the time history of the universe. For instance, the late B. Heim  proposed a metron size (elemental surface with spin) decreasing in the course of the cosmic evolution leading to a symmetry breaking at the Planck scale and subsequent generation of matter.
Kaluza employed the concept of higher space dimensions in 1919, not based on experimental evidence but borrowing from early science fiction literature in order to unify electromagnetism and gravity. As we know today, his original idea proved to be incorrect (Part I). Nevertheless, the concept of extra dimensions survived (because it sounds so obvious) and was never really questioned, and the most complicated mathematical edifice (string theory) was built on this unproven assumption. String theory is assumed to predict gravity because it requires the existence of a massless spin-2 particle and thus at long distances mimics the couplings of GR. Such a boson was never measured, and it remains uncertain whether a purely geometric theory like GR will produce a real particle that results from the curvature of space and time. In addition, string theory cannot account for the existence of strong hypercomplex gravitational fields that are represented by spin 1 particles (see below). It is a well-known fact that GR is not a renormalisable theory owing to the spin 2 graviton. Perhaps the spin 1 hypercomplex gravity is the kind of renormalisable gravity sought for, whereas GR is outside unification.
The main requirement for any acceptable physical theory is sufficient predictive power to incorporate novel physical phenomena from various sources without having to construct additional contrivances in order to prevent major deviations from experiment. In other words, a valid theory must not grow exceedingly complex to fit new experimental data. A negative example in the history of science is the Ptolemaic model postulating the Earth to be at the centre of the universe. To remain consistent with new data, it became increasingly unwieldy over the course of time. Whenever new observational data came along, its proponents had to resort to ever-increasing mathematical complexity to reproduce physical reality. Eventually, the attractiveness of the original simple physical model was completely lost. The basic physical ideas of any theory should be expressible in the form of a few key ideas. The consistent mathematical formulation of the theory is a different story, but as Einstein said, “Ideas are more important than knowledge.”
If this view is correct, string theory cannot be considered a correct physical theory, because there is scant predictive power – the few predictions made that were measurable proved to be incorrect. Often when new experimental evidence became available, major adjustments were required (see the discussion in Part I) to escape the experimental verdict – a running theory.
In the next section, a set of novel fundamental physical principles is presented that will lead to guidelines for the extension of both the SM of particle physics and the SM of cosmology. These principles lead to a Weltbild highly different from the hitherto favoured supertheories. In particular, the set of no-go theorems derived from these principles and discussed below demands the abandonment of essential current physical concepts, including the unification of all physical interactions, higher space dimensions, the Einstein–Rosen bridge, the Big Bang, etc.
4.1 Formulation of Cosmic Physical Principles
Following Einstein’s idea, the following set of fundamental or cosmic principles is used as the core for any advanced physical theory, prior to its mathematical formulation. Of course, it is unknown whether this set is complete. These principles serve as blueprints (or guidelines) for deciphering the mysteries of an otherwise inscrutable universe. The validity of the chosen set of principles can only be justified by its success in describing physical reality. Establishing a basis of principles should arguably be the first step in constructing any comprehensive theory of space and matter. This approach is as close as possible to an axiomatic formulation of physics.
Principle of duality: It governs all physical events in the cosmos as well as the emergence of the cosmos itself. The cosmos, that is, order, requires from its very inception the duality of space and matter.15 It is considered (in EHT) the key principle of physics. In particular, duality requires that formation or creation must be followed by annihilation; i.e. the cosmos itself eventually will be annihilated. In particular, the two primeval, independent entities are geometry and energy. They are interdependent and are to be considered as two sides of one and the same coin. One state is as much a part of its opposite state as are the two sides of a fabric. This geometry–energy duality requires the spacetime lattice (composed of elemental surfaces with spin) and the associated primordial energy, namely, DE, to be generated simultaneously (see also  the discussion of the double helix).
In quantum mechanics, duality appears in the form of the wave–particle duality governed by Heisenberg’s uncertainty relation. Nevertheless, matter always comes in the form of particles.
It is important to note that the duality principle strictly limits the reductionist viewpoint, namely, the belief that all interactions can be unified. First, there exists the duality of geometry (the stage) and energy/matter (the actors), which according to this principle cannot be unified. This means that Einstein’s GR (cosmological gravitational fields) and quantum theory may not be unifiable in the form of quantum gravity, as previously thought and sought (see, for instance, C. Kiefer, Chapter 5 in , and also his more recent monograph ). As will be outlined in Section 5.3, the grid spacing of the space lattice may not be the Planck length but could be many orders of magnitude smaller. This means that the Planck length ℓPl may not be the proper length scale for the spacetime lattice.
Second, the mathematical groups discussed in Section 5.4 predict the existence of a second kind of gravity, represented by three additional gravitational bosons, which should result from the interaction of gravity and electromagnetism. These fields are termed hypercomplex-gravity fields and would be many orders of magnitude stronger than the cosmological gravitational fields. It is these four fundamental interactions, characterised by the three constants
to which the unification process should be applied.
GR is a classical theory because in its formulation neither the wave–particle duality picture is employed, nor is Planck’s constant ℏ needed. Hence, the Planck length,
ℓPl = √(ℏG/c³) ≈ 1.6 × 10⁻³⁵ m,
constructed from ℏ, G, and c, may not be the appropriate length scale for the spacetime lattice. By contrast, the grid spacing of the spacetime lattice may be considered to be governed by the constants G, c, and mp
(mp denotes the mass of the stable proton), as will be further discussed below when the principle of dual energy is addressed. With regard to a length scale of gravity, it seems more natural to consider the length obtained when the gravitational self-energy of a particle equals its rest energy mc² as the proper grid spacing of the spacetime grid. In particular, as the proton is the most fundamental stable baryon, one obtains for the grid spacing of the spacetime lattice
Δs = Gmp/c² ≈ 1.2 × 10⁻⁵⁴ m,
resulting in a length scale about 19 orders of magnitude smaller than the Planck length, indicating an almost completely rigid grid untouched by almost all conceivable energetic phenomena. If this were the case, then strong gravity, as obtained from the world of particle physics, which requires a corresponding length scale L,
would mean that the curvature of spacetime has to be compatible with this length scale, L, far below the Planck length, and consequently, effects from quantum gravity would not be visible. Furthermore, this length is clearly not on the TeV scale, as recent LHC data have shown. Thus, gravity could be considered weak under all circumstances. Compared to the Compton wavelength of the electron (about 2.4 × 10⁻¹² m), Δs is an extremely small number. On the other hand, if one assumes that the universe contains about 10⁸⁰ nucleons, the radius of the universe is determined as RU ≈ 10²⁶ m. With a grid spacing of 10⁻⁵⁴ m, spacetime can be regarded as a manifold with regard to all conceivable energetic aspects. If this length scale is converted into mass, one obtains the Schwarzschild energy scale MS ≈ 10³⁴ mp, at which new physics should appear. Such a particle should be totally unstable and decay immediately because of its humongous mass, creating a massive explosion.
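The two length scales discussed above can be checked with a few lines of arithmetic; a minimal numerical sketch, assuming the grid spacing Δs = Gmp/c² suggested by the self-energy argument:

```python
import math

# Sketch: the Planck length ℓPl = sqrt(ħG/c³) versus the proposed grid spacing
# Δs = G·mp/c², obtained by equating a proton's gravitational self-energy
# with its rest energy (an assumption of the text, not standard physics).

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                # speed of light, m/s
HBAR = 1.0546e-34          # reduced Planck constant, J s
M_P = 1.6726e-27           # proton mass, kg

l_planck = math.sqrt(HBAR * G / C**3)
ds = G * M_P / C**2

print(f"Planck length: {l_planck:.2e} m")
print(f"grid spacing:  {ds:.2e} m")
print(f"ratio:         {l_planck / ds:.1e}")
```

The ratio indeed comes out at roughly 19 orders of magnitude, as stated in the text.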
Principle of extra systems of numbers: Nature uses additional systems of numbers to produce different types (or flavours) of matter and to process information. The number systems employed in current physics are the rational numbers ℚ, real numbers ℝ, and complex numbers ℂ. In addition, nature may also utilise hypercomplex numbers (quaternions) ℍ as well as octonions 𝕆 (Section 5.4). Extra number systems are then used in place of the concept of extra space dimensions. Extra number systems have major ramifications in physics. For instance, they may lead to different symmetries and larger groups compared to the SM of particle physics. In particular, additional gravitational bosons are predicted, leading to extreme gravitational fields outside GR. Such bosons may provide the enabling technology for gravitational engineering and propellantless space propulsion, pursued by the space industry and NASA since the 1960s (Section 8).
Principle of optimisation: Nature is perfectly optimised within the framework of existing physical laws. If only a single bit of information were removed from the total information content of the cosmos, it would break down. Nature does not use trial-and-error processes in her physical laws. Instead, all of her processes take place such that no process can be found that would result in a higher degree of optimisation (e.g. the Carnot engine). In this regard, there is absolutely no redundancy in nature. Mathematically, this principle can be expressed by the Hamiltonian (or variational) formalism, which is applicable to both classical and quantum physics (see below). This principle was already employed in the 17th century by the French mathematician Pierre de Fermat to set up a universal law for the propagation of light. It also requires that gravity be derived from an invariant action principle, as is the case for GR (D. Hilbert, 1915).
Principle of quantisation: Nature does bookkeeping (accounting and counting), i.e. uses (half) integers only. In reality, there are no continuous physical quantities, only discrete quantities. This means infinities or singularities cannot exist in physics. In addition, spacetime ultimately has to be a lattice, which in principle should have an effect on the propagation speed of light waves with respect to their angular frequency ω, although such an effect would be extremely small (see the remarks on the ESA Integral satellite measurements Section 5.3).
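The size of such a frequency-dependent propagation effect can be illustrated under a simple, purely hypothetical model in which the photon speed is modified linearly in energy and suppressed by the Planck energy, v(E) ≈ c(1 − E/EPl); the source distance and photon energy below are illustrative assumptions.

```python
# Sketch: arrival-time spread Δt ≈ (D/c)(E/E_Pl) for a hypothetical
# lattice-induced vacuum dispersion, linearly suppressed by the Planck energy.

C = 2.998e8                # speed of light, m/s
E_PLANCK_EV = 1.22e28      # Planck energy in eV
GPC = 3.086e25             # one gigaparsec in metres

def delay_s(distance_m: float, photon_energy_ev: float) -> float:
    """Arrival-time delay of a photon of given energy over a given distance."""
    return (distance_m / C) * (photon_energy_ev / E_PLANCK_EV)

dt = delay_s(1.0 * GPC, 100e3)   # a 100 keV photon from 1 Gpc (illustrative)
print(f"delay: {dt:.1e} s")
```

Even over a gigaparsec the delay is below a microsecond in this toy model, which conveys why such dispersion effects, if they exist at all, are so hard to detect.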
Principle of quantum fluctuations: Nature does not know static physical states. Even in the case of zero external energy, there exists constant interaction with the virtual particles of the vacuum (i.e. the state with no real particles) of spacetime. This manifests itself in the form of quantum fluctuations, governed by the Heisenberg uncertainty relation and the duality principle.
Principle of finite existence time: All objects (material or immaterial) as well as structures (e.g. spacetime itself) in the cosmos (universe) have a finite time of existence because of duality and quantum fluctuations. This principle is not independent of the other principles, but because of its great importance, it is listed separately. This also includes the cosmos itself. In order for something to exist, the cosmos must exist in a dynamical but time-limited state. For example, directed motion is ubiquitous, but nothing can inflate to eternity.
Principle of causality: There exists an arrow of time (realised by ubiquitous motion of physical objects) putting the equivalence principle into action by governing the direction of all physical processes and the building of ever more complex structures (information content). Under limited circumstances, the arrow of time may be reversed by cosmic symmetry breaking, such as low cosmic background temperatures acting as a governing parameter by forming a Bose–Einstein condensate.
Principle of dual energies: In order to build a universe, both structure and matter are required. Structure reflects potentiality through admissible forms of organisation (information), while matter is employed to actualise its form. Hence, there must exist two fundamental types of energy (duality principle).
First, there is the experimentally verified relation between energy and information, i.e. the energy of information, connected with the name of L. Szilárd, 1929,
Einfo = −nbit kB T ln 2, (2)
where nbit is the number of bits comprising the total information content of the physical system (i.e. the crystal-like space lattice), and T denotes temperature. A single bit of information is equivalent to the amount of energy kB T ln 2, and the same amount is necessary for deleting one bit of information. We call this Szilárd's equivalence principle of energy and information. This equivalence has been established experimentally (see Chapter 9.7 in , which discusses an Ising model of spacetime), along with the concept of entropy16, S = kB ln Ω,
where Ω denotes the number of microstates. Under the assumption of an adiabatic expansion process of the universe, the classical thermodynamic relation 1/T = ∂S/∂E
can be used to define the concept of temperature that has to be utilised in (2). It is evident that the universe must have started with the lowest entropy possible, comprising two elemental surfaces only – the smallest universe conceivable. There are only two (spin) orientations possible, parallel (↑↑) or antiparallel (↑↓). The (negative) potential energy resulting from the spin interaction of the first two metrons may be described by an Ising model. Energy conservation then is satisfied by the production of DE (particles) of positive energy density. That is, the initial entropy of this universe is given by S0 = kB ln 2.
The inflationary evolution of the space grid describes the rapid creation of elemental surfaces, driven by the generation of DE (and vice versa). This leads to exponential growth in the number of microstates Ω, accompanied by the elevation of temperature T. Hence, the negative energy of information, (2), has to be exactly compensated by the positive energy of matter, (3), as a consequence of universal energy conservation. This, in a nutshell, is the basic idea of how our universe might have started from nothing. Hence, it is described by the term Quantised Bang.
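The scales involved in (2) can be checked numerically. The following sketch (our own, assuming the Landauer/Szilárd form E = nbit kB T ln 2) evaluates the energy cost of a single bit at room temperature and the entropy of the minimal two-microstate configuration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def info_energy(n_bit, T):
    """Szilard/Landauer energy associated with n_bit bits at temperature T."""
    return n_bit * K_B * T * math.log(2)

# One bit at room temperature: ~2.87e-21 J (the Landauer limit).
print(info_energy(1, 300.0))

# Entropy of the minimal two-microstate 'universe': S = k_B * ln(Omega)
# with Omega = 2.
S0 = K_B * math.log(2)
print(S0)
```

The minute size of these numbers is why the energy of information only becomes cosmologically relevant when nbit is of the order of the total information content of the space lattice.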
Second, there is the energy of matter, E = mc² or E = ℏω, (3)
that describes material particles with rest mass m or radiation of angular frequency ω, respectively. It is synonymous with the name of A. Einstein and his historic articles published in 1905 and 1906. The two types of energy are assumed to have been converted into each other during the inflationary phase of the universe and therefore are regarded dual to each other. Moreover, if the universe enters into its contraction phase, marked by a reduction in the number of elemental surfaces, DE necessarily diminishes as well and with it all kinds of matter.
Principle of energy conservation: Energy is the physical quantity that produces changes in the states of physical systems. The total energy in the universe is EU ≡ 0.
In conjunction with the duality principle, this energy may be split into negative (atoms of space) and positive (DE) energies. Due to the optimisation principle, energy is conserved. Owing to the quantisation principle, energy comes in discrete quantities only. Because of the principle of quantum fluctuations, the energy change ΔE during a period of time Δt obeys the Heisenberg uncertainty principle, ΔE Δt ≥ ℏ/2.
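A quick numerical illustration (ours, using standard CODATA constants) of how the uncertainty relation limits the existence time of a quantum fluctuation:

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
ME_C2 = 8.1871057769e-14  # electron rest energy, J (~0.511 MeV)

def fluctuation_time(delta_E):
    """Maximum existence time of an energy fluctuation delta_E
    allowed by delta_E * delta_t >= hbar / 2."""
    return HBAR / (2.0 * delta_E)

# A virtual electron-positron pair (delta_E = 2 m_e c^2) may exist
# for roughly 3e-22 s before it must vanish again.
print(fluctuation_time(2.0 * ME_C2))
```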
Principle of dual universe: According to the renowned British physicist R. Penrose , there exists a mathematical universe (platonic world) for the blueprints of coupling constants and physical laws (cosmological principles) alongside the physical universe, which enacts their actual implementation in space and time by means of material particles. This means coupling constants cannot be derived from physical theory.
Principle of non–self-interaction: Any physical system that constitutes a single entity cannot interact with itself (Fig. 3). In particular, a self-force does not exist, because it is based on the concept of retardation owing to special relativity (SR), which is not applicable to a single physical system.17 Otherwise, there would be a contradiction: if there is only a single physical entity, nothing exists for it to interact with. For instance, this means that an electron e− cannot interact with its own field, as assumed in classical electrodynamics, for there is no second partner available for physical interaction. In general, the concept of a single-entity system depends on the type of the specific interaction but is independent of spatial separation and of the number of particles. As long as the system can be described by a single wave function with regard to the physical phenomenon considered, e.g. two entangled photons, it comprises a single entity, to which the concepts of space and time are no longer applicable (see below). This means that, for instance, the spin of two entangled photons must be considered a single quantity; i.e. two independent photon spins do not exist. If the description by a single wave function ceases to be valid because of decoherence (interaction with the environment that destroys entanglement, e.g. exchange of energy), the system loses its single identity and separates into distinct parts, now capable of interacting with each other.
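The claim that two entangled photons carry no individual spins can be illustrated with a standard quantum-mechanical calculation (our sketch, not specific to EHT): for a Bell state, tracing out one photon leaves a maximally mixed state, i.e. no definite single-particle spin exists.

```python
import numpy as np

# Two-photon Bell state |Phi+> = (|00> + |11>) / sqrt(2): a single
# wave function describing both photons together.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(phi_plus, phi_plus)  # pure-state density matrix

# Partial trace over photon 2 yields the state of photon 1 alone.
rho_1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_1)  # 0.5 * identity: no definite individual spin
```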
Principle of organisation: The duality principle not only requires dual energies but also demands a dual to entropy, which is organisation. This principle is at work everywhere and is enabled by energy. With regard to matter, it forms the entire material hierarchy, combining primordial gluons and quarks into elementary particles like the nucleons, proton and neutron. Nucleons are then assembled into more and more complex atoms, which in turn are employed in the formation of molecules. Eventually, organic molecules are built from carbon chemistry and are finally assembled into the polypeptides that are at the foundation of life itself. The next step in this chain is the formation of biological cells – already at a mass of about two nanograms, comprising millions of proteins – which are the basis of all living structures, distinguished by ever increasing levels of organisation. Although energy is necessary in the formation of complex structures, it is the organisation principle that governs and directs the evolution of the most complex biological structures, clearly demonstrating the existence of Aristotle's ancient concept of entelecheia,18 that is, demonstrating that this universe has a clear objective, namely, providing the basis for life. Life is not a process that started accidentally; that is, life is not the result of a trial-and-error process. The complexity of the human body is so enormous that it is totally impossible to build such a structure by pure chance – hence the entelechial and aeonic dimensions. The associated probability is zero; therefore, the definition given by NASA that "life is an autonomous chemical system that is capable of following Darwinian evolution" is far too limited and provides zero chance for life to exist. The picture portrayed by the late B. Heim , who stated that the process of life proceeds in phase transitions, i.e. abrupt changes (steered by the internal entelechial dimension) followed by a longer era of evolution (aeonic dimension), appears to be more logical than the 19th-century Darwinian view, which covers only a very limited section of the entire cycle of life and thus arrives at insufficient conclusions.
4.2 Cosmic Principles and No-Go Theorems
Some of the most stringent physical consequences of the cosmic principles shall be outlined below. It will be shown that several essential physical concepts pursued for decades are no longer tenable within the framework of these fundamental principles.
A number of these constraints have been formulated as no-go theorems to emphasise the limits they impose on the physical features of the universe. The cosmic principles impose severe, unforeseen, and far-reaching restrictions on any theory.
No-Go Theorem: Geometrisation of Physics: The geometrisation of physics as pursued by Einstein, Schrödinger, Wheeler, and, most recently, in the form of superstring theory is not compatible with the principle of duality (see below) and thus cannot succeed regardless of the level of mathematical sophistication.
No-Go Theorem: Unification of Physical Interactions: The duality principle excludes the complete unification of all physical interactions. In particular, it excludes the conversion of fermions into bosons and vice versa via some kind of supercharge. Without any further mathematical investigation, the duality principle thus rules out any unification by the so-called supertheories. Consequently, duality must be a global symmetry that remains unbroken. So far, no deviation from this rule has been observed; this feature seems to be inherent to all physical phenomena.19
With respect to unification, Maxwell's theory of electromagnetism was unified with the weak interaction. It should be possible to unify the strong force with gravity – meaning hypercomplex gravity, but not Einstein's GR, which rests on the curvature of the space lattice and not on mediator bosons. It should not be possible, however, to further unify all four interactions. This may be one of the rare occasions where A. Einstein's intuition turned out not to be correct. Nevertheless, as surmised by Faraday and Einstein, an interaction between electromagnetism and gravity may be possible, providing the cause for the postulated hypercomplex-gravity field. Such a field, however, is not part of GR and exists as a second, independent type of gravity.
No-Go Theorem: No Majorana Fermions in Nature: It is interesting that, according to the principle of duality, Majorana spinors should not exist. These fermions should annihilate each other as they are their own antiparticles. Also, according to the Feynman–Stueckelberg interpretation, antiparticles go backward in time, and thus, a Majorana neutrino must both go forward and backward in time. As there is no known physical mechanism for a neutrino to select the direction of time, this is considered a dubious behaviour. Hence, the ongoing GERDA experiment  should not succeed in finding any Majorana neutrinos.
No-Go Theorem: No Big Bang: The fact that the energy of the universe EU ≡ 0 at all times excludes the occurrence of a Big Bang event. Energy is globally conserved at all times. Additionally, singularities do not exist in physics. Instead, the initial formation of the space lattice starts when two metrons are created by quantum fluctuations (Quantised Bang), and their spin interaction produces the first DE particle (owing to energy conservation). This is a self-accelerating process that finally comes to a halt when DE is converted into matter.
No-Go Theorem: No Infinite Universe: The fact that there are no infinities in physics means that the existence time for the universe is finite. Thus, there is the requirement that the current expansion era will have to come to an end, and a turning point in the cosmic motion will be reached, followed by an epoch of contraction.
No-Go Theorem: No Open Universe: For the very same reason, the universe cannot be open; it must be closed or cyclic.
No-Go Theorem: No Wormholes: Wormholes cannot exist in the cosmos, because nature does not allow for singularities. Wormholes are admissible mathematical solutions of Einstein’s field equations but are not realised by nature. They are simply an expression of the infinite mathematical continuity of the equations that become invalid when the length scale considered is of the same order of magnitude as the grid spacing of the spacetime lattice, and the equations break down.
No-Go Theorem: Feynman Diagrams: Suppression of virtual particles; that is, in a physical process there is only a finite number of particles that are off-shell. Instead of equations, Feynman used diagrams to account for all probability amplitudes that can occur in a scattering process, for instance, in the scattering of two electrons. In the formulation of quantum electrodynamics (QED), comprising the equations of Maxwell and Dirac, which are both linear hyperbolic partial differential equations, a mathematical solution is obtained utilising Green's functions in combination with an iterative procedure. Mathematically, such a scheme generates an infinite number of terms, but nature will stop the generation of virtual particles by virtual particles, as required by Feynman diagrams, once a certain nesting depth of the loops has been reached (far above the Planck length).
No-Interaction Theorem: Single-Entity Physical System: Interactions within any physical system will cease if the information that permits the identification of at least two distinguishable parts in this system is destroyed by changing the governing physical parameters, e.g. lowering the temperature (see above, the principle of non–self-interaction). A many-particle system (two or more particles) can no longer interact if reduced to a single-particle system. Thus, there is a major difference between leptons and hadrons. It should be noted that the concept of a single-entity system depends on the type of interaction. Two entangled photons form a single-entity system with respect to their spin. A single-entity system responds as a whole, which means that if a physical field were involved, retardation effects no longer exist. For instance, the electrodynamic potential A is no longer retarded by the time required for a light wave to propagate from point x to x′; i.e. the concepts of time, distance, and propagation speed are not applicable to single-entity systems. Thus, such a system is no longer subject to Einstein's theory of SR. It is most remarkable that nature has realised such single-entity systems.
These relatively simple fundamental principles, nevertheless, have produced far-reaching and surprising consequences for physical theories in general and also for the type and topology of admissible universes as will be discussed in greater detail in the next section.
4.3 Cosmic Principles and Gravitation
In particular, the field equations of GR may be directly deduced from the duality principle in conjunction with the assumption that the total energy of the universe is EU ≡ 0 at all times, conserving total energy.
In the following, several additional physical constraints resulting from the fundamental principles are elucidated.
In 1948, Feynman published a third formulation of quantum mechanics based on the concept of the propagator D (or K). His approach has the great advantage that the interaction of two particles (calculating the scattering cross section) can be pictorially described by an infinite set of diagrams (attributed to an infinite series obtained by an iteration process) that became known as Feynman diagrams (a modern discussion, for instance, is given by A. Zee ). Nothing can be said about the convergence of this infinite series in general, nor should it be taken for granted that each term in the series has a physical meaning. As it turns out, the recent works of Z. Bern ,  and A. Zee (Part N in ) show that this is actually not the case!
In certain physical situations, the principle of finite phenomena may be at odds with the general picture of Feynman diagrams, because these diagrams rely on the generation of an infinite number of virtual particles, e.g. coming from loops within loops, etc. Although it is claimed that Feynman's approach has led to the most accurate computation in physics, viz. the calculation of the magnetic moment μe of the electron, this picture may be incorrect in certain situations. There are numerous physical situations – as was clearly revealed recently by Z. Bern and others – where this approach fails completely and leads to divergent integrals for the probability amplitude. In particular, this is true in the process of the quantisation of linearised gravity assuming a flat background Minkowski metric, gμν = ημν + hμν,
where h describes the (small) deviation from the chosen background metric. This approach is fully divergent, and the problem seems to lie in the accumulation of an infinite set of (physically nonexisting) Feynman diagrams producing (nonexistent) off-shell or virtual particles (on-shell in this context denotes real physical particles of the SM).
A similar problem occurs in the calculation of the classical multipole radiation of order ℓ, for which the average radiation power (averaged over the surface of a sphere), valid for both magnetic and electric multipole radiation, scales as Pℓ ∝ (k r0)^2ℓ, where k is the wave number and r0 denotes the linear dimension of the radiator, e.g. an electron oscillating in an electromagnetic field of an atom. As there is no mathematical limit for ℓ, this expression would lead to Pℓ → ∞ for ℓ → ∞. In electrodynamics, this is not considered a problem, because, e.g. for light one can choose k = 10⁷ m⁻¹ and r0 = 10⁻¹⁰ m for an atom. Hence, electric quadrupole radiation (ℓ = 2, so-called forbidden spectral lines in atoms, but observed in the interstellar gas) is suppressed by the factor (k r0)² = 10⁻⁶ compared to dipole radiation (ℓ = 1). On the other hand, if an electron radius r0 = 10⁻²⁰ m (LHC limit) is assumed and ℓ = 10⁶, then k = 10²⁴ m⁻¹ is an admissible value and results in the factor (k r0)^2ℓ ≈ 10^(8 × 10⁶),
which obviously is a nonphysical value for the multipole radiation power Pℓ. Of course, such a value has never been observed, and it is physically totally meaningless, because the linear dimensions of the radiators have to be taken into account. In general, if these dimensions are not too large, only a few multipole fields are needed, which are orthogonal to the prevailing dipole fields. It can be expected that something similar holds true for Feynman diagrams. The mathematical solutions of the spherical wave equation, according to which any electromagnetic field can be expanded, form an infinite series. However, only a few terms of this series are utilised by nature because of the inherent length-scale limit of the radiators. Hence, nature does not produce any divergences, but the mathematical solution does, for it knows nothing about the limits of k, r0, or ℓ. In Feynman diagrams, it can be expected that the generation of virtual particles by virtual particles that were themselves generated by virtual particles, and so on, is suppressed in a similar way.
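The suppression factors quoted above can be verified with a few lines of arithmetic (our sketch; it works with base-10 logarithms to avoid floating-point overflow in the extreme case):

```python
import math

def log10_suppression(k, r0, ell):
    """Base-10 log of the multipole factor (k*r0)**(2*ell) relative
    to dipole radiation (ell = 1), i.e. log10((k*r0)**(2*(ell - 1)))."""
    return 2.0 * (ell - 1) * math.log10(k * r0)

# Atomic quadrupole line (k = 1e7 1/m, r0 = 1e-10 m, ell = 2):
print(log10_suppression(1e7, 1e-10, 2))     # ~ -6, i.e. a factor 1e-6

# Hypothetical point-like electron (k = 1e24, r0 = 1e-20, ell = 1e6):
print(log10_suppression(1e24, 1e-20, 1e6))  # ~ +8e6: absurdly large
```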
The principle of distinguishability means that only distinguishable physical systems can physically interact; e.g. only two particles that can be distinguished by experimental means can physically interact. This question is not as innocent as it may sound because the particles must be close enough together so that their probability density amplitudes overlap significantly – whatever that may mean in practice. An important consequence of this principle is that as soon as the wave functions of individual particles can no longer be separated, they form a single physical system, and the concept of locality is no longer applicable. Einstein’s SR is not valid for such a system. Consequently, physical interactions should diminish depending upon the extent of the overlap and eventually disappear. Particles must be identifiable (information content) as independent physical systems in order to be able to interact with each other.
A single physical system cannot interact with itself. For instance, this may be exemplified regarding the electron. It is incorrect to consider an electron as an entity comprising two separate parts, namely, a particle and a field at the same time. This means that formulating the electron Hamiltonian dynamics via the classical action describing two features – the action function for a free particle of rest mass m and a second action resulting from the charge q of the electron and its own field given by the four-vector A(x) – necessarily leads to a contradiction. In order to describe this particle-field system, the action of the free electromagnetic field, specified by the field tensor F, must also be added. The total action S can be regarded as a functional of the variables xμ and the field components Aμ. The field components are determined from the charge q of the moving electron acting as the source (RHS) jμ in the d'Alembert equation for the Aμ. However, the equations of motion resulting from this formulation are nonphysical because the motion of the electron is self-accelerated, producing an infinite self-acceleration over time. Although mathematical techniques in QED have been devised to avoid this kind of trouble, the problem lies in the nonphysical and logically contradictory assumption that a physical system, which is to be understood as a single entity, interacts with itself. It should be remembered that in quantum mechanics either the particle or the field picture is applicable, but not both at the same time. Hence, in the case of the electron, the formulation of the action S utilising three different terms is physically incorrect. Born and Infeld have tried to remedy this problem by adding nonlinear terms to the Lagrangian, but this comes at a cost of, for instance, a charge density ρ ∼ r⁻¹³.
By contrast, if the spatial extent of the wave functions of two physical systems is increased by changing external physical parameters so that their wave functions overlap (e.g. by lowering the temperature), and finally only a single indistinguishable system remains, then this resulting single entity no longer exhibits any physical interaction. Such a system is a single system: no information is available to identify two different parts, and it is incapable of physically interacting with itself.
This may lead to dramatic consequences that should be experimentally observable. The Compton (or de Broglie) wavelength of a particle is given by λC = h/(mc) and λdB = h/p, respectively. Two particles are distinguishable if their spatial separation is large compared to their wavelength. The de Broglie wavelength λp strongly depends on temperature T. Hence, a physical system may become indistinguishable at lower temperature. This fact will be exploited in an attempt to explain the systematically different measured values of Newton's constant GN, which will be dealt with in Part III. One class of experiments is performed at room temperature utilising torsion balances, whereas a second class of experiments uses ultracold 87Rb rubidium atoms (atomic number 37) at 1.8 mK. A further important consequence of the non–self-interaction principle may be the suppression of the generation of virtual particles in Feynman diagrams.
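A short numerical sketch (ours; it assumes the thermal de Broglie form λ = h/√(2π m kB T), the standard temperature-dependent wavelength) compares the two experimental regimes just mentioned:

```python
import math

H = 6.62607015e-34                    # Planck constant, J*s
K_B = 1.380649e-23                    # Boltzmann constant, J/K
M_RB87 = 86.909 * 1.66053906660e-27   # mass of a 87Rb atom, kg

def thermal_de_broglie(m, T):
    """Thermal de Broglie wavelength lambda = h / sqrt(2*pi*m*k_B*T)."""
    return H / math.sqrt(2.0 * math.pi * m * K_B * T)

# Room temperature vs. the 1.8 mK quoted for the cold-atom experiments:
print(thermal_de_broglie(M_RB87, 300.0))   # ~1e-11 m
print(thermal_de_broglie(M_RB87, 1.8e-3))  # hundreds of times larger
```

Since λ ∝ T^(−1/2), cooling by five orders of magnitude stretches the wavelength by a factor of about 400, which is the kind of overlap change invoked above.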
Physics without strings is roughly analogous to mathematics without complex numbers.
Edward Witten, October 1998, on the importance of number systems in physics.
If supersymmetry plays the role in physics that we suspect it does, then it is very likely to be discovered by the Large Hadron Collider (LHC), or its upgrades, at CERN in Geneva, Switzerland.
Edward Witten, November 20, 2012
5 Cosmic Symmetry Group
In today’s physics, the concept of the physical field, characterised by the associated symmetries that are described by mathematical groups, is playing a central role. However, quantum physics requires that fields, considered to be continuous manifolds, are to be quantised in order to produce the elementary particles that have been measured for decades in generations of accelerators or detected in astrophysical observations. There are numerous and diverse fields in physics describing all known physical interactions and their respective particles.20
5.1 Hermetry Forms
In the 1921 theory by Kaluza, it is necessary to construct a 5 × 5 metric tensor gμν including the components A = (Aμ) of the electrodynamic potential in order to unify gravitation and electrodynamics (although this approach was later proved to be incorrect by Jordan in 1949, see Part I). That is, at each point in spacetime a circle is attached (i.e. the fourth space dimension is curled up and thus not directly visible) but is considered a real space dimension. In string theory, six (or more) curled-up space dimensions are added, leading to a 10-dimensional metric tensor aimed at describing gravitation, electrodynamics, and the strong and weak forces, as well as quarks and leptons (matter). Such a metric is termed monometric. However, as was shown in the previous section, this approach is not supported by experiment.
The alternative physical model presented in this article, termed extended Heim theory or EHT (not to be confused with the theory of the late B. Heim, see our remark in Part I), employs an internal gauge space of eight dimensions, termed Heim space, which gives rise to a polymetric tensor comprising a set of 15 (hence poly) metric subtensors represented by 4 × 4 matrices. A metric subtensor ( Chapter 5.2) is called a hermetry form. The term Hℓ specifies the selection of internal coordinates ξa out of which the specific hermetry form is to be constructed. Hermetry is a combination of the two Greek words hermeneutics and geometry, denoting metric subtensors that have physical meaning, a term coined by B. Heim. It should be noted, however, that hermetry forms are obtained from the internal geometry of Heim space, which is the gauge space of EHT. Each hermetry form stands for a family of particles or fields, and by utilising the complete set of hermetry forms, a classification scheme for all physical interactions and particles is obtained. Hermetry forms are associated with groups as discussed in Section 5.4 (see also  Chapter 9.2 and , ). Employing the field of hypercomplex numbers (also known as quaternions) results in the so-called cosmic group O(32, ℍ) (see below). As one of the major consequences of this group, a total of six gravitational bosons is predicted. The cosmological gravitational fields as described by GR – generated by static mass and by moving mass (the latter analogous to electromagnetism) – and their interaction with matter as well as DE are (only formally) characterised by three gravitational bosons. In addition, the postulated interaction of electromagnetism and gravity (based on the phenomenon of symmetry breaking) predicts the existence of extreme gravitomagnetic fields or hypercomplex-gravity fields, which may be generated in the laboratory, similar to the generation of magnetic fields.
Hence, the source of these fields is entirely different from GR; that is, these fields cannot be due to the presence of large static or moving masses.
The eight-dimensional gauge space is a fibre bundle attached to each four-dimensional spacetime point x (external space) and comprises four subspaces that determine the associated symmetries and hence also the group structure. Internal coordinates are used to construct the set of 15 internal metric tensors – in analogy to GR. Hermetry forms describe physical properties of the respective particle families. According to EHT, the set of hermetry forms is meant to account for the total particle inventory of the cosmos. Obviously, this approach is different from the so-called geometrisation of physics that solely relies on spacetime geometry.
The four subspaces of Heim space are as follows:
a subspace with three internal coordinates describing matter (i.e. only those hermetry forms that contain at least one of these coordinates describe material particles),
a subspace with coordinate ξ4 for energy,
S² with the entelechial and aeonic dimensions, representing organisation, with coordinates ξ5, ξ6, respectively,
a subspace with coordinates that enable the exchange of information among physical systems, expressed by Szilárd's energy principle, (2).
The concept of Heim space with its subspace structure is reflected in the mathematical group structure of EHT as discussed in this section. In particular, the entelechial dimension, ξ5, can be interpreted as a measure of the quality of time varying organisational structures (inverse to entropy, e.g. formation of complex molecules or plant growth), while the aeonic dimension, ξ6, tends to steer these structures towards a dynamically stable state (an aeon denotes an indefinitely long yet finite period of time). From these four subspaces, the 15 hermetry forms (metric subtensors) are obtained, as outlined in detail in Section 5.2 of .
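The count of 15 hermetry forms is consistent with the number of nonempty selections from the four subspaces, 2⁴ − 1 = 15. The following sketch (our illustration of the counting only, not of the actual metric construction in the cited reference) enumerates them:

```python
from itertools import combinations

# The four subspaces of the internal gauge space; the labels are our
# shorthand for the roles described in the text.
subspaces = ["matter", "energy", "organisation", "information"]

forms = [c for r in range(1, len(subspaces) + 1)
         for c in combinations(subspaces, r)]
print(len(forms))  # 15 nonempty combinations = 2**4 - 1
```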
This approach provides a relationship between geometry and physics (in some ways as foreseen by Einstein), but the major difference is that hermetry forms are living in an internal gauge space and not in spacetime. The geometrisation of physics, i.e. reducing physics to pure external geometry as pursued by Einstein and Wheeler, does not seem to be achievable.
As the universe cannot assume a static state (which would be unstable), the current expansion of the universe eventually must reach a turning point, reversing its current mode from expansion to contraction.
In the subsequent discussion, the concept of hermetry forms is no longer needed, once the group structures have been established for both spacetime and matter, which follow from the subspace structure of .
5.2 Extra Number Systems versus Extra Space Dimensions
An extensive discussion concerning the physical likelihood of extra space dimensions was presented in Part I (cf. Figure 1). Substantial experimental evidence recently derived from diverse areas includes the validity of Newton's law, the dipole moment of the electron, and, in particular, the latest LHC results (2018), which did not find any hint of the existence of these dimensions. On the contrary, based on these experimental data, it seems more likely that extra dimensions do not exist.
With the advent of gravitational wave astronomy by the LIGO and Virgo Scientific Collaboration, numerous studies have been performed to detect signatures of extra dimensions in gravitational waves, most recently by Andriot and Gómez . From a logical point of view, it is clear that if these dimensions exist they must have some kind of impact on the amplitude or frequency, etc., of the observed gravitational wave pattern, and that should lead to a deviation from the predictions of Einstein’s GR – at least theoretically.
If these deviations existed, it appears implausible that they would manifest only in a modified gravitational wave pattern but not, for instance, in the form of a modified Newtonian law. Extra dimensions clearly must have an influence on the energy that leaks into the higher dimensions, and any energy leakage should be detectable at the length scale of these higher dimensions, as predicted by Arkani-Hamed et al. in 1998. However, as is evident from Figure 2 (see lower figure, updated from Part I), there is now firm experimental evidence that extra spatial dimensions cannot exist. As a direct consequence, the idea of SUSY appears no longer tenable, and an alternative concept has to replace it. In EHT, as was stressed in Part I and also in this article, the novel concept of extra systems of numbers is introduced, and as a result, physical entities may be described by groups utilising the field of hypercomplex (or quaternionic, ℍ) numbers. This would also give additional meaning to the equation for the Planck mass (4), which now can have hypercomplex mass values. In analogy to the concept of quark flavour, the novel concept of mass flavour is introduced.
For m ∈ ℍ, there are the following five solutions of (4), namely +m, −m, i m, j m, and k m,
where it is assumed that negative mass represents DM, and hypercomplex masses i m, j m, k m may be generated that belong to the category of virtual particles, which are always created in pairs, e.g. +i m and −i m, etc. These particles may be produced during a reaction, acting as a kind of catalyst, but owing to their ephemeral nature, they are off-shell.
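A toy calculation (entirely our own illustration, not a result of the theory) makes the algebra of mass flavours concrete: representing masses as quaternions, the hypercomplex flavours square to −m², unlike the real flavours ±m, which square to +m².

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (1, i, j, k) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

m = 2.176e-8  # roughly the Planck mass in kg, for illustration only
flavours = {
    "+m": (m, 0.0, 0.0, 0.0), "-m": (-m, 0.0, 0.0, 0.0),
    "im": (0.0, m, 0.0, 0.0), "jm": (0.0, 0.0, m, 0.0),
    "km": (0.0, 0.0, 0.0, m),
}

# Each hypercomplex flavour squares to -m**2, marking it as off-shell
# in this picture, while +-m square to +m**2.
for name in ("im", "jm", "km"):
    print(name, qmul(flavours[name], flavours[name])[0])
```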
The Planck satellite data revealed that 68.3 % of the energy content of the universe is in the form of DE. Extended Heim theory describes DE as the nonluminous precursor of matter, and as such it is not contained in the elementary particle scheme. The 4.9 % of OM (visible matter) residing in our de Sitter spacetime dS1,3 is composed of positive mass m > 0, whereas the 26.8 % of DM has negative mass m < 0 (fourth particle family, Fig. 4) and is located in dual spacetime DdS1,3; thus, these particles are not directly detectable by experiment. Masses i m, j m, and k m are denoted as hypercomplex or quaternionic masses19 (i.e. virtual particles that are predicted to be generated in the conversion process from electromagnetism to hypercomplex gravity). Furthermore, if Higgs boson(s) are composite particles , then hypercomplex particles might play a role in providing the building blocks for Higgs particles, similar to how quarks are confined in order to constitute baryonic matter.
In 2005, P. Mannheim  published a comprehensive review of conformal gravity (CG), a proposed alternative theory of gravitation to GR, in which he claimed that CG reproduces galactic star rotation curves without DM and DE, also mimicking the observed accelerated expansion of the universe. As we now know, DM does exist, as is revealed by the collisions of galaxy clusters, e.g. the Bullet Cluster (see figures 9.18–9.20 in , ), which clearly illustrate the DM distribution; these claims therefore must be incorrect. Hence, CG cannot be a viable alternative to Einstein’s GR.
5.3 Lagrangians, Symmetries, and Groups
The question naturally arises whether the cosmic principles formulated in Section 4 can be quantified sufficiently to provide the guidelines for constructing the mathematical groups representing the complete inventory of the physical particles comprising all types of matter existing in the universe. Of course, one could select a group, for instance SU(5) or SO(10) (unitary unification groups are needed to conserve probability density), and based on the number of their generators (n² − 1 for the SU(n) group and n(n − 1)/2 for the SO(n) group), one could try to identify the proper particles. However, this approach has been tried for decades and has not proved successful. A modified procedure will be presented below in an attempt to describe the (complete) particle inventory of the universe including the spacetime lattice itself. One of the results of this approach is the prediction of additional hypercomplex-gravity fields, mediated by two additional gravitational bosons of spin 1 and one spin-0 boson.
M. Faraday (1791–1867), one of the most influential experimental scientists in the field of electrodynamics in the 19th century, was the first to introduce the concept of the electric field E and the magnetic field B. The field concept turned out to be instrumental in the formulation of electrodynamics by James Clerk Maxwell in 1864. Several decades later, in Einstein’s special and general theory of relativity, Faraday’s field concept was extended to spacetime itself (also termed the vacuum), which is described by the metric tensor gμν. In the special theory of relativity, one uses the flat metric ημν = diag(1, −1, −1, −1) with signature (+, −, −, −). From the constancy of the speed of light in vacuum c in all inertial systems S follow Einstein’s famous equation E = mc² and the invariance of the spacetime interval under the so-called Lorentz transformations x′ = Λx + a, where Λ is a real 4 × 4 matrix and a is a four-dimensional translation vector. The laws of nature are invariant under the noncompact Poincaré group of transformations. Today, physics is described by various fields and their symmetries.
To this end, the Lagrangian L (this formulation also holds for GR, where the field is given by the metric tensor gμν) is used to characterise the phenomena of nature, where the variables of the Lagrangian are the field ϕ and its first derivative ∂μϕ (to be considered as two different calculus variables x and y). The optimisation principle employed by nature is expressed by setting the variation δS = 0, with the action given by S = ∫ L d⁴x. This approach leads to the corresponding equations of motion, the Euler–Lagrange equations. There is, however, a substantial variety of fields ϕ, which may be real or complex (charged) scalar fields (or pseudoscalar with respect to the Lorentz transformation Λ) of spin 0, which satisfy the Schrödinger equation (obtained by replacing observables with operators, i.e. E → iℏ∂/∂t and p → −iℏ∇) or the Klein–Gordon equation, or the scalar field of the Higgs boson. The well-known electrodynamic potential Aμ represents the photon of spin 1, whereas the metric tensor gμν, assumed to describe the hypothetical graviton, is of spin 2. The particles of matter, fermions, are represented by the Dirac (Weyl) four-spinor ψ and possess half-integer spin (1/2 for the electron, etc., or 3/2).
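As a reminder of the standard variational formalism referred to above (textbook material, not specific to EHT), the principle of stationary action reads:

```latex
S[\phi] = \int \mathcal{L}\bigl(\phi,\, \partial_{\mu}\phi\bigr)\, \mathrm{d}^{4}x ,
\qquad
\delta S = 0
\;\Longrightarrow\;
\partial_{\mu}\,\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}
- \frac{\partial \mathcal{L}}{\partial \phi} = 0 .
```

For each field ϕ appearing in L, this yields one Euler–Lagrange equation of motion.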
This means that physics is represented by a collection of fields that are parametrised by points x in four-dimensional spacetime. However, there is a subtle difference among the fields, because the spacetime manifold, described by the collection of points x, is used to parametrise all other fields. In Part I , it was shown that there is strong evidence from the ESA Integral satellite measurements that spacetime may become discrete only at a length scale of about 10⁻⁵⁰ m, many orders of magnitude smaller than even the Planck length. The mass scale associated with the heaviest known particle, the top quark, ends at some 173 GeV/c², which corresponds to a length of about 10⁻¹⁸ m. In other words, material phenomena only see a spacetime continuum.
The most successful (and fully confirmed) theory of physics, the SM of particle physics, is characterised by the group SU(3) ⊗ SU(2) ⊗ U(1), where the factors stand for the strong (colour), weak, and electromagnetic interactions, respectively. These groups possess eight, three, and one generators, respectively, and hence the counting of mediator bosons (exchange particles of the physical forces) requires that there exist eight gluons g for the strong force, three vector bosons (W+, W−, Z0) for the weak interaction, and the photon γ for the electromagnetic force.
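The generator counting above can be verified with a short script (the helper name dim_su is ours, introduced for illustration):

```python
# Generator counting for the Standard Model gauge group SU(3) x SU(2) x U(1).
# dim SU(n) = n^2 - 1; U(1) has a single generator.

def dim_su(n: int) -> int:
    """Number of generators of SU(n)."""
    return n * n - 1

gluons = dim_su(3)   # strong interaction: 8 gluons
weak = dim_su(2)     # weak interaction: W+, W-, Z0
photon = 1           # electromagnetism: the photon

print(gluons, weak, photon)    # 8 3 1
print(gluons + weak + photon)  # 12 mediator bosons in total
```

The total of twelve gauge bosons matches the particle content of the SM before gravity is considered.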
This theory is incomplete because gravity is not accounted for, and it also requires a set of 26 free parameters (if neutrinos with rest mass are included), which is not satisfactory. Furthermore, neither DM nor DE, nor the Higgs boson(s), are included in this group. In order to include the novel hypercomplex-gravity fields proposed by EHT, the group structure needs to be extended further.
In contrast, SUSY describes gravity exclusively by a spin-2 boson, termed the graviton, and other gravitational fields are unknown (except for the postulated but undetected gravitino of spin 3/2). Supersymmetry, first proposed in the 1970s, is an extension of the group underlying the SM and is one of the basic requirements of string theory, which aims at unifying gravity with the elementary particle forces. Supersymmetry tries to unify all material particles (i.e. fermions) with the mediators of the physical forces (bosons). This is not in accordance with the duality principle and thus – at least under EHT – is not a possible symmetry. In other words, the principle of duality does not permit SUSY. Therefore, the prediction of EHT is that supersymmetric partners cannot be found in our spacetime, and any attempt to produce these particles in even the most powerful accelerators will be in vain. An alternative must be sought.
The most widely known and highly developed theory beyond the SM is superstring theory, whose early roots date back to an accidental finding in 1968 by G. Veneziano concerning the Euler beta function. Force carriers (bosons) and matter particles (fermions) are quantum fluctuations of the relativistic string. The most impressive feature of string theory is that it naturally predicts the existence of a spin-2 boson (assuming such a boson exists at all) as one of the quantum fluctuation modes of a heterotic string. Identifying this boson with the graviton, superstring theory unifies gravity with the three other elementary interactions. However, as was shown in the preceding sections, there is substantial experimental evidence against this concept. As will be shown below, there may exist extreme gravitomagnetic fields (hypercomplex-gravity fields) resulting from an interaction of electromagnetism and gravity (as predicted by EHT). Three additional gravitational particles (SU(2)) are required for these fields, namely, the spin-1 graviton, the gravitophoton of spin 1, and the spin-0 quintessence particle (see the discussion on groups in Section 5.4). Hence, any further experiments (see Part I) reporting on the generation of these gravity-like fields (or hypercomplex-gravity fields) would be further experimental evidence contradicting string theory.
The novel idea chosen in EHT employs extra systems of numbers instead of extra spatial dimensions. This idea is a straightforward extension of the systems of numbers already used in physics, namely, the fields of real numbers ℝ and complex numbers ℂ. The next system of numbers is the hypercomplex numbers (or quaternions) ℍ, and thus, in the next section, this system of numbers will be used in conjunction with orthogonal groups.
It should be noted that there is a homomorphism for hypercomplex groups (a mapping between hypercomplex groups and rotations), which results in a dynamical symmetry. This means that when the original group O(n, q) is broken, its rank n is retained, independent of how the symmetry group is broken. Therefore, the number of Casimir operators of the original group remains invariant regardless of the symmetry breaking. That is, the number of physical quantities (or quantum numbers characterising the physical system) that can be simultaneously measured does not change owing to symmetry breaking. As a result, the system will have n good quantum numbers (conserved physical quantities) that can be measured simultaneously together with the energy. These observables characterise the degeneracy of the energy eigenvalues of the system and determine the dimension of the multiplet (the total number of states that have almost the same energy eigenvalue or mass). For instance, the group SU(3) has eight generators (the Gell-Mann matrices), two of which commute. A multiplet comprising three quarks with total spin J = 1/2 and baryon number B = 1 (two of the three quark spins antiparallel) characterises the energy of this state. Associated with this degenerate energy value is a multiplet of eight particles (octet), as depicted in the plane formed by the projection of the isospin number I3 and the hypercharge Y; it comprises, among others, the particles n, p, and Λ, the latter being an isospin singlet with I = 0. For J = 3/2 and B = 1 (three quark spins parallel), the multiplet is a decuplet. For the group SU(2), which is of rank 1, the number of multiplet states for a given angular momentum J is 2J + 1.
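The multiplet dimensions quoted above follow from the standard SU(3) dimension formula for an irreducible representation with Dynkin labels (p, q); the function names below are ours, introduced for illustration:

```python
def dim_su3(p: int, q: int) -> int:
    """Dimension of the SU(3) irreducible representation (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def dim_su2(two_j: int) -> int:
    """Number of multiplet states 2J + 1 for SU(2), with two_j = 2J."""
    return two_j + 1

print(dim_su3(1, 1))  # 8  -> baryon octet   (J = 1/2)
print(dim_su3(3, 0))  # 10 -> baryon decuplet (J = 3/2)
print(dim_su2(1))     # 2  -> doublet for J = 1/2
```

The octet and decuplet dimensions reproduce exactly the baryon multiplets discussed in the text.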
5.4 Cosmic Symmetry Group from Hypercomplex Numbers
If symmetry is the key to the cosmos, then this question naturally arises: What is the group structure of our cosmos? Is there a group describing the entire cosmos, i.e. both the stage (external spaces) and the actors (particles and fields)?
In EHT, the cosmic group has to contain both the de Sitter spacetime and the dual spacetime. In addition, all types of matter (fermions and bosons), as well as the associated internal gauge space, have to be accounted for. The cosmic group thus stands for a special symmetry between space and matter. The presentation that follows serves to demonstrate certain aspects of the nature of physical phenomena underlying the cosmos.
The cosmic group, where ℍ denotes the set of hypercomplex numbers (see the more detailed discussion in Section 5.5.3 of ), is considered the fundamental group for everything existing in the cosmos, i.e. both spacetime (stage) and matter (actors). However, the cosmic group should not be taken too literally as a group, as it is immediately broken into a sufficient number of subgroups (32 = 2⁵ allows one to build a group hierarchy with a sufficient number of subgroup levels) to arrive at the physics supported by the experimental facts. The anticommutative algebra of hypercomplex numbers is employed, as it seems to reflect the fundamental symmetry structure of nature. This includes (normal) gravitation in the form of Einsteinian gravitation, which may be only one aspect of the entire range of gravitational phenomena, as well as the existence of extreme gravitomagnetic (or hypercomplex) fields.
Next, the symmetry of the cosmic group has to be split into two groups (duality principle) in order to create the two separate entities of space and matter. Therefore, it is immediately obvious that matter cannot be derived from geometry alone. In the cosmic model of EHT, DE and the spacetime lattice are generated simultaneously owing to the duality principle (see above), and the total energy EU = 0 (see Chapter 9 of ).
However, if the cosmic group is to represent the overall symmetry of the cosmos, a sequence of symmetry breaking processes is needed to produce the multitude of different physical phenomena observed in the cosmos. To this end, both spacetime and matter have to be initially created. Hence, symmetry needs to be broken further in order to obtain both the structure of matter, i.e. the families of particles, in conjunction with DE from which all material entities are derived and spacetime itself. Finally, a mechanism for the dynamics of the cosmos, that is, the expansion of spacetime (creation of motion), has to be provided.
This symmetry has to be broken if the cosmos is to unfold. To this end, the group is assumed to have been broken down into
where subscript S denotes any type of space, i.e. both external and internal spaces, and M stands for all types of matter (fermions and bosons), i.e. OM and nonordinary matter (NOM). The cosmic group is thus separated (by spontaneous symmetry breaking) into two subgroups whose symmetries are at the roots of spacetime and matter. The cosmic group is the fundamental group for everything existing in the cosmos, i.e. both spacetime (in a very general sense) and all types of matter, including the novel extreme gravitomagnetic fields or hypercomplex fields.
The cosmos comprises both external and internal space (duality principle). Therefore, for the description of the cosmic group, we start with the notion of the general space and its associated group , which mathematically is assumed to describe all aspects of space in the universe.
The following symmetry breaking should not be taken literally, as the group will be used to describe the symmetry of the 15 hermetry families plus hermetry form 16 (see below). In order to proceed with an unfolding physical cosmos, the symmetry of the space group needs to be broken further in order to obtain the structure of matter (i.e. the families of particles), as well as spacetime itself, and to provide the means for an expanding spacetime.
Because of the duality principle (i.e. the existence of both external and internal space), the symmetry of this group is necessarily broken as
where group describes the symmetries of the external space and is the above-cited Heim group (Section 4). The group itself is broken into two groups denoted by
which represents a cosmos with spacetime, but without matter. In EHT, spacetime is a lattice of elemental surfaces with spin. Two of its four generators stand for the two different atoms of spacetime (exo-spin and endo-spin atoms, see Fig. 5), which are distinguished by their orientation. The other two generators constitute the two particles of the semimaterial DE field, one attractive to matter and repulsive for spacetime (positive energy density) and one repulsive to matter and attractive to spacetime (negative energy density) (for a more detailed discussion of the Ising model and DE, see Section 9.7.2 of ).
Dark energy is considered the precursor of matter, from which all material structures are ultimately derived. This group was broken (using set theory) into four subgroups, because in EHT only these four physical entities exist. At this stage of the cosmic evolution, none of the Higgs fields is present, i.e. the corresponding group has not yet been activated; therefore, neither matter (in any form) nor inertia should exist.
From cosmological observations, it is known that inside a galaxy an acceleration of about 1.2 × 10⁻¹⁰ m/s² pointing towards the galactic centre must be obtained, which seems to be a global value for all galaxies. If DE is responsible for this acceleration, then the value of Λ inside a galaxy must change, and Einstein’s cosmological constant must be promoted to a cosmological function. Obviously, this can only be caused by the presence of the second field, the DE field (possessing negative energy density and contracting spacetime), whose interaction with visible matter inside the galaxy should lead to a repulsive gravitational force, substantially reducing the number of particles inside a galaxy. The only difference between intergalactic space and the space inside galaxies is the density of visible matter, the density inside galaxies being about 10⁷ times larger. For the two different types of DE, there are two corresponding cosmological constants that, in accordance with Einstein’s cosmological constant Λ, are denoted correspondingly (one labelled negative, for it causes spacetime to expand but possesses positive energy density, and one labelled positive, for it causes spacetime to contract but represents negative energy density). In general, there should be a complete symmetry between the two, that is, Einstein’s cosmological constant Λ should be zero – in a universe without matter. Because of the (small) presence of matter in intergalactic space, this symmetry is broken and Λ > 0, resulting in the current era. As a consequence of the presence of large amounts of visible matter inside galaxies (compared to intergalactic space), the value of Λ is supposed to change drastically.
In the present era, an expansion of spacetime is observed. Note that the expansion, or the arrow of time, is similar to symmetry breaking, as a preferred direction exists on the cosmic time scale and thus might be responsible for special physical phenomena. Therefore, Einstein’s cosmological constant should be a function of time. The currently observed value of Einstein’s cosmological parameter is Λ ≈ 1.1 × 10⁻⁵² 1/m², which is deemed to be responsible for spacetime expansion. It should be noted that for the pressure of the vacuum (intergalactic space) the equation

p = −ρΛ c² = −(c⁴/8πG) Λ

holds. That is, the current value of Λ exerts a negative pressure and thus causes an expansion of the spacetime lattice (universe). No singularities should exist (one of the fundamental principles of EHT); therefore, Λ(t) will have to change sign owing to the expansion process itself once the universe has reached a certain size. As the energy density of repulsive (regarding spacetime) DE is counted positive, visible matter and negative DE attract each other. For attractive (regarding spacetime) DE, the opposite effect occurs; that is, visible matter and positive DE repel each other. According to the EHT particle classification scheme, the boson mediating that force is termed the quintessence particle, νq.
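A quick numerical check of the vacuum pressure may be useful; the value of Λ used here is the commonly quoted observational one and is an assumption on our part:

```python
from math import pi

c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2
Lam = 1.1e-52  # cosmological constant, 1/m^2 (assumed observed value)

rho_vac = c**4 * Lam / (8 * pi * G)  # vacuum energy density, J/m^3
p_vac = -rho_vac                     # vacuum pressure (w = -1), Pa

print(f"{p_vac:.2e} Pa")  # roughly -5e-10 Pa, a tiny negative pressure
```

The resulting energy density of a few 10⁻¹⁰ J/m³ agrees with the standard figure for the present-day dark-energy density.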
The two different particles of DE, one attractive to matter and one repulsive to matter, each producing a substantial gravitational interaction by itself, almost cancel each other in intergalactic space, and thus the resulting Λ is extremely small. However, this does not mean that the two DE particles combined into a single, only slightly repulsive DE particle. If this were the case, the MOND hypothesis could not be explained (see below). A further important consequence is that a postulated symmetry group SO(10) cannot be split into SO(8) ⊗ SO(2), where SO(2) is meant to describe DE. As DE is assumed to be described by two different particles (otherwise, no polarisation effect is possible), this group cannot be correct, as SO(2) has only one generator. In addition, the concept of a 10-dimensional space in which our four-dimensional spacetime is embedded is not needed in GR, because curvature is an intrinsic feature of spacetime. Moreover, the existence of a symmetry group SO(10) following from the concept of a 10-dimensional embedding space does not seem possible, as such a group is no longer tenable owing to recent ACME results (see above). The Einstein value of Λ is valid only in intergalactic space, where matter density is very low and any polarisation effect owing to the presence of visible matter can be neglected; this quasi-equilibrium is changed in the presence of visible matter, i.e. inside galaxies. Of course, the volume of all galaxies in the cosmos compared to the volume of the cosmos itself is negligible. Thus, Einstein’s cosmological constant Λ is valid on the cosmological scale, but not (locally) on the galactic scale.
It is postulated that Λ substantially changes inside galaxies because of gravitational polarisation. This weakens the value of Λ and thus causes an acceleration towards the galactic centre. Moreover, Λ can also be changed by the local presence of extreme gravitomagnetic fields, and therefore Λ is, in general, a local quantity.
The effect of the spacetime-contracting DE component, which is repulsive with regard to the visible matter inside a galaxy (because of its negative energy density), is largely neutralised inside a galaxy. Therefore, inside the galaxy, only the attractive gravitational effect on OM remains. The reason this component is neutralised inside a galaxy is that a galaxy contains a large amount of OM, i.e. both visible matter inside the galaxy and DM in the halo (about 80 % of the matter resides in the halo). The surplus of the spacetime-expanding component cannot cause an expansion of spacetime, because this is prevented by the Newtonian gravitation of the visible mass inside the galaxy.
In order to provide an estimate of the magnitude of the MOND acceleration (here we consider only the observed measurements), we resort to several hand-waving arguments that are not meant to amount to a real derivation. The acceleration due to the DE field is given by the well-known equation
where r is the distance from the centre of the galaxy. It is assumed that this equation is valid up to the finite distance RU. Assuming that the polarisation effect generates an opposite acceleration outside the galaxy (consider a simplified universe filled with DE particles but containing only a single galaxy), the MOND acceleration a0 can be calculated as the acceleration averaged over the entire universe (the radius of the galaxy being negligible), and one obtains
The numerical value is given by
where the minus sign was chosen to indicate an acceleration towards the galactic centre, RU denotes the Hubble radius, tU ≈ 1.38 × 10¹⁰ years is the age of the universe, and the amount of DE depends on the age of the universe.
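As a sanity check on the order of magnitude, one may use the well-known numerical coincidence a0 ≈ cH0/(2π); this is not the authors' derivation, merely an independent estimate under an assumed value of the Hubble constant:

```python
from math import pi

c = 2.998e8    # speed of light, m/s
H0 = 2.27e-18  # Hubble constant, 1/s (about 70 km/s/Mpc, assumed value)

a0 = c * H0 / (2 * pi)    # MOND-like acceleration scale, m/s^2
print(f"{a0:.2e} m/s^2")  # about 1.1e-10 m/s^2, close to the observed 1.2e-10
```

The estimate lands within about ten percent of the empirical MOND acceleration, consistent with the averaging argument given in the text.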
There is presently a controversy about the validity of MOND. The research group of S. McGaugh published an article  on November 13, 2018, in which the MOND hypothesis with a0 ≈ 1.2 × 10⁻¹⁰ m/s² was confirmed, contradicting the article by Rodrigues et al. published on June 18, 2018, which arrived at the opposite result. In their reply to McGaugh’s article , , , Rodrigues et al. upheld their conclusions, questioning the statistical approach of McGaugh et al.
This means that in the long run expansion needs to slow down and a minimal value has to be assumed. Perhaps at this point in time, quantum fluctuations can reverse the sign of , and contraction is initiated.
The coupling factor J in the Ising model (Fig. 5) generally depends on temperature T but may also depend on position (i.e. on the indices i, j, k). The number of atoms of space N(t) (sometimes also called spacetime atoms) depends on time (i.e. on the diameter D(t) of the universe). In the beginning phase of the evolution of the universe (as soon as the concept of temperature T could be employed, which is a macroscopic physical parameter), the interaction energy becomes a function of T, and as T = T(t), the interaction energy depends on the age of the universe itself. It is assumed that there is a critical temperature Tcr at which the interaction changes its character. At this particular cosmic time, ttp, the expansion of the universe reaches its turning point, and the production rate of DE goes to 0 (contraction will set in). Although the value of the critical temperature Tcr is not known, it appears logical to identify the temperature T with the temperature of the cosmic background radiation, currently at 2.7 K, which is regarded as some kind of universal cosmic temperature. If this symmetry breaking is comparable to a Bose–Einstein condensate, then below some critical temperature symmetry breaking might set in, allowing us to speculate on the lifetime of the present universe. As was mentioned in the text, the grid spacing Δs (excluding pathological metrons that are extremely stretched; otherwise, one cannot use the concept of grid spacing) of the space lattice may be much smaller than the Planck length ℓPl, and thus the sum of discrete spins may be replaced by a continuous variable – a scalar bosonic field ϕ, as will be discussed in more detail in Part III (see Section 9.7.2 in ). Hence, the application of Einstein’s field equations to a universe that contains only DE, characterised by a cosmological parameter, and its geometry would be justified at practically all stages in the evolution of the universe, that is, after inflation.
The group representing the actual stage(s) for physical events is broken into
where one of the two subgroups represents the dual spacetime and the other the Minkowski spacetime. In this case, the no-go Coleman–Mandula theorem (see M. Kaku ) does not apply, as the parameters of the Lie group are hypercomplex numbers, which are anticommutative. Our (local) physical spacetime (Minkowski space) is characterised by the symmetry group of the Lorentz transformations. It is assumed that the two spaces (the latter of which is the location of negative-mass DM) have the same spatial coordinates (spatial entanglement) and differ only in their time coordinates. Therefore, these two intertwined (entangled) spaces can be considered to comprise a five-dimensional de Sitter space. According to J. Maldacena ,  (see also S. Carroll ), a five-dimensional anti-de Sitter space (AdS) (see A. Zee, p. 662 ) with gravitation is equivalent to a flat four-dimensional QFT without gravitation. Everything that can happen physically in the five-dimensional AdS with gravitation is exactly analogous to a theory in the four-dimensional flat space without gravitation.
According to EHT, DM resides in dual space and hence is principally unobservable in , i.e. only the gravitational effect of DM is active in our spacetime.
The symmetries of matter and physical interactions are governed by the Heim group. With respect to the Heim group, one should remember that, as shown in (11), it is broken into the four subspaces
where for the moment only the symmetry group of the hermetry forms is considered. The 15 generators of this group denote all possible particle families (not individual particles), whereas the individual particles are described by a further group, which consequently is broken into the two groups of fermions and bosons
where the two groups denote the well-known symmetry groups for the bosons and fermions, respectively. In EHT, no supersymmetric particles exist. The physical meaning of these groups was already specified by (12) and (13). The group with its 12 generators denotes the set of 12 charges or Higgs fields that exist in physics; that is, there should be three Higgs bosons with masses m1 ≈ 125 GeV/c², m2, and m3, where m2 and m3 are supposed to be hypercomplex masses that are deemed responsible for the existence of extreme gravitomagnetic (or hypercomplex) fields that may have been measured by Tajmar and in the proposed Heim experiment, respectively. In addition, there are organisational fields o1, o2 that are supposed to be the cause of the emergence of higher organisational structures. This is an expression describing the capability of nature to produce evolving organisational structures, enabling elementary particles to eventually form complex organic molecules. Furthermore, there are four quark charges that lead to a total of 15 gluons. Charges 11 and 12 are the two weak charges (or isospin charges of fermions with values ±1/2). At the end of these three stages of symmetry breaking, four groups are obtained, denoted by
that describe, respectively, the stage(s) of the cosmos (the external spacetime as well as the associated internal Heim space) and the organisation of matter by the hermetry forms describing families of particles, from which bosonic and fermionic particles should eventually be obtained.
In the following, the meaning of the four subgroups will be explained further. The first group contains all generators that stand for a cosmos without matter, which means both the atoms of spacetime and the two DE particles. Consequently, this group is further broken down as
where the (primeval) subgroup has four generators, two of them for the atoms of spacetime (that is, the exospin and endospin atoms of space) and two for the two particles of DE. The subgroup structure contains the generators for a generalised spacetime encompassing our spacetime and a dual de Sitter space with imaginary speed of light ic and imaginary time −it. This dual space is supposed to contain the long-sought – up to now in vain – DM in the form of negative mass (for details, see Chapter 9.2 of ).
The Heim group provides all existing particles and fields. The symmetry of the Heim group is broken (according to our set theory algorithm) into a 3, 2, 2, 1 subgroup structure
Here, we are interested in the group of hermetry forms with its 15 generators that provide all families of particles. In the next step, the individual particles of these families and their mutual interactions are to be established. That is, the corresponding groups are meant to describe all individual bosons and fermions.
In present physics, NOM does not exist. The eight hermetry forms of the outer cube comprise the DM neutrino νχ, the DM particle χ, the three gravitational bosons from hypercomplex gravity, as well as the imaginary quark qI, the imaginary gluons gI, and the imaginary photon γI. A hermetry form stands for a family of particles represented by its proper symmetry group. In this regard, there is no single supergroup structure in physics that contains all forces and particles. Instead, a hierarchy of groups seems to exist.
It should be noted that in EHT gravitation is characterised by a total of five gravitational constants and six gravitational particles: three for the cosmological gravitational fields () and three for the extreme gravitomagnetic fields () that appear in the interaction between electromagnetism and gravity probably at cryogenic temperatures. Hence, these three particles are sometimes referred to as cold particles.
Newtonian gravitation, formulated in 1687, is supposed to be mediated by the graviton, which is assumed to be a spin-2 tensor particle. Newton’s gravity was reformulated by Einstein in 1915 (GR). The coupling constant chosen by Einstein is GN, but in EHT the value GE is used, which differs very slightly from GN by the value of Gq in order to account for the interaction between matter and DE, causing spacetime to expand (in this respect, gravity can be considered repulsive). Therefore, gravity seems to have a multifaceted nature.
For the individual bosons, the group structure is given by the subgroup structure
where the group comprises all existing physical field quanta. The two subgroups, each with three generators, contain the three gravitational bosons of the cosmological gravitational fields (Newtonian gravity) and the three gravitational bosons representing the hypercomplex-gravity fields, resulting from the conversion of electromagnetic into gravitational fields as described above (Fig. 6).
The group represents both the fermions of OM and NOM (negative and imaginary matter). The subgroup structure for the individual fermions is given by
where one subgroup stands for the 15 quarks of EHT (index tq stands for the top quark), and ν indicates the three neutrinos of the SM of particle physics. NOM is nonordinary matter, which goes beyond the SM, namely, the (virtual) electron of imaginary mass and the quark of imaginary mass qI. In addition, there is a fourth family of leptons that has negative mass, namely, the DM neutrino νχ of mass −3.2 eV/c² and the DM particle χ of mass −80.7 GeV/c². It should be noted that the masses of the fourth family were already published in November 2015 , one year before the experimental results of the AMS spectrometer, measured on board the International Space Station, were released. The mass of the DM particle was finally published in October 2016 . The explanation of the physical meaning of the other subgroups is discussed below.
For the definition of imaginary mass (cf. Section 5.5.2), electrons of imaginary mass and quarks of imaginary mass, qI, are assumed to result from some kind of delayed symmetry breaking: instead of spontaneous symmetry breaking, the existence of these virtual particles of imaginary mass is postulated. They might have been generated, for instance, in the cryogenic rotating Nb ring in the experiments by Tajmar et al. at low temperature. They are not tachyons, because of their electromagnetic interaction; that is, particles of imaginary mass should not possess gravitational mass but should be subject to inertia. This should result in an interaction between electromagnetism and gravitation, which might explain the extraordinary strength of the gravitomagnetic fields possibly observed by Tajmar et al. The two DM particles, i.e. the fourth lepton υdm and the corresponding fourth neutrino νdm, are considered to have negative masses of −80.7 GeV/c² and −3.2 eV/c², respectively (see also Fig. 4). Neutrino oscillations are observed experimentally; that is, the various types of neutrinos can change into each other. However, DM particles are supposed to exist in dual spacetime, and therefore only their gravitational interaction with matter in our (de Sitter) spacetime can be observed, but not their speed. In other words, DM is dark because its particles do not exist in our spacetime and hence cannot be directly measured.
6 Cosmological Riddles

After the Big Bang, the universe rapidly expanded from an incredibly small region with dimensions of 10⁻³³ cm and an unthinkably high energy density of 10⁹⁴ g/cm³ – this initial phase of the universe is known as the Planck era. The Grand Unified Theories of today suggest…
W. Greiner 

At the time of the Big Bang, the universe was concentrated at one point.
E. Zeidler , p. 227
Recently, the concept of inflation was challenged by Steinhardt et al. . However, as we will see in the following, inflation appears to be vindicated by experimental data. Nevertheless, Steinhardt is correct in criticising the models for the inflationary universe. First, as was shown in Part I (e.g. by the recent work of Kramer and Wex), Einstein’s GR is experimentally vindicated, and thus, all inflationary models based on a modification of Einstein’s GR need to be abandoned, including Steinhardt’s own model of 1989 (discussed extensively in Chapter 6 by N. Prakash ).
Instead, it seems that the concept of the Big Bang itself needs to be questioned. In current cosmology, as summarised for instance by the eminent theoretical physicist W. Greiner in 2004,26 the prevailing Big Bang picture for the evolution of the universe requires the universe to originate from a quasi point-like, almost infinitely dense and hot entity (a singularity) coming out of nowhere. Our sceptical view of the physical validity of the present Big Bang scenario is supported by the recent article of H. Traunmüller of Stockholm University entitled “Towards a More Well-Founded Cosmology”. However, the existence of DM has been proved by observation, a fact not cited there, which leads to incorrect conclusions about both GR, an experimentally completely vindicated theory, and DM. The Big Bang picture defies logic, because no physical explanation can be provided for how such an improbable initial physical state may have been reached; neither does energy conservation hold at the inception of the universe. Giving up on conservation principles permits any kind of voodoo physics, for instance, the long-sought perpetuum mobile, a device that generates an infinite amount of energy from nothing.
The problem remains even if quantum physics is invoked. According to quantum mechanical rules, the universe cannot have started from a point but from an extremely small region of about 10⁻²⁵ m filled by a so-called (and unknown) inflaton field, which was in a metastable, highly energetic state possessing negative pressure. In the (classical) Einstein field equations, it was this negative pressure that was responsible for the exponential inflation of spacetime. However, this scenario is just as implausible and contrived as the one described by W. Greiner et al.
Einstein’s equations are valid for a spacetime continuum in the form of a manifold, but not at the very instant of the origin of the universe. Obviously, the universe must have evolved from a unique, unthinkable universal state of perfect symmetry. Furthermore, one assumes that matter is the result of a transition from the initially existing true vacuum state with zero energy to an energetically deeper state of the vacuum. The assumption of such a false vacuum is not a real solution, because the question remains how such a symmetric false vacuum could have been created. Energy has proven to be strictly conserved throughout the cosmos, so how could the universe itself owe its very existence to a stark violation of this fundamental physical principle? In other words, the Big Bang appears to be an unphysical event.
Clearly, this scenario is not in accordance with the above-stated fundamental principle of duality, as will be discussed below. Furthermore, in order to ensure energy conservation and to follow the principle of optimisation, the simplest assumption is that the total energy of the universe was, is, and will be zero.
Owing to the principle of quantum fluctuations in conjunction with the idea for the existence of oriented elemental surfaces (the so-called metron concept) as the basic building blocks for the space lattice, the universe is assumed to have been caused by an initial quantum fluctuation producing the first metron; i.e. it was built from nothing. This triggered a chain of events – governed by the duality principle – that eventually led to the evolution of the cosmos as discussed in further detail in the forthcoming Part III.
The cosmological principle is one of the cornerstones of the SM of cosmology. It requires the distribution of energy (including all kinds of matter) to be homogeneous and isotropic in the universe on large-enough scales (several hundred Mpc). The recently detected temperature anisotropy of the cosmic microwave background (CMB) radiation does not violate this principle because it should not be considered as an exact symmetry, but rather a broken symmetry due to the generation of ordinary and DM from photons (radiation). Photons are the first real material constituents in the universe, resulting from the conversion of DE and subsequently heating up the universe. The conversion to photons signals that the inflation process is coming to an end. The random generation and distribution of matter in the evolving cosmos lead to growing spatial anisotropy. Enhanced by Einstein’s gravitation over the course of time, the radius of the universe should be slightly fractal, depending on the direction of observation.
Observing an accelerated expansion of the cosmos does not demonstrate that the cosmological principle is no longer valid. However, as will be discussed in Section 7.3, the observed nonuniform distribution of matter in the universe, which creates an inhomogeneous spatial curvature distribution, produces a backreaction that might provide an alternative explanation to the standard Friedmann–Lemaître–Robertson–Walker (FLRW) model of fixed spatial curvatures and thus might replace the accelerated expansion hypothesis. The timescape models go even further and claim that DE is not needed, which is in conflict with CDT simulations (see below). Also, in EHT, DE and the spacetime lattice develop at the same time, and matter is the result of DE. So long as only DE and the spacetime lattice exist, isotropy is a global symmetry that should be correctly described by the FLRW metric. In other words, the onset of radiation, i.e. the formation of photons from DE, marks the end of the inflationary phase, after which spatial isotropy (symmetry) is slightly broken.
One may further argue that the arrangement of galaxies along so-called filaments (see Section 7.2) that form some type of cosmic web (large-scale structures extending across distances of several billion light-years) is a sign that this principle is no longer valid.
The latest Planck data on the mass distribution in the universe gauge the composition of DE, DM, and visible (baryonic) matter at 68.3 %, 26.8 %, and 4.9 %, respectively. It becomes clear that, so long as DE is assumed to be homogeneously and isotropically distributed, the irregular distribution of dark and baryonic matter in the form of galaxies and clusters of galaxies affects the spatial curvature of the FLRW topology only locally and does not have a global effect. Hence, the cosmological principle should remain approximately valid, but not as an exact symmetry – a situation well known from particle physics.
In EHT, the concept of a Quantised Bang replaces the Big Bang, as will be discussed in the subsequent sections.
7 Novel Physical Concepts for Cosmology
The ensuing discussion rests on the assumption that the creation of the universe is foremost based on the principle of duality. Duality is at the very root of the existence of both space and time as well as of the concept of relativity. Duality requires the finiteness of all physical entities and processes: the cosmos does not contain anything infinite. Without duality, none of the (finite) building blocks that comprise the cosmos could exist. Any observation of a planet in the sky requires that this object exist in 3D space and time relative to the observer.
The duality principle has a major implication on the primordial event that caused the cosmos to come into being. From the very first instant, two physical phenomena must have been present. A hot Big Bang as currently proposed by the SM of cosmology does not seem to be compatible with the duality principle (Chapter 9 of ). Instead, the creation of the universe seems to be based on the simultaneous generation of
Spacetime (in the form of an active stage, providing structure, information, and motion) and
Dark energy (the primordial substance). The validity of Einstein’s famous equation E = mc² is instrumental because it allows the freezing of energy into matter. According to the optimisation principle (Hamilton’s principle), the total energy of the universe was, is, and will remain zero, in accordance with the conservation of energy.
The sum of the negative energy (space lattice) and positive energy (DE) needs to add up to zero. The duality principle, along with the existence of quantum fluctuations (uncertainty principle), requires the total cosmic energy to be zero. Additionally, under EHT, DE is considered to be the precursor of all types of matter (ordinary as well as DM) in the cosmos (see also the discussion in Chapter 9 of ).
New symmetries are needed to account for additional particles. Mathematics knows four different number systems that possess a division algebra, namely, the real numbers, ℝ; the complex numbers, ℂ; the hypercomplex numbers or quaternions, ℍ; and the octonions, 𝕆 (a more comprehensive discussion is given in Chapter 9.4 of ). The basic idea for a further extension of physics is to employ additional number systems beyond real and complex numbers. For instance, the extension of physics can be obtained by using the following sets of numbers: the hypercomplex numbers ℍ (three additional types of imaginary masses as there are now three imaginary roots of −1) and octonions 𝕆 (seven additional types of imaginary masses, since there are now seven imaginary roots of −1).
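The statement that ℍ possesses exactly three imaginary roots of −1 can be made concrete with a minimal numerical sketch (illustrative only; the tuple representation and function name below are editorial, not part of EHT):

```python
# Minimal quaternion arithmetic: tuples (w, x, y, z) represent w + x*i + y*j + z*k.
def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# Three independent square roots of -1 ...
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
# ... with non-commutative products: i*j = k, but j*i = -k.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)
```

The non-commutativity (ij = k but ji = −k) is precisely what distinguishes ℍ from ℂ; the octonions 𝕆 add seven such imaginary units and additionally lose associativity.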
The physical consequences of quaternions and octonions are not yet entirely clear, but they give rise to new types of matter and may also be related to organisational structures and (in a purely speculative exercise) consciousness itself. For instance, if hypercomplex numbers or octonions are used as the fundamental number system, different kinds of matter should exist.
7.1 Speeds of Light and Gravitation
Gravitation, identified with the Newtonian force formulated in 1687 and reformulated by A. Einstein in 1915 as the general theory of relativity, is still a mysterious force. According to modern quantum field theory, gravitation is supposed to be mediated by the graviton, a hypothetical spin-2 tensor particle. The force acting between two masses is characterised by the gravitational coupling constant GN (the index N stands for Newton), which is the same in both Newtonian and Einsteinian gravitation. Recent measurements of GN (there is no theory to calculate its value) have shown strange deviations in the results despite the accuracy of the measurement techniques; so far, this problem remains unresolved. It is generally assumed that the speed of photons in vacuum, c, as obtained from Maxwell’s theory and the propagation speed of gravitational waves, cG, as utilised in GR are the same, namely, cG = c.
Calculations in Section 5.5.2 (Non-Ordinary Matter) – and possibly also experiments – suggest that gravity might have a more subtle structure. In EHT, gravity exhibits a multifaceted nature comprising three gravitational constants: Gp for hadrons, Ggp for leptons, and Gq for the interaction with DE (the vacuum field of spacetime) and the spacetime lattice (which may be considered a continuum, depending on spatial resolution).
This means that the Newtonian gravitational constant GN should be a combination of these three constants. To account for the interaction of luminous matter with the vacuum field (DE, characterised by Einstein’s cosmological constant Λ), a further gravitational constant needs to be introduced. It is termed Einstein’s gravitational constant GE, because it plays a role only in GR. Newtonian and Einsteinian gravitation thus exhibit slightly different gravitational constants, because in Newton’s theory space and time have an absolute (static) character, whereas in Einstein’s GR spacetime is a dynamical field generated in combination with the DE field (duality principle).
As a result, owing to the existence of the DE field, a very small difference between the propagation speed of gravitational waves, cG, and the speed of light in vacuum, c, could exist.
However, when this difference was first calculated, it appeared to be far too small to be measured, notwithstanding the fact that at that time only indirect proof for the existence of gravitational waves existed. In 1974, Hulse and Taylor discovered a binary system comprising a pulsar and a neutron star (a pulsar is a radiating neutron star with a diameter of about 10 km), termed PSR B1913+16, orbiting their common centre of mass. The emission of gravitational radiation was inferred from the rate of decrease of the orbital period, about 76.5 μs per year, in precise agreement with the loss of energy due to gravitational radiation as required by GR.
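For orientation, the orbital decay of PSR B1913+16 can be converted into its well-known cumulative periastron time shift. The sketch below is an editorial back-of-envelope check, not part of the text’s argument; the numerical values (orbital period ≈ 7.75 h, period decrease ≈ 76.5 μs per year) are assumed from the standard literature on this system:

```python
# Cumulative periastron time shift of PSR B1913+16 (assumed literature values):
# the orbital period P shrinks by ~76.5 microseconds per year, so after a time t
# the accumulated shift is delta_t ≈ (1/2) * (dP/dt) * t^2 / P.
YEAR = 3.156e7                 # seconds per year
P = 7.75 * 3600.0              # orbital period, s (~7.75 h)
P_DOT = 76.5e-6 / YEAR         # dimensionless period derivative (s per s)

def cumulative_shift(years):
    """Accumulated periastron time shift in seconds after `years` of decay."""
    t = years * YEAR
    return 0.5 * P_DOT * t**2 / P

shift_30yr = cumulative_shift(30.0)   # roughly 40 s after three decades
```

The roughly 40 s accumulated over three decades of timing is the celebrated parabola of the Hulse–Taylor measurements.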
Nevertheless, according to EHT, gravitational waves should propagate slightly faster than electromagnetic waves. This effect is extremely small and can be seen only on Earth if there are detectors for gravitational wave signals and for measuring photons from a source several hundred million light-years away, that is, far outside the Milky Way galaxy. In addition, it must be possible to unambiguously associate gravitational waves and photons with such a source. Gravitational waves generated from colliding black holes or neutron stars should arrive somewhat earlier than the corresponding photons.
In his series of three lectures on string theory from January 29 to 31, 2007, at CERN, B. Zwiebach of the Massachusetts Institute of Technology (MIT) predicted that the gravitational effects of strings – cusps and kinks propagating on string loops – should produce powerful bursts of gravitational waves observable by LIGO, even for very small values of the dimensionless string tension GNμ, where GN is Newton’s constant and μ denotes the mass per unit length. The LIGO, Wilkinson Microwave Anisotropy Probe (WMAP), and COsmic Background Explorer (COBE) data have probed an enormous parameter space, especially the Advanced LIGO data from 2015–2016 for both stochastic and burst measurements, and comprehensive results were eventually published in May 2018. Not the slightest signal has been detected (cf. fig. 2 therein). It now seems justified to conclude that cosmic strings do not have physical reality. It is a sobering fact that the measurable predictions issued by string theory over the years have all been invalidated by experimental data as soon as experimental techniques became sophisticated enough to overcome the substantial experimental thresholds posed by string theory predictions. Originally, cosmic strings were proposed as the main source of cosmic density fluctuations, but this would require a value of GNμ of about 10⁻⁶. This assumption was already ruled out by the COBE satellite data; density fluctuations are now thought to be the result of quantum fluctuations during the inflationary period. In later years, WMAP data reduced this value further, below the value expected from GUTs, for which the string mass M∗ would be of the order of the GUT scale. Both the COBE and WMAP data delivered the unequivocal experimental message that GUT theories may not exist in physics.
Only a few years ago, such a scenario would have been impossible to measure, but with LIGO (the detection of gravitational waves by laser interferometry was conceived in the 1990s; the first detection took place in 2015), cosmology has entered the age of gravitational wave detection. In October 2017, double-messenger astronomy was established with the simultaneous observation of gravitational waves from a binary neutron star merger and the photons emitted by gamma-ray burst GRB 170817A. These observations constrained the speed of gravitational waves by the inequality −3 × 10⁻¹⁵ ≤ (cG − c)/c ≤ +7 × 10⁻¹⁶,
which is almost the same numerical value as calculated from the above equation. This constraint also rules out most of the alternative tensor-scalar theories of GR. As was already shown on p. 363 in , the results of M. Kramer and N. Wex do not seem to leave any room for measurable deviations from Einstein’s GR.
The EHT prediction for the speed of gravitational waves, cG, can be checked against this event. Because the gravitational wave and photon signals from the merging neutron stars arrived from a source 130 million light-years away, the path length difference of the two signals should be dS = (cG − c) tS, where tS denotes the signal travel time converted into seconds. It should be noted that neither the expansion of the universe nor curvature was considered in this simple calculation of dS. Compared to the measured arrival-time difference of 1.7 s, the agreement with the calculated time difference of 5.7 s appears reasonable, and it must be mentioned that, according to EHT, the gravitational wave should have arrived first. The effect of the DE field is very subtle, but definitely visible. It remains to be seen whether this interpretation can stand the test of time, as there is no doubt that further binary systems of black holes and neutron stars will be found in the near future. The prediction is that there should be small differences in arrival times for gravitational waves and photons; i.e. the gravitational wave should always be slightly ahead of the photon signal. If this were the case, it would be a hint that novel physics may have been found.
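The arithmetic of this estimate can be reproduced in a few lines. The EHT value of (cG − c)/c is not reproduced in this text, so the sketch assumes the fractional speed excess of about 1.4 × 10⁻¹⁵ that a 5.7 s lead over 130 million light-years implies:

```python
# Arrival-time estimate for a GW170817-like source 130 million light-years away.
# The EHT value of (cG - c)/c is not quoted in the text; we assume the
# fractional speed excess ~1.4e-15 implied by the quoted 5.7 s lead.
YEAR = 3.156e7                 # seconds per year
t_signal = 130.0e6 * YEAR      # photon travel time from the source, s
eps = 1.39e-15                 # assumed (cG - c)/c

# To first order in eps, the gravitational wave leads the photons by eps * t_signal.
delta_t = eps * t_signal       # seconds, close to the quoted 5.7 s
```

Note that this assumed fractional difference sits just inside the GW170817 bound of order 10⁻¹⁵ quoted above.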
7.2 Big Bang Scenario Questioned
As we have argued previously, there is now substantial experimental evidence against the existence of real extra dimensions, questioning the validity of the Grand Unified Theories and their accompanying supersymmetric particles. As a consequence, the concept of a hot Big Bang should also be regarded critically. Cosmic inflation, that is, the exponential expansion of the spatial dimensions of the initial universe by a scale factor of about 10⁴³ between 10⁻³⁶ s and 10⁻³⁴ s, is supported by the observed homogeneity of the CMB radiation. Otherwise, any two photons located at two opposite points as seen from Earth and separated by a distance larger than (1/2) c TU (TU denotes the age of the universe) would not have had time to interact. Hence, without causal contact, the homogeneous distribution of the CMB radiation would have to be considered a purely random phenomenon – with practically zero probability of occurring.
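The causal-contact argument can be illustrated by a naive order-of-magnitude estimate (an editorial sketch with assumed standard values: decoupling at about 380,000 years, z ≈ 1100, present age 13.8 Gyr; the detailed expansion history is deliberately ignored):

```python
import math

# Naive horizon-problem estimate (assumed standard values, not from this text):
# at decoupling, causal contact extends only over ~c * t_dec; stretched by (1+z)
# and viewed from a distance ~c * T_U, that patch subtends only a degree or two.
t_dec = 3.8e5        # age of the universe at decoupling, years
z_dec = 1100.0       # redshift of last scattering
T_U = 1.38e10        # present age of the universe, years

theta_rad = t_dec * (1.0 + z_dec) / T_U   # crude angle, radians
theta_deg = math.degrees(theta_rad)       # roughly 1.7 degrees
```

Without inflation, patches of the CMB sky separated by more than a degree or two should never have been in causal contact, yet their temperatures agree to about one part in 10⁵.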
According to the fundamental principles laid out in Section 4.1, there are no singularities in physics. In other words, according to the Big Bang, the origin of the physical universe would be due to a nonphysical event.
Energy conservation would be maximally violated by a Big Bang, whereas there is no known physical process that does not respect the principle of the conservation of energy. While theoretically conceivable, this would run counter to all prevailing experience and experiment.
Moreover, the recent measurements by H.E.S.S. (High Energy Stereoscopic System), an array of ground-based Cherenkov telescopes, clearly contradict a Big Bang scenario, because none of the predicted WIMPs could be detected. In physics, a theory is wrong if there exists a single experiment that is in conflict with it. Although the failure of the H.E.S.S. collaboration to see any WIMPs (as postulated by SUSY theories) should not (yet) be considered fully conclusive, it renders the Big Bang concept highly questionable and justifies the search for alternatives that are in better agreement with commonly accepted physical principles.
These measurements are independently corroborated by the most recent data from the XENON Collaboration (November 2017), obtained by the XENON1T experiment, which has been and is searching for the nonrelativistic nonbaryonic component of the Λ cold DM (ΛCDM) model using underground detectors located deep inside the Gran Sasso massif. WIMPs should be detected through their nuclear recoils in the 3200 kg of ultrapure xenon, an increase of two orders of magnitude with respect to the previous experiment. Since XENON1T began taking data in February 2017, no WIMP events have been recorded, and WIMP masses above 10 GeV/c² seem to be excluded. The experiment continues, with detectable mass limits being lowered and more data being collected during winter 2018. As of June 2018, the CERN ATLAS collaboration has not seen the decay of any DE scalars to SM particles, nor were any DM particles detected, and no superpartners were found.
An even more stringent WIMP mass limit comes from the CERN CMS experiment that will be discussed in Section 7.4 together with the DM problem.
In order to arrive at a universe that follows its own conservation principles, an alternative to the Big Bang is suggested: a universe that conserves energy at all times and is governed by the principles of duality, optimisation, and quantum fluctuation. Such a universe is supposed to have the following properties:
A universe with total energy zero, obeying global energy conservation not only at its very beginning but at all times, would be in accordance with the optimisation principle of nature as stated in Section 4, as well as with Occam’s razor, a major guiding principle in physics.
The duality principle causes the spacetime lattice and DE to be generated simultaneously: the (negative) structural energy of the spacetime grid is obtained from Szilárd’s energy principle (below) relating information and energy, whereas the scalar DE field has positive energy and is related to Einstein’s equivalence principle. Hence, the total energy of the universe is given by the sum of these two contributions,
where the positive and negative energies globally add up to zero at all times under the assumption that all forms of positive energy (e.g. DM, radiation, or baryonic matter) are ultimately converted from DE.
Dark energy is considered the source of all types of matter that exist in the universe. A Lagrangian L = 0 is not unusual, because in SR the spacetime interval ds = 0 for photons. Any type of matter and radiation in our universe is supposed to originate from DE as a direct result of the principle of energy conservation.
Besides duality, the quantisation principle has proven to be a cornerstone of physics. In general, all physical entities are quantised, and this should also apply to spacetime itself (i.e. spacetime should form a lattice). However, the grid spacings of spacetime may be much smaller than the Planck length as indicated by the ESA Integral satellite measurements (Section 5.3). Hence, the universe is assumed to owe its existence to a Quantised Bang, which, in conjunction with duality, forms both the mutually dependent spacetime lattice and the DE field. According to EHT, the formation of the universe appears first in the form of the spacetime lattice (quickly turning into a continuum) and the scalar DE field. This is based on the two fundamental energy equivalence principles from Szilárd and Einstein, according to (2) and (3).
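For orientation, the Planck scale against which the text compares the grid spacing can be computed directly from the CODATA constants (an editorial aside, not part of EHT):

```python
import math

# Planck length and Planck time from CODATA constants:
# l_P = sqrt(hbar * G / c^3), t_P = l_P / c.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light in vacuum, m/s

l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_planck = l_planck / c                 # ~5.4e-44 s
```

Grid spacings "much smaller than the Planck length" would thus lie below roughly 10⁻³⁵ m.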
When quantum fluctuations created the very first elemental surface of spacetime, the very first quantum of DE was produced simultaneously such that the potential energy of the spacetime lattice and the ensuing DE field added up to zero – at all instants of time.
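The zero-energy bookkeeping described here can be illustrated by a deliberately simple toy ledger (purely illustrative; the metron energy eps is an arbitrary unit and the fluctuation statistics are invented for the sketch):

```python
import random

# Toy ledger for the zero-energy constraint: each quantum fluctuation creates a
# few metrons, adding -eps of structural grid energy and +eps of dark energy per
# metron, so the total energy is exactly zero after every single step.
def quantised_bang(steps, eps=1.0, seed=42):
    rng = random.Random(seed)
    e_grid, e_dark = 0.0, 0.0
    for _ in range(steps):
        n = rng.randint(1, 3)           # metrons created in this fluctuation
        e_grid -= n * eps               # negative structural energy of the grid
        e_dark += n * eps               # matching positive dark energy
        assert e_grid + e_dark == 0.0   # holds at every instant, by construction
    return e_grid, e_dark

e_grid, e_dark = quantised_bang(10_000)
```

The point of the sketch is only that the constraint holds step by step, not merely in the final state.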
In a recent article, A. Ijjas et al.  challenge the scenario of an inflationary universe, claiming that new data from the Planck satellite have raised considerable doubt on the validity of inflation. The theory of inflation was introduced by A. Guth of MIT some 35 years ago. Since then, it has become a cornerstone of modern astrophysics and cosmology. The other main concept in current astrophysics is the Big Bang, in which all the energy in the universe allegedly emerged from some kind of spatial singularity. It is presently assumed that immediately after the Big Bang the early cosmos underwent inflation (exponential spatial expansion). The article by Ijjas et al. triggered a fairly harsh reaction from established science, eliciting a letter to the editors of Scientific American signed by 33 leading scientists, sharply rejecting the claim that Planck data have invalidated the empirical testing of inflation. The undersigned of this open letter state that about 14,000 articles by 9000 scientists have been published in support of inflation, and thus, there existed a major consensus for its correctness. The situation bears resemblance to the open letter from 90 years ago known as “100 authors against Einstein.” As Einstein simply remarked, “Why 100 authors?” If GR is wrong, one is enough, clearly pointing out that science is not democratic, but elite; sheer numbers do not count.27
However, inflation (like GR and unlike the Big Bang concept) has passed several important empirical tests. Despite the fact that primordial gravitational ripples have not (yet) been found, inflation appears to be capable of solving the problem of flatness, homogeneity, and isotropy of the cosmos. A different question is whether inflation exclusively follows from the proposed Big Bang. It is obvious that the Big Bang is not in conformity with the quantisation principle that excludes physical singularities, even though singularities are mathematically admissible solutions of Einstein’s field equations. In the next section, an alternative picture for the birth of spacetime is presented. It employs the quantisation principle and also leads to an inflationary universe, but is based on the genesis of DE and the unfoldment of spacetime.
7.3 Origin of Dark Energy, Dark Matter, and Baryonic Matter
In 2011, Perlmutter, Riess, and Schmidt were awarded the Nobel Prize for the discovery of DE and the accompanying accelerated expansion of the universe.
In September 2017, claims were made that DE does not exist, based on computer simulations incorporating the observed (local) inhomogeneities in the distribution of baryonic matter and DM. The simplified cosmological model introduced by Einstein and Friedmann about 100 years ago is based on the assumption that, on average, the universe expands uniformly – as if all matter were smoothly distributed in the cosmos. According to GR, observed inhomogeneities in curvature must act back on the expansion of the universe – at least in principle. This effect is termed backreaction and – according to the simulations – is supposed to be the cause of the expansion of the universe. In addition, the inhomogeneous distribution of baryonic matter and DM may have another effect; namely, it may lead to a fractal dimension of spacetime, because the radius of the universe may vary slightly depending on the angular direction. Hence, this effect may produce small variations about the exact radius of the universe as calculated in the Einstein–Friedmann model of 1922. Further, as the universe is not exactly homogeneous, the matter distribution in the form of galaxies, clusters of galaxies, or voids may create slightly nonuniform expansion rates depending on the direction of observation.
Although it seems unlikely that backreaction is strong enough to explain the full expansion of the universe, it might perhaps be sufficient to explain the accelerated expansion, because the expansion rates in any two directions should differ slightly owing to local variations in matter density. In other words, the universe may expand almost uniformly on average (i.e. when the expansion rate is averaged over all angular directions), but no two directions will have exactly the same rate of expansion. Thus, might it be that the (claimed) accelerated expansion is an illusion caused by the filament structure of matter, that is, an effect of the cosmic web itself?
7.3.1 Dark Energy, Inflation, and EHT
It should be kept in mind that without Einstein’s cosmological constant Λ (on the RHS of Einstein’s field equations) the CDT simulations of Ambjørn et al. do not result in a spacetime lattice. Further, their simulated spacetime is approximately four-dimensional but appears to be fractal. Hence, all simulations that wish to do away with DE are inconsistent with these results. Before any matter inhomogeneities can form, the spacetime lattice has to be in place, and this requires a positive value of Λ, needed to establish the arrow of time.
At a later instant of time in the cosmic evolution, when large-scale material structures in the universe had formed, they may indeed have led to a feedback effect on the curvature distribution in space and thus might be the cause of the small accelerated expansion. However, any independent simulations intended to confirm this phenomenon must exhibit a realistic order of magnitude for the backreaction. Perhaps the universe is actually slowing down and does not accelerate beyond z = 0.7, owing to a possible backreaction effect.
In EHT, DE has to exist, as it is generated simultaneously with the spacetime grid (Section 7.2). As an alternative to using the Ising model, once the spacetime lattice comprises a sufficiently large number of metrons (i.e. becomes a spacetime manifold), DE can be modelled by a cosmological scalar field that exerts a negative pressure in Einstein’s field equations, causing the expansion of space, including inflation.
Recently, the Nobel Laureate, astrophysicist, and cosmologist John Mather posed the following question (mentioned in the blog by E. Siegel on June 23, 2018): can stars, by virtue of converting mass into energy, be responsible for the effects we attribute to DE? As is clear from the discussion in this section, under EHT the DE field is the precursor of matter; thus, it should be the other way around: everything material in the universe is ultimately derived from DE.
The generation of matter is due to symmetry breaking, which sharply diminishes the amount of DE. Thus, the inflation period comes to an end, driving the universe into a slow expansion mode.
A speculative idea is that an overshoot might have taken place, causing the universe to slow down too much, acting like a damped oscillator. At a later period in its evolution, the universe may therefore indeed be subject to a slightly accelerated expansion rate – as may presently be observed.
In 2016, the two expert groups for the neutron lifetime measurement (as discussed in Section 3.1.2 of Part I) reanalyzed their data and confirmed that two different experimental techniques produce conflicting findings. The first technique (termed the bottle experiment) stores ultracold neutrons and counts those remaining, reporting a neutron lifetime of s with a 68 % probability. The second technique (termed the beam experiment) detects neutron decays in flight, counting the protons resulting from neutron decay, and measures a lifetime of 887.7 ± 2.2 s. In EHT, owing to the postulated existence of hypercomplex masses, a free neutron can also decay according to the additional decay channel
that is much less likely than the well-known decay reaction
which means that the beam technique should actually report a larger neutron lifetime. Thus, this might be considered a hint for the existence of hypercomplex masses.
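The lifetime discrepancy can be turned into a rough branching fraction for the postulated extra channel. The sketch below assumes the bottle experiment counts all decays while the beam experiment misses the extra channel; the bottle value used is an illustrative number of the commonly quoted magnitude, since the text leaves it unspecified.

```python
# Rough estimate of the branching fraction into an unobserved decay channel
# implied by the bottle/beam neutron-lifetime discrepancy.
tau_beam = 887.7    # s, from the text; counts only decays producing a proton
tau_bottle = 879.6  # s, ILLUSTRATIVE value (the text does not give it)

# If the beam experiment misses a proton-less channel, the bottle rate is the
# sum of all partial rates: 1/tau_bottle = 1/tau_beam + Gamma_extra, so the
# extra branching fraction is
branching_extra = 1.0 - tau_bottle / tau_beam
print(f"implied extra branching fraction: {branching_extra:.4f}")
```

With these numbers the extra channel would carry roughly 1 % of all neutron decays, which sets the scale any hypercomplex-mass channel would have to match.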
It is a firm belief in physics that all known interactions between material particles can be reduced to four fundamental interactions. However, Einstein's gravity may not fall into this category because it is mediated by the curvature of spacetime and not necessarily by gravitational bosons. Of course, one can postulate the existence of these bosons,28 but they have never been observed, nor are they necessary for gravity to function. GR is a geometric theory depending on external spacetime curvature, whereas the other interactions live in internal or gauge space. Their interactions are not based on spacetime curvature but are instead mediated by the exchange of bosons.
It was shown in Section 5 that in EHT there are two groups that formally describe three gravitational bosons mediating the forces for the cosmological fields described by Einstein's GR, whereas the hypercomplex-gravity fields are described by their own set of bosons. However, there are fundamental physical differences between the two sets of gravitational generators, which lead us to conclude that the cosmological fields may not work via mediator bosons but are of purely geometrical origin. Hence, it is more likely that the enormous weakness of Einstein's gravity is the result of the enormous rigidity of the spacetime grid.
The following remarks on the nature of the two types of gravity seem to be in order.
Why is gravity so weak? The answer is twofold.
First, for the universe to become large, the gravitational constant GE must be very small; otherwise, the curvature of spacetime would become too large. This also means that the grid spacing of the spacetime lattice (due to the quantisation principle) has to be discrete and extremely small.
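The claim that the lattice spacing must be extremely small can be illustrated with an order-of-magnitude estimate. The text gives no value, so the sketch below assumes the spacing is set by the Planck scale, which follows directly from G, ħ, and c.

```python
import math

# Order-of-magnitude sketch: lattice spacing assumed to be of Planck size
# (an assumption; the text does not state a value).
G = 6.674e-11      # m^3 kg^-1 s^-2, Newtonian gravitational constant
hbar = 1.0546e-34  # J s, reduced Planck constant
c = 2.998e8        # m/s, speed of light

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_planck:.2e} m")
```

The result, about 1.6 × 10⁻³⁵ m, shows how a very small G translates into an extremely fine spacetime grid in this picture.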
Second, gravity may not be that weak if the interaction between electromagnetism and gravity in the form of hypercomplex gravity is considered. However, this type of gravity is due to an exclusive interaction between material particles without involving the geometry of spacetime. Einstein's GR rests on pure geometry; it is questionable whether there are real bosons mediating this force. If they do exist, Einstein bosons would be of spin 2, while hypercomplex-gravity bosons are of spin 1, and the interaction strength would be about 20 orders of magnitude stronger than for the spin 2 bosons.
In EHT, there exist three different gravitational coupling constants for cosmological fields: Gp (the gravitational constant for protons and neutrons, i.e. hadrons), the gravitational constant for leptonic particles carrying an electric charge, and Gq, the gravitational constant that couples particles carrying energy to the DE field. The latter coupling arises because, in later cosmological epochs, Λ may have developed a slight dependence on position x, as the distribution of visible and dark matter may not be entirely homogeneous even on large cosmological scales (see Part I). Einstein's gravitational coupling constant GE comprises all three parts, whereas Newton's coupling constant GN lacks the DE contribution and thus does not know anything about DE.
Gp is assumed to describe the gravitational coupling between two proton masses mp.
Not all three gravitational interactions need be active in a given process. If leptons (electrons) are present (e.g. in atoms), the leptonic gravitational constant has to be added. For all practical purposes, Newton's law remains unchanged, as for matter built from atoms the coupling is practically given by GN. The Newtonian gravitational constant excludes the DE contribution, for Newton's theory of gravity is based on the concept of absolute time and space, that is, spacetime without DE. Hence, the gravitational interaction between matter and the DE field, characterised by Gq, cannot appear in Newtonian physics. Consequently, Einstein's equivalence principle is only an approximate symmetry, which is eventually broken owing to the presence of DE. The strong variations in the measured values of GN (or GE) may be caused mainly by diurnal neutrino fluctuations affecting the values of the gravitational constants GE and GN, depending on the Sun's activity (11- to 12-year period) and/or the location of the laboratory (daily/nightly variation due to the rotation of the Earth with respect to the Sun).
7.3.2 Existence of Dark Matter?
For eight decades, astrophysical and cosmological observations have reported an additional gravitational interaction that cannot be attributed to the visible baryonic matter. Thus, the existence of DM has been postulated, which, according to current theoretical understanding, is supposed to originate from particle physics beyond the SM. As Figure 7 has shown, gravitational lensing from the collision of galaxy clusters provides firm evidence for the existence of DM, e.g. the Bullet Cluster collision (as well as other clusters), which occurred about 150 My ago and lies some 3.7 BLyr from Earth at a redshift of z = 0.3. It should be noted that even MOND cannot fully explain away the mass discrepancy in galaxy clusters.
The neutrino anomaly seen in the mid-1990s at the Liquid Scintillator Neutrino Detector at the Los Alamos National Laboratory was interpreted as a hint of a heavy, fourth kind of neutrino (rest mass m > 0), termed the sterile neutrino, supposed to be even less interactive with matter than the three ordinary types of neutrinos. Its mass was tentatively determined as 1 eV/c2. However, on August 9, 2016, the IceCube neutrino detector experiment (capable of detecting neutrinos of mostly extragalactic origin in the energy range 320 GeV–20 TeV, located about 2,500 m below the surface in the ice of Antarctica) reported a null result after 1 year of measurement. Newer results from IceCube were reported by Ahlers and Halzen, recording neutrino energies up to 10 PeV (1016 eV). The neutrino–nucleus interaction (i.e. with the deep Antarctic ice) eventually produces muons that in turn generate Cherenkov photons observed in the IceCube detector. This is in accordance with observations from the Planck satellite. In addition, over long enough distances, neutrino oscillations have been observed.
While the most recent IceCube neutrino observations have not revealed any hint of a fourth neutrino, at least not from cosmic neutrinos, the recent MiniBooNE collaboration (Mini Booster Neutrino Experiment) at Fermilab has reported 2437 neutrino and antineutrino events in an energy range where, according to theoretical models, only 1976.5 ± 44.5 events should have occurred. This excess of electron neutrinos was produced from 8 GeV protons, with scattering cross section σe for short-baseline neutrinos at L/E ≈ 1 m/MeV, where Eν is the neutrino energy and L is the (short, compared to cosmic neutrinos) distance the neutrino travelled before detection. The observed excess of electron-flavoured νe neutrinos is explained by some of the muon neutrinos oscillating into sterile neutrinos for a time during regular oscillations. The sterile neutrinos are assumed to have converted into νe neutrinos during the next oscillation phase, hence the higher νe numbers. Both neutrino appearance and disappearance experiments (e.g. the Los Alamos Liquid Scintillator Neutrino Detector and the MiniBooNE experiment) measure this kind of excess, which might be interpreted as evidence for a fourth, sterile neutrino. In EHT, it is called the dark neutrino because, together with the postulated DM particle, it forms the fourth particle family. If neutrinos are Dirac particles, then there must be three right-handed neutrinos, which must be singlets, not subject to strong or electromagnetic interactions, i.e. sterile neutrinos. Perhaps the MiniBooNE results indicate the presence of right-handed antineutrinos.
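The size of the MiniBooNE excess quoted above can be checked with a back-of-the-envelope calculation. The sketch below treats the observed count as Poisson and adds the prediction uncertainty in quadrature; it ignores correlated systematics and is therefore only a naive estimate, not the collaboration's full analysis.

```python
import math

# Naive significance of the MiniBooNE electron-like excess (numbers from the text).
observed = 2437.0      # reported neutrino + antineutrino events
predicted = 1976.5     # model prediction
sigma_pred = 44.5      # quoted uncertainty on the prediction

excess = observed - predicted
# Poisson variance of the observation plus prediction uncertainty in quadrature
z = excess / math.sqrt(observed + sigma_pred**2)
print(f"excess: {excess:.1f} events, naive significance: {z:.1f} sigma")
```

Even this crude estimate shows the excess is far outside normal statistical fluctuation, which is why it attracted so much attention as a possible sterile-neutrino signal.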
However, as will be shown below, a sterile neutrino with the proposed physical properties most likely cannot exist. To see this, the defining properties of the sterile neutrino are compared with the coupling of fermions to the Higgs boson. The chirality of a fermion changes every time it interacts with a Higgs boson. But there is no right-handed neutrino, or at least none was ever detected (see above). That means the (left-handed) neutrino most likely does not couple to the Higgs field.
As is known from the OPAL experiment at CERN, for the measured width of the resonance reaction e+e− → Z0 → f f̄ (f denotes a fermion, see Section 12) around the mass of the Z0 boson (i.e. Z0 bosons were created in the LEP storage ring at CERN at a centre-of-mass energy of about 91 GeV), only the theoretical resonance curve predicting the existence of exactly three different types of neutrinos agrees with experiment. Hence, there is no room for a canonical fourth neutrino with m > 0. Consequently, a sterile neutrino cannot be subject to the weak interaction like the three known neutrinos arranged in the three lepton doublets (Fig. 1). About 1 year ago, a study found that the number of antineutrinos generated from radioactive plutonium-239 matched theoretical predictions, but the antineutrino rate produced by the decay of radioactive uranium-235 was significantly lower than predicted by models. If sterile neutrinos were behind this anomaly, the same fraction of antineutrinos should be missing from the radioactive decay of plutonium as from uranium. Instead, it is likely that the theoretical model is the source of the anomaly. Hence, no evidence was seen for a sterile neutrino. Therefore, the coupling of a sterile neutrino to the doublet neutrinos can only occur through a mass term in the Lagrangian. But any particle that carries energy will cause this kind of coupling, not necessarily a lepton. The most important postulated feature of the (fourth) sterile neutrino is to modify the oscillation pattern: the three neutrino vacuum oscillations (jumping back and forth between one neutrino flavour and another, which cannot be fitted within the framework of the SM of particle physics) acquire an additional channel into the sterile neutrino state, introducing an additional oscillation length. For this process to occur, lepton number conservation must be violated.
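The LEP counting argument invoked above can be made quantitative: the number of light neutrino species follows from dividing the measured invisible width of the Z0 by the SM partial width per neutrino. The numbers below are the standard LEP/PDG combination values, not taken from the text.

```python
# Number of light neutrino species from the Z0 invisible width.
gamma_invisible = 499.0  # MeV, measured invisible width of the Z0 (LEP combination)
gamma_nu_sm = 167.22     # MeV, SM partial width per neutrino species

n_nu = gamma_invisible / gamma_nu_sm
print(f"number of light neutrino species: {n_nu:.3f}")
```

The result, very close to 3, is what excludes a fourth neutrino with ordinary weak coupling and mass below half the Z0 mass.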
In the 3 + 1 simulation model for the sterile neutrino, a mass-squared splitting Δm2 (in eV2) between the electron neutrino νe and the sterile neutrino νS is assumed. The Xenon 100 experiment did not find any excess in the approximate 320 GeV to 20 TeV neutrino energy range. In practice, there are bounds on the doublet–sterile neutrino mixing, but there is no bound on the number of sterile neutrinos or on their mass scales.
In the following, we present a different approach to DM and the sterile neutrino. According to EHT, the decade-long experimental search for a fourth kind of neutrino of positive mass should remain unsuccessful (similar to the fruitless search for a DM particle) because these particles are assumed to have negative mass. That is, the DM particle should have a negative mass of −80.7 GeV/c2 and thus cannot exist in our spacetime. Instead, it is found in dual spacetime (Fig. 1). As our spacetime and dual spacetime share the same hypersurface, the gravitational field of these two dark particles can be felt in our spacetime. The dark particles themselves remain both invisible and directly undetectable. That is, the dark neutrino may only be observed indirectly, by its modification of the neutrino oscillation pattern. During the cosmic evolution, photons decoupled at a temperature of T ≈ 3000 K, whereas neutrinos decoupled at an energy of about 1 MeV. Compared to 1 MeV, the proposed energy of the sterile neutrino of 1 eV signals a light neutrino, resulting in a neutrino energy density of about
where the sum is over all neutrino types and h denotes the dimensionless Hubble constant. In order for neutrinos to account for the missing DM, a far larger summed neutrino mass would be needed, which is not in accordance with the MiniBooNE results, nor is it compatible with recent Planck data that limit the sum of the masses of all three neutrino types to a fraction of an eV/c2 (Chapter 1). A neutrino of negative mass, however, could exist as predicted by EHT. For neutrino masses higher than 1 MeV, there would be a Boltzmann suppression factor. These particles would be nonrelativistic early on and would be candidates for cold-DM particles.
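The argument that a ~1 eV neutrino is far too light to supply the DM can be checked with the standard relic-neutrino relation Ω_ν h² = Σ m_ν / (93.14 eV), a textbook result for light neutrinos that decoupled while relativistic (the exact form of the equation used in the text is not reproduced here).

```python
# Standard relic-neutrino density relation: Omega_nu * h^2 = sum(m_nu) / 93.14 eV
omega_dm_h2 = 0.12    # approximate Planck-era DM density parameter
m_sterile_eV = 1.0    # proposed sterile-neutrino mass from the text

omega_nu_h2 = m_sterile_eV / 93.14          # contribution of a 1 eV neutrino
m_required_eV = omega_dm_h2 * 93.14         # summed mass needed to match the DM

print(f"Omega_nu h^2 for a 1 eV neutrino: {omega_nu_h2:.4f}")
print(f"summed mass needed to match DM:   {m_required_eV:.1f} eV")
```

A 1 eV neutrino contributes only about 1 % of the DM density; to supply all of it, the summed neutrino mass would have to exceed 10 eV, which both MiniBooNE and Planck exclude.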
It is obvious that these concepts require a drastic deviation from present physics. The existence of these DM particles cannot be explained in the framework of the SM of particle physics, or in the so-called advanced theories beyond the SM (which most likely have been invalidated by recent experiments). Dark particles of negative mass also challenge the cornerstones of the SM of cosmology, exclusively based on SR and GR. Einstein’s models of relativity are correct, but not sufficient in describing these new types of physical phenomena. Dark matter or hypercomplex masses do not exist in SR or GR, and thus, physical phenomena involving these two types of matter may not be subject to the constraints imposed by the two relativistic theories.
Thus, DM has been elusive since its inception by the Caltech astronomer Zwicky in 1933. Since then, the existence and nature of DM have been controversial. Recently, novel (but mutually exclusive) theoretical interpretations of DM particles were reported (Section 7.4). In the short period since our discussion in Part I, Section 2, further claims of DM detection have been reported – a process going on for more than eight decades – only to be retracted sooner or later. No DM particle has been found up to today (winter 2018). The totally futile search for DM has led numerous scientists to believe that DM does not exist and that alternatives are needed.
According to EHT, there is no fourth family of neutrinos of real positive mass, but a fourth neutrino type of negative mass (−3.23 eV/c2) called the DM neutrino might exist in dual spacetime as depicted in Figure 1.
Already in 1983, M. Milgrom proposed the nonexistence of DM; see also the discussion in Section 9.10.4 as well as Section 2 of Part I. Instead, he suggested that a modification of Newton's law for very small accelerations be considered, termed modified Newtonian dynamics (MOND). The MOND formula of Milgrom does not have a physical interpretation; otherwise, he should have been able to modify the underlying physical action of Einstein's GR theory. Nevertheless, Milgrom's MOND formula matches the observed rotation curves, although it is physically incorrect. Newton's law is right at all levels of observed cosmic accelerations (see right part of Fig. 7). In Section 9.10.4, we already presented an attempt to derive the MOND formula. An updated explanation of the MOND formula is foreseen in the forthcoming Part III, including the measurements of Bidin et al. (see Section 3.2.1 of Part I) that revealed the almost total absence of DM inside galaxies; hence, a completely different physical phenomenon should be responsible for the MOND formula. On the other hand, from astrophysical observations it is known that galaxies are surrounded by a large halo of DM. The galactic halo is instrumental in preventing galaxies from flying apart, and thus, it is indispensable in the formation of large cosmic structures. Dark matter is here to stay.
Einstein’s GR is now on firm experimental ground, even in its nonlinear aspects, as shown by the work of M. Kramer and N. Wex, MPI Bonn (see Section 3 of Part I and earlier Sections 9.10.2 and 9.10.3). As a result, there remains no room for alternative theories of gravity, except for the behaviour of gravity inside a black hole, where no experimental data exist.
The latest promising lead about the invisible DM was the observation of an excess of γ-rays coming from the centre of our Milky Way galaxy. The immediate theoretical explanation was that DM particles were annihilating each other in great numbers, thus producing the strongest radiation source seen in the universe. However, in a recent study, Mauro et al., using the Large Area Telescope on board NASA’s Fermi Gamma-Ray Space Telescope, measured the shape of the isotropic diffuse γ-ray background thought to originate from hitherto unresolved point sources (the pulsar spectra in the centre of our galaxy). By means of these very distinct spectra, they were able to model the glow of the galactic centre correctly using a population of about 1000 pulsars (extremely dense and rapidly spinning cores of collapsed stars that emit electromagnetic radiation predominantly in the form of radio waves or γ-rays) without the need for any DM particles. This approach is conclusive, as the pulsar spectra vary in a specific way in accordance with the energy of the emitted γ-rays.
Dark matter has remained as elusive as ever. In Section 2.3 of Part I, it is explained that DM particles (according to EHT) cannot be found by the LHC (Section 3). Furthermore, there is strong observational evidence that DM is not present inside galaxies (measurements by Bidin et al. 2010–2014, Part I). In addition, the absence of DM in galactic centres is confirmed by the new study of Mauro et al., which suggests that DM is restricted to the halo of galaxies. Axion particles, the other hope for the explanation of DM, have been theoretically known since 1977. An in-depth discussion of axions and WIMPs, as well as of the general status of DM and DE, is given in Chapter 6 by N. Prakash, covering the period up to 2011.29 An intensive 40-year-long search has revealed not the slightest hint of their existence. Theoreticians have claimed that axions – through interaction with a magnetic field – can turn into photons. Nothing of the kind has been observed up to now.
The conclusion of the above discussion is that DM does exist, and there is already a comprehensive mapping of the distribution of DM in the cosmos. At the same time, experimental particle physics has been unable to provide the slightest hint of a DM particle. Theoretical physics has produced dozens of theoretical models, which have become more and more contrived over time.
In the above discussion, we learned that there is substantial observational evidence [also supported by the recent T2K neutrino beam line findings in Tokai, Kamioka, Japan (σ = 6.1)] for a fourth, singlet neutrino of a mass of approximately 1 eV/c2. However, from (18), it can be deduced that such a light neutrino is not sufficient to resolve the enigma of DM. In EHT, there exist four doublet particle families that are given by
where the dark neutrino and the DM particle have negative masses, avoiding conflict with the data from the OPAL experiment (see above). Therefore, the search for DM is not over yet. The enigma remains. According to EHT, the null results of the experiments to detect a DM particle can only be understood by postulating a new, radical principle, namely, that the dark neutrino and the DM particle possess negative masses and thus cannot reside in our Minkowski spacetime, but instead live in dual spacetime, as will be discussed further in the subsequent two sections.
7.4 Masses of Dark Matter Particles
The AMS-2 and PAMELA experiments have measured an energy resonance at 80 GeV, indicative of a particle whose energy might be that of the DM particle χ. However, the associated particle was not found.
However, as known from the latest LHC data as of winter 2018, there is no new particle with a mass around +80 GeV/c2; that is, there is no such particle of positive mass in our spacetime, as all accelerators (LEP, SPS, Fermilab, SLAC) have demonstrated for decades and, more recently, the LHC too. In particular, since the LHC upgrade in 2015 to an energy of 13 TeV, an intensive search programme for DM particles has been initiated by both the CMS and ATLAS collaborations. SLAC and Fermilab initiated the SuperCDMS (Cryogenic Dark Matter Search) SNOLAB project, to start in the early 2020s, which is supposed to be 50 times more sensitive to very light DM particles than its predecessor, CDMS. Located at the Soudan Underground Laboratory, CDMS ended its operation in 2015 without any result, after decades of DM search. The new experiments sound more like an act of desperation, because very light particles were never before considered to be candidates for DM particles. We dare to predict that this last act in the drama of the search for DM particles will also leave the experimenters empty handed – DM particles are not living in our spacetime.30 Every LHC run has generated increasingly stringent experimental constraints on the mass of DM particles. The most recent results of the CMS collaboration have not seen any hint of a DM particle down to a mass of 1 GeV and an associated mediator boson in the range of 50–100 GeV. There is no experimental evidence whatsoever for supersymmetric particles, technicolour, or extra dimensions, and all theoretical constructs based on these concepts, although having been around for decades and having undergone numerous refinements, appear more and more unlikely as possible solutions to the hierarchy problem.
After the inflation period, the universe comprised only cold hydrogen gas but was awash in background radiation, including radio waves. The cosmic afterglow may be attributed to the so-called Big Bang, but a different scenario may also be conceivable. In any case, the universe at about 380,000 years was dark. After several hundred million years, gravity, acting on primordial density fluctuations, started to form massive stars. Their first light excited the hyperfine transition of neutral hydrogen (the 21 cm spin-flip line of the hydrogen ground state n = 1) in the high-redshift intergalactic medium and caused it to absorb these radio waves at nominally 1420 MHz. Due to the expansion of the universe, the actual absorption frequency is lower, given by 1420/(1 + z) MHz, where z denotes the redshift. This radiation should come from everywhere in space. J. D. Bowman, already in his Ph.D. thesis, suggested measuring these intrinsically faint signals, which are masked by foreground emission, and after 12 years deployed an antenna in Australia. This location was selected because of the low level of galactic noise radiation. There were first hints of a signal in early 2016 at a frequency of 78 MHz and a depth of 0.5 K, with a frequency width of about 19 MHz. But the signal was twice as strong as theoretically predicted. The conclusion from this signal is that the absorbing hydrogen gas must have been colder than assumed. The conjecture is that DM is responsible for this phenomenon. R. Barkana believes that hydrogen and DM particles scatter off each other. This should have occurred about 180 million years after the Big Bang. The substance of DM remains unknown. Signals from the first stars are predicted to arrive in the form of radio waves, as recorded by the EDGES antenna of J. Bowman. Barkana's explanation: there existed two actors, the first stars and DM.
The radio waves were emitted by the first stars, whereas the DM collided with the gas and cooled it down. The extra-cold material explains the stronger-than-expected signal. Einstein's gravity remains correct. The conclusion by Barkana is that DM therefore comprises particles. The discovery indicates that these should be low-mass particles, possibly of several proton masses. However, no DM particle was ever seen, and these measurements do not prove that DM particles were encountered; they may merely show the DM gravitational interaction. In other words, if the DM particles are in dual space, and their gravitational potential is felt by the hydrogen gas, then a cooling effect by this potential is encountered. However, these signals are not proof of the existence of DM particles.
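The epoch of the EDGES absorption feature follows directly from the redshift formula quoted above: the 1420 MHz line observed at 78 MHz fixes z.

```python
# Redshift of the EDGES absorption feature, using nu_obs = nu_rest/(1+z)
# with the hydrogen hyperfine (21 cm) line as the rest frequency.
nu_rest = 1420.4  # MHz, hydrogen hyperfine line
nu_obs = 78.0     # MHz, centre of the EDGES absorption profile (from the text)

z = nu_rest / nu_obs - 1.0
print(f"redshift of the absorbing gas: z = {z:.1f}")
```

The resulting z ≈ 17 corresponds to roughly 180 million years after the Big Bang, consistent with the epoch quoted in the text for the first stars.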
Numerous types of DM particles have been proposed over the years as extensions of the SM of particle physics. WIMPs are hypothetical but stable elementary particles with masses and coupling strengths at the electroweak scale, produced by the thermal background of the universe through an assumed hot Big Bang. However, in order to obtain today's observed abundance of DM from this hypothesised thermal production, WIMPs need to self-annihilate in high DM density regions with a self-annihilation cross section that corresponds to a particle energy in the range between 80 and 100 GeV. Potential continuum emission of very high energy in the final state can be detected by the H.E.S.S. array of ground-based Cherenkov telescopes. WIMPs of higher mass were never detected in any of the accelerators, including the LHC, up to a particle energy of 1.7 TeV. These data are in accordance with a recent but totally different type of observation coming from the H.E.S.S. array. Searches for WIMP annihilation signals focus on regions in the sky with both an expected high DM density and a reduced astrophysical γ-ray background. Therefore, very high energy γ-ray observations of the galactic centre (GC) region are among the most promising avenues to look for DM annihilation signals, owing to the GC's proximity and its expected large DM content.
7.5 Dark Matter Space
It is a sobering fact that despite enormous, long-standing experimental efforts only indirect signs for the existence of DM have been found in the form of gravitational interaction with matter and radiation. So far, DM particles were never detected in our spacetime. On the other hand, a comprehensive mapping of the distribution of DM in the cosmos exists, based on its cosmic scale gravitational interaction.
The solution to this conundrum may be twofold. First, it needs to be accepted that there are no DM particles of positive mass m > 0; hence, DM must have negative mass. However, this kind of exotic mass cannot exist in our spacetime. Second, as DM particles, denoted by χ, are known to exist but cannot be found in our spacetime, assumed to be a de Sitter space, they have to reside in a dual spacetime, whose physical properties are discussed below. Such a dual space must allow the gravitational interaction of DM with OM of our spacetime, in order to be consistent with astrophysical observations. Furthermore, this gravitational interaction must be attractive. As no DM particle of positive mass was ever measured, the existence of a dual spacetime containing particles of negative mass m < 0 is postulated.
Next, the coordinate structure of this dual spacetime needs to be determined. Newton's law has been exhaustively validated at all scales. Therefore, no additional spatial coordinates are available for an extension of our spacetime, and no Kaluza–Klein tower for the mass of the DM particle can be constructed. The only remaining option is the time coordinate t. In particle physics, t > 0 is used for particles moving forward in time, while antiparticles are characterised by the reversal of both time and space, together with charge conjugation. This means an antiparticle has its charge reversed and is moving backward in time (t < 0) and space. Therefore, the only remaining coordinate extension is imaginary time. Hence, the coordinates of dual spacetime are assumed to be given by
where the speed of light must be chosen as ic in order to produce a positive particle energy. As will be shown below, no waves (radiation) should be possible in this space, which guarantees the observed attractive gravitational force in our spacetime resulting from the (negative) masses of DM particles living in dual spacetime. The metric of the dual spacetime is the same as for our spacetime, but the physics is different. It should be noted that DM particles are of negative mass in dual space, but in our spacetime only their positive energy is present – not their mass. It would be incorrect to think of a negative DM mass existing in our spacetime. In particular, no repulsive force between matter and DM can be constructed, for instance, by applying Newton's law; this would only work if both masses were present in our spacetime. Thus, the energy of DM particles, as observed in our spacetime, is positive, and the force exerted by the gravitational field of DM particles is attractive.
The question arises how the spatial coordinates of dual space are related to the spatial dimensions of our spacetime, because the gravitational interaction between the two spaces is an experimental given. The answer is straightforward: the two spaces share the spatial coordinates x, y, z, but time is real in our spacetime and imaginary in dual spacetime. The transformation to imaginary time changes the character of the (Schrödinger) wave equation into a heat-type equation. Therefore, electromagnetic waves cannot exist in dual spacetime.
It remains to be determined what the different types of DM particles are. To this end, the discussion of the group (Section 5.4) should be recalled. Two additional particles (with respect to the SM) were proposed to exist, namely, the DM particle χ and DM neutrino νχ, constituting the fourth lepton family (Fig. 4).
Their masses, however, are negative, viz. −80.7 GeV/c2 for the DM particle χ and −3.23 eV/c2 for the DM neutrino νχ, because they are not living in our spacetime but in dual spacetime, also termed dark spacetime. In order for the gravitational interaction of DM particles to be observable in our spacetime, the sharing of the spatial coordinates is required. However, the respective energies associated with DM particles are positive,
as actually observed in our spacetime. This follows from the physical properties of dark spacetime: the speed of light in vacuum c is replaced by ic, so that the energy of the χ and νχ particles is counted positive in accordance with Einstein's equation, since m < 0 then gives E > 0. Hence, there should be no contradiction with the requirement that only three particle families exist, because this requirement is valid only for our spacetime and for m > 0. Dark matter particles, on the other hand, are in dual space with m < 0. Therefore, only the gravitational interaction of the DM particles with matter and radiation is perceived in our spacetime. The two DM particles themselves, in principle, cannot be measured in our spacetime. Consequently, no experiment should ever be able to see a DM particle directly. However, at present, there is no exact knowledge about the decay channels of DM particles.
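The sign bookkeeping behind this statement can be written out in one line: replacing c by ic in Einstein's mass–energy relation gives

```latex
E = m\,(ic)^2 = -\,m c^2 \qquad\Longrightarrow\qquad m < 0 \;\;\text{yields}\;\; E > 0 ,
```

so a negative mass in dual spacetime is registered as a positive energy in ours.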
There is a major physical difference between the physical properties of our spacetime and dual spacetime. Consider the process of generating light by atoms or molecules. The time-independent Schrödinger equation describes particles, e.g. atoms, with electrons having definite discrete energies Enℓ, where n is called the principal quantum number and ℓ denotes the orbital angular momentum quantum number. Electrons going from a higher (excited) to a lower energy level emit radiation (light, photons); for instance, the visible spectrum of hydrogen has the Balmer lines Hα and Hβ, with frequencies ν of about 4.6 × 10^14 Hz and 6.2 × 10^14 Hz and associated photon energy E = hν. Hence, the name luminous matter correctly describes this, and owing to this process, our universe is not dark. It is worthwhile to investigate the effect of the transformation from spacetime (luminous matter) to dual spacetime (supposed to contain dark, i.e. nonluminous matter), dS → DdS, under which time becomes imaginary (spatial coordinates x remain unchanged). Consequently, the Schrödinger equation of dS is transformed into a diffusion equation in DdS, that is,
where D is the diffusion coefficient. Note the all-important sign changes on the RHS of the diffusion equation. In dual spacetime, there is no longer radiation (light); instead, DM seems to be governed by a diffusion equation that is not hyperbolic (wave propagation) but parabolic (heat conduction). In other words, there are no waves in dual spacetime.
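The sign change can be made explicit for a free particle; the following is a sketch using the standard Wick-rotation convention t = −iτ (the transformation t → it quoted above differs only in the sign convention chosen):

```latex
i\hbar\,\frac{\partial\psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi
\quad\xrightarrow{\;t\,=\,-i\tau\;}\quad
\frac{\partial\psi}{\partial\tau} = D\,\nabla^{2}\psi,
\qquad D = \frac{\hbar}{2m},
```

a parabolic equation whose solutions decay and spread monotonically instead of propagating as waves.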
Hence, dual spacetime should indeed be without luminosity. According to the diffusion equation, DM should become equally distributed in space 𝕊3, because the DM concentration always flows from regions of high density towards low density. This kind of equidistribution would be attained if there were no matter in our spacetime. Because of the mutual gravitational interaction, however, DM will start to follow the lumps of OM in dS, deviating in the course of time from its initial primordial smooth distribution. Depending on the magnitude of the diffusion coefficient and the concentration of OM, this process may take several hundred million years. The right picture of Figure 7 shows the rotation curve of stars in a young galaxy, when the universe was about several hundred million years old. As is clearly visible, the rotation curve is perfectly well described by Newton’s law. Therefore, it can be concluded that Milgrom’s hypothesis that Newton’s law does not describe the rotation curves of stars in galaxies is incorrect – unless one assumes that the fundamental laws of physics change with time. No experiments whatsoever point in this direction. By contrast, some 10 billion years later, as indicated in the left picture, star rotation curves in spiral galaxies exhibit a substantial deviation from Newton’s law. Something must have happened. What could have changed is the distribution of both DE and DM inside and around galaxies. One might argue that the density of DE is far too small to have any gravitational influence inside galaxies. This argument would be correct, unless one assumes that two types of DE particles exist, generated in pairs mainly during the inflationary period, which are both attractive and repulsive. Owing to the cosmic motion, there exists an asymmetry that slightly favours the production of repulsive DE particles, sustaining the cosmic expansion.
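The smoothing behaviour of such a diffusion equation can be illustrated numerically. The following is a minimal one-dimensional sketch on a ring (all numbers are hypothetical illustration values, not physical DM parameters):

```python
import math

def diffuse(u, D, dx, dt, steps):
    """Explicit finite-difference integration of u_t = D u_xx on a ring
    (periodic space, a crude stand-in for a closed spatial topology)."""
    n = len(u)
    for _ in range(steps):
        u = [u[i] + dt * D * (u[i - 1] - 2.0 * u[i] + u[(i + 1) % n]) / dx ** 2
             for i in range(n)]
    return u

# Initial "lump" of concentration on a ring of unit circumference.
n, dx, D = 200, 0.005, 1.0e-3
u0 = [math.exp(-((i * dx - 0.5) / 0.05) ** 2) for i in range(n)]
dt = 0.4 * dx ** 2 / D       # respects the explicit stability limit dt <= 0.5 dx^2 / D
u1 = diffuse(u0, D, dx, dt, steps=2000)

print(max(u1) < max(u0))               # True: the lump only flattens, no wave fronts
print(abs(sum(u1) - sum(u0)) < 1e-6)   # True: total "mass" is conserved
```

The peak can only decay towards the uniform state, which is the equidistribution behaviour described above.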
The two types of DE particles must not be seen as particle and antiparticle that would annihilate each other. Dark energy is a precursor of matter and thus only exerts gravitational forces. However, the distribution of the two types of DE particles may be modified owing to the roughly 10⁷ times higher mass density ρ inside galaxies, compared to the mass density of intergalactic space. This could lead to some kind of polarisation, that is, to a spatial separation of the DE particles. This topic will be discussed in more detail in Part III.
According to Bidin et al., DM is absent inside galaxies; therefore, an explanation for MOND cannot be based on the DM concept. Dark matter obviously was also not present in young galaxies, hence the modified rotation curves for the Milky Way and other galaxies (the graphs of orbital speed versus distance for other spiral galaxies are similar to those of our galaxy). On the other hand, it is known beyond doubt that DM has formed a halo around our Milky Way galaxy with a radius that may be 10 times as large as the extent of the luminous matter. This can be interpreted as a clear sign that DM is attracted by ordinary mass. The DM halo is instrumental for the structural stability of galaxies.
At present, the relationship between spacetime and dark spacetime is not well understood.31 In particular, it is not known whether energy exchange beyond gravitational interaction can take place between the two spacetimes (recall that the spatial part is shared). For instance, the AMS-02 experiment may have measured a resonance showing increased proton–antiproton pair production, which could be interpreted as a hint of the decay of the DM particle by pair production at an energy of about 80 GeV. On the other hand, a particle with a mass in the range of 80 GeV/c2 was ruled out by existing accelerator data a long time ago. This irreconcilable contradiction is resolved if DM particles of negative mass exist in DdS, but this mass, because of its positive energy, is gravitationally attractive in dS. However, it is not known whether dark spacetime could act as a source of energy. In that case, the reverse process might also exist; i.e. there might be regions in our spacetime that appear void of matter and energy because of a hypothetical reverse reaction,
transferring energy back into dark spacetime. If this were the case, the normal annihilation process of particle and antiparticle should be suppressed, at least to a certain extent. However, such a process was not found in any accelerator data, and it needs to be understood under which circumstances energy transfer back into dark spacetime on a cosmic scale might be possible.
No claim can be made to have solved the DM riddle, but perhaps the novel concepts of four lepton families and dark spacetime might help to form the basis for a future solution.
7.6 Spacetime Lattice and Propagation Speeds
In this section, the possible impact of the lattice or grid structure of spacetime on energy levels and the propagation speed of matter as well as photons will be qualitatively discussed utilising a crude model adapted from solid state physics. Space is considered to be one-dimensional with the topology of a circle, but this is not important for the discussion.
The left part of Figure 8 shows both the temporal and the spatial evolution of the spacetime lattice of the universe using spherical surfaces S2 for the representation of space. The lattice comprises elemental surfaces of a smallest area, called metrons by B. Heim. During the expansion era, the number of elemental surfaces comprising the spacetime grid increases, which is a highly dynamical process; i.e. the lattice is restructuring after each elemental time step, generating additional metrons, and thus the total potential energy of the lattice decreases in accordance with the Ising model (Section 5.4). Therefore, the number of metrons increases with cosmic time, while the rate at which metrons are generated depends on the expansion rate H of the universe, given by Hubble’s law. According to B. Heim, the metron area decreases with time; whether this is actually the case is not known. The duration of inflation is about 10⁻³³ s, and the increase in the original radius of the universe is about 10⁸⁰ (i.e. the ratio of the real distance r to the comoving distance; see below; there are widely differing numbers in the literature). Suppose that the initial grid S2 was covered by a single metron; after inflation, that is, within 10⁻³³ s, the number of metrons has increased to 10¹⁶⁰. In the early universe, the metron production rate must have been gigantic.
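The metron count quoted above follows from simple scaling: on a spherical surface S2, the number of elemental surfaces of (momentarily) fixed area scales with r², so a radius increase by a factor of 10⁸⁰ multiplies the count by 10¹⁶⁰. A back-of-the-envelope check:

```python
# Figures quoted above: radius growth ~1e80 within ~1e-33 s of inflation.
radius_factor = 10 ** 80
area_factor = radius_factor ** 2       # metron count on S^2 scales with r^2
assert area_factor == 10 ** 160        # one metron -> 1e160 metrons

# Crude average production rate over the whole inflation period:
rate = area_factor / 1e-33             # metrons per second, ~1e193
print(f"{rate:.1e}")
```

A mean rate of order 10¹⁹³ metrons per second makes the phrase "gigantic" above quite literal.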
According to the duality principle, the universe is closed, k = 1, and as long as there are only metrons and DE (smoothly distributed), the universe possesses maximal spatial symmetry, and thus there is no angular dependence. This situation changes only after inflation was brought to a halt, that is, with DE being converted into both DM and OM. As a result, the production of DE is no longer sufficient to sustain the pace of inflation; i.e. the universe continues to expand at a much slower rate, but overshoots and undershoots (damped-oscillator model) seem to be possible in the cosmic motion. The right part of the figure shows, in analogy to atoms composing a solid, the periodic potential that may be produced by the atoms of space (Ising model). Perhaps it is this potential that causes a resistance for particles of zero rest mass in their motion through the lattice of atoms of space. Hence, the propagation speed for photons in vacuum will be finite.
In Figure 8, the spatiotemporal evolution of the universe is exemplified by two-dimensional spherical surfaces S2 (embedded in ℝ3) that comprise elemental surfaces with spin, or metrons (Fig. 5). The universe expands with time, owing to the presence of DE. As it does, the space lattice comprises an increasing number of metrons that, in turn, are generating additional DE. In other words, the process is self-accelerating until, eventually, both DM and OM are generated from DE by a symmetry-breaking process that may be governed by the temperature of the CMB, as described by a Landau–Ginzburg potential (Fig. 5). The presence of matter then breaks the radial symmetry of spacetime, introducing an angular dependence and leading to mixed metric terms of space and time. For instance, if one takes any point on the surface representing our current universe, which is located on the largest sphere (Fig. 8), and draws a circle (because our sample universe is only 2D) with a radius of about 8.8 × 10²⁶ m (28.5 billion pc or 93.0 billion ly), then one has pictured the observable universe, which, as revealed by observations, is remarkably flat. From Planck satellite measurements, the age of the universe is determined to be 13.82 billion years. The much larger radius of the observable universe (93.0 versus 13.82 billion ly) shows the impact of the scale factor a(t) of the universe that relates the real cosmic distance r to the comoving (fixed) distance χ via r = a(t)χ, with a(t)
obtained from the Friedmann equation, in its standard form with cosmological constant Λ,

(ȧ/a)² = (8πG/3)ρ − kc²/a² + Λc²/3.
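With Λ approximately constant and the matter and curvature terms negligible during inflation, the Friedmann equation reduces to a de Sitter phase; the following is a sketch under these standard assumptions:

```latex
\left(\frac{\dot a}{a}\right)^{2} \simeq \frac{\Lambda c^{2}}{3}
\;\Longrightarrow\;
a(t) = a_{0}\exp\!\Big(\sqrt{\tfrac{\Lambda}{3}}\,c\,t\Big),
\qquad
H = \frac{\dot a}{a} = \sqrt{\tfrac{\Lambda}{3}}\,c = \mathrm{const},
```

so a Λ-dominated (DE-dominated) phase expands exponentially, which is the inflationary behaviour invoked here.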
During the short inflation period, Λ can be considered constant. That is, in the Quantised Bang model, inflation is dominated by DE, which, in turn, owes its existence to the expanding spacetime grid (duality principle). The whole process started with the first metron, produced by quantum fluctuations (the cosmic principles at work). As the universe is closed and of simple spherical spatial topology (3D, maximally symmetric in the absence of matter), its actual size must be much larger than the radius of the current observable universe. Generally, the Planck length
ℓPl = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ m

is considered to be the smallest possible length, but this seems to be in contradiction with the ESA Integral satellite measurements, and thus a different value may have to be introduced to characterise the spacetime lattice in order to be compatible with the ESA observations. Hence, spacetime appears to be extremely, but not perfectly, rigid; nevertheless, there is still a resistance for particles of zero rest mass moving through this lattice. As the photon propagates with finite speed c in vacuum, this can be considered a measure for the interaction of the photon with the atoms of space.
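The observable-universe figures quoted earlier (28.5 billion pc, 93.0 billion ly) can be checked by a unit conversion; a minimal sketch, with the conversion constants to five significant figures:

```python
import math

PARSEC_M = 3.0857e16      # metres per parsec
LY_M = 9.4607e15          # metres per light year

r_pc = 28.5e9 * PARSEC_M  # 28.5 billion pc ...
r_ly = 93.0e9 * LY_M      # ... equals 93.0 billion ly
print(f"{r_pc:.2e} m  {r_ly:.2e} m")   # both ~8.8e26 m, mutually consistent

volume = 4.0 / 3.0 * math.pi * r_pc ** 3
print(f"{volume:.1e} m^3")             # ~2.8e81 m^3 for a spherical volume
```

Both distance figures agree to better than one part in a thousand, so the radius of about 8.8 × 10²⁶ m used above is just this conversion.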
Despite the smallness of the spacetime grid spacing, spacetime ultimately is of discrete nature. Consequently, the propagator (denoting the probability amplitude for a physical system to go from an initial to a final state) as a (not countable) sum over all paths, as employed in the Feynman (1948) path integral method, is only applicable if the high order of infinity of the number of paths can be assigned a measure, but such a procedure is not at all clear. According to Feynman, a path is a 3D curve in space parametrised by time t. In the actual calculation of the path integral, curves are represented by a set of straight lines.
In physical reality, however, because of the metron nature of the spacetime lattice, a path is to be replaced by a sequence of two-dimensional surfaces, comprising a finite number of neighbouring metrons. In this regard, the CDT computer simulations of Ambjorn et al. seem to be closer to physical reality, because of the numerical grids utilised (Fig. 9).
If any anti-gravity device is ever to be developed, the first thing needed is a new discovery in fundamental physics – a new principle, not a new invention or application of known principles, is required.
A. V. Cleaver:
Electro-Gravity: What It Is or Might Be
Journal of the British Interplanetary Society, Vol. 16, 1957 
8 Principles of Propellantless Space Propulsion
The article by A. V. Cleaver (Rolls-Royce) entitled “Electro-Gravity: What It Is or Might Be” was written more than 60 years ago. In it, a completely novel approach to space propulsion is discussed, so-called interaction field propulsion. Furthermore, in the article, it is stated that the Martin Co. (now Lockheed Martin) actually ran advertisements appealing for scientific researchers interested in gravity. It is further reported that extramural contracts were placed, through their Research Institute for Advanced Study, with Dr. Pascual Jordan and Burkhard Heim, at the German Universities of Hamburg and Göttingen. A recent comprehensive and well-written biography on the life and scientific work of Burkhard Heim was published by von Ludwiger (in German). A few years later, the need for advanced space propulsion methods based on field propulsion was discussed again in the books by Seifert in 1959 and Corliss in 1960 as well as by Samaras. In the 1950s and 1960s, field propulsion, i.e. space propulsion without propellant, was a domain of intense research, but, as is well known, this once active field did not produce any space propulsion technology, and in the following decades, research in this area completely subsided. At that time, there existed an active scientific program aimed at going beyond the purely attractive force of Newtonian gravity.
The field saw a revival with the NASA Breakthrough Propulsion Physics program (1996–2001), which ended without having generated usable practical or theoretical consequences concerning novel space propulsion methods. It became clear from this project that engineering refinement of existing technology and known physical laws was not capable of providing breakthrough propulsion. A review of the state of the art as of 2003 was then given by J. E. Allen. In his final critique, Section 5, Allen concludes that the necessary breakthrough has not been achieved.
The quest for propellantless propulsion has a long history, meaning propulsion systems that rely upon the exchange of momentum and energy with their reference frame through the use of physically generated forces. In particular, in the 1950s in the United States, a comprehensive research program on gravity control propulsion was set up in the aerospace industry as well as at 14 universities. First, three concepts are discussed that recently have been investigated as a physical basis for breakthrough propulsion. It will be shown that the EM drive, the Woodward propulsion idea (based on Mach’s principle, which states that the acceleration of massive particles can only be measured relative to other matter in the universe, i.e. that a particle's inertia must depend on the distribution of the other matter in the universe), as well as any concept based on Kaluza–Klein theory, are physically unfeasible. Any breakthrough in propulsion or energy generation, in order to become a real game changer, needs to function without fuel. This insight is not new: it was already discussed in the book on space propulsion by Corliss in 1960, termed field propulsion, and was actively researched in industry and academia at that time. Rocket propulsion cannot be abandoned at present, because it is currently the only technology available that provides sufficient thrust to deliver material to low earth orbit or communication satellites to geostationary orbit. Second, if we are serious about spaceflight, a crash propulsion research program based on fundamental physics should be started, forming a task force dedicated to the aim of studying whether there exists novel gravitational physics that could lead to the development of propellantless propulsion.
This physical principle was already envisioned by W. Corliss and other physicists half a century ago. A novel physical principle for spaceflight as well as energy generation is needed first; then everything else will fall into place, i.e. the proper technology will follow from this principle. The technology must be feasible: wormholes and spacetime warping may be unrealistic or impossible, and antimatter for spaceflight is technologically unobtainable in the foreseeable future. It should be accepted, however, that, at least in the beginning, the science of any novel propulsion system will necessarily have to be speculative, for it cannot be based on current physics.
What could this new physical principle be? Obviously, it has to do with both gravitation and spacetime. Planetary gravitation needs to be overcome during launch, and once in space, a vehicle is moving through a medium called spacetime. Spacetime is considered a dynamical physical field, because it is inseparably associated with the all-pervading field of DE and is thus assumed to carry both energy (in the form of information) and momentum. Momentum exchange between the space vehicle and spacetime needs to take place, which is assumed to result in additional spacetime dynamics, that is, contraction or expansion. Instead of interacting with its fuel, the spacecraft interacts with the surrounding spacetime. How? Through the generation of gravity-like (acceleration) fields outside GR by the mechanism of (delayed) symmetry breaking.
The only approach that may have breakthrough potential is the generation of gravity-like fields that are outside GR. In order to overcome the enormous technical challenges posed to conventional propulsion systems by the drag of gravity, it becomes obvious that only propulsion without propellant can solve this problem. Field propulsion, aptly named by W. R. Corliss in his book Propulsion Systems for Space Flight (1960), was then an active topic of research, however, without delivering any practical results. Space propulsion is still dealing with the technologies (and hazards) developed in the 1950s and 1960s, and the vision portrayed by Wernher von Braun in his famous article in Collier’s magazine in 1952, entitled Man on the Moon, did not become a reality. A manned Mars mission, despite all the claims made by the various Mars projects – as the first author, while working at the European Space Agency, knows from firsthand experience – will not take place any time soon, unless a breakthrough in propulsion physics can be achieved.
Recently, several authors published propulsion concepts based on Weber’s electrodynamics, but Weber’s electrodynamics was developed before Maxwell’s and does not seem to provide any additional physics.
There are recent articles citing Weber’s EM formulation as if something new could be obtained from it. First, Weber came before Einstein, and it is not clear whether his formulation is even Lorentz invariant.
Maxwell’s description of EM has accounted for all observed EM phenomena since its inception in 1864. Moreover, and this is most important, Maxwell’s theory is the foundation for QED that has been confirmed to extreme experimental accuracy.
Even the slightest misconception in Maxwell’s EM would have been detected and corrected. Nothing like that was ever observed. Even in the latest LHC data taken at 13 TeV, the tiniest deviation would have been seen. So, even if Weber’s theory were also correct (which is doubtful, but unknown), nothing can be gained from it that cannot be explained with Maxwell’s theory, which is much easier to handle and, as we know, is Lorentz invariant; it remains correct at relativistic speeds. Any research in this direction will not lead to new results.
Extreme gravitomagnetic fields, termed hypercomplex gravity (see below), may be generated by the interaction between gravity and electrodynamics (in the so-called Heim experiment) and seem to be the only physically realistic chance for propellantless propulsion. Extreme gravitomagnetic (or hypercomplex) fields might have been measured by M. Tajmar, as was analysed in several articles. However, there is no smoking gun proving their existence.
There is no way for hypercomplex gravitational fields to exist within Einstein’s GR, which means that completely novel physics concepts have to be introduced as discussed in this article.
9 Physically Impossible Propulsion Concepts
Unphysical EM Drive
Recently, the EmDrive (see http://www.newscientist.com/data/images/archive/2568/25681402.jpg), which has been around since 1999, was hailed as the “engine that might break physical laws” by generating an asymmetric force owing to different EM radiation pressures on the side walls of a closed cylindrical resonator. But this is wishful thinking. By squeezing a closed Coke can on one side and trapping electromagnetic radiation inside, the can is supposed to move in the direction of the smaller cross section. This will never happen. This is a purely electromagnetic phenomenon, and all descriptions citing an interaction with the vacuum are equally false. As recent experiments have shown (see the above reference), the vacuum is extremely stable, and extracting energy from the vacuum, i.e. bringing the vacuum to a lower energy state, requires extreme amounts of energy. No simple EM phenomenon can cause such an interaction with the vacuum. Therefore, regardless of who has measured what, these are artefacts. In the words of W. Pauli: “This is not right. It is not even wrong.”
Microwave ovens do not fly. There is absolutely no experimental evidence for a varying speed of light, c, within the truncated cone cavity, nor does c change outside the cavity. It is mentioned in the New Scientist article that 1 kW of power is needed to generate 1.2 mN of thrust. Compare this to the Saturn V, which generated 33,000 kN of thrust (five F-1 engines, each at 6.7 MN thrust). Using an EM drive would amount to generating a power of about 3 × 10¹⁰ kW; that is, one would need about 30,000 large nuclear power plants to produce this amount of thrust. Apart from the fact that this kind of energy source would destroy any material device, it would be the most inefficient way to fly. It also might be a little heavy, because nuclear power plants are not lightweight. There is no reasonable physical principle backing the EM drive, not even an unconventional theoretical idea.
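The power estimate above follows from simple arithmetic on the two figures quoted (1.2 mN per kW, and 33,000 kN of Saturn V thrust); the one-gigawatt plant size is an assumption for scale:

```python
thrust_per_kw = 1.2e-3     # N of thrust per kW, figure from the New Scientist article
saturn_v = 33_000e3        # N, total Saturn V first-stage thrust quoted above

power_kw = saturn_v / thrust_per_kw       # kW needed for Saturn V thrust via EM drive
plants = power_kw / 1.0e6                 # one large nuclear plant ~1 GW = 1e6 kW (assumed)
print(f"{power_kw:.2e} kW, about {plants:,.0f} one-gigawatt plants")
```

This yields 2.75 × 10¹⁰ kW, i.e. roughly 27,500 gigawatt-class plants, consistent with the "about 30,000" quoted above.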
Invoking de Broglie’s (1928) and later D. Bohm’s (1952) interpretation of QM does not result in any new physics. We are talking about a novel (at that time) interpretation of QM, different from (the now outdated) Bohr’s idea. Never did Bohm postulate that the virtual particles of the vacuum can affect his necessarily nonlocal (i.e. faster than light) pilot wave!
An interaction of DM particles, as mentioned in the article, with the microwaves inside is not possible either. Dark matter, as its name tells us, is not charged and is electromagnetically inactive; otherwise, it would have been detected. Dark matter is not subject to electromagnetic interaction. There is no physical basis for the EM drive, which rests on a purely conventional EM phenomenon. Otherwise, one could just squeeze a Coke can on one side, get some microwave radiation inside, and the can should start moving in the direction of the smaller surface. Regardless of what has been measured, this cannot be real thrust.
Mach’s Principle Retired
The Woodward drive, under development since the 1990s, has also been discussed in the literature, and it is equally physically unfeasible. It is based on Mach’s principle, which, as is now known, is physically incorrect.
First, Mach’s principle is not part of GR. With the existence of the Higgs boson confirmed by the LHC (postulated 1964, measured July 2012 at CERN), particle mass comes from the interaction with the scalar Higgs field and not from the interaction with the other masses in the universe, as postulated by E. Mach in the 19th century. Mach’s principle is also in conflict with Einstein’s GR, as the rest mass of a particle is a relativistic invariant, an intrinsic property that does not depend on the distribution of the surrounding masses in the cosmos as claimed by E. Mach. His idea is not based on physical facts but rather on philosophy.
Moreover, if the inertial mass of a proton were affected by mass distributions on the cosmological scale, then there would have to be an anisotropy in the inertia of every proton on Earth! Because of the mass distribution in our own galaxy, a proton accelerated towards the galactic centre would have a mass different from a proton subject to an acceleration in the opposite direction, simply because the mass of the galaxy is concentrated in the galactic centre.
This could be measured very accurately by the Mössbauer effect and by nuclear magnetic resonance (frequency) techniques, but such an effect was never observed.
For any experiment performed on the rotating Earth, the proton mass should depend on direction – according to Mach. This is clearly not the case! With the existence of the Higgs boson, Mach’s principle became an outdated idea and is a relic of the 19th-century mechanistic worldview. Hence, there is no physical principle for the Woodward drive. Inertia is an intrinsic property of matter and is not related to other masses. The Higgs boson was found, and it is the source giving mass to otherwise massless particles. As we know, Einstein’s Weak Equivalence Principle has proved to be correct to a very high degree, and thus it is correct to state that inertial and gravitational mass are equal.
Kaluza–Klein theory of five dimensions is incorrect.
This theory is physically wrong, because the constraint it requires leads to the wrong Lagrangian for EM. That is, any five-dimensional theory (four spatial dimensions) is necessarily incompatible with EM.
Prediction is very difficult,
especially if it is about the future.
10 Summary of Physical Concepts of EHT
Finally, it seems appropriate to summarise the novel physical ideas, collected under the name EHT and presented in the previous sections, to demonstrate their drastic physical implications in extending both the SM of particle physics and cosmology. The influx of recent experiments (Section 5.2) played a major role in the formulation of these principles. In particular, EHT requires giving up the cherished concept of extra space dimensions, replacing it by extra systems of numbers. As a direct result, see (5), two novel types of matter were introduced, termed negative matter (DM) and hypercomplex matter (virtual particles), that are described in neither SR nor GR. Hence, physical phenomena that are based on the presence of this type of matter, that is, fields, may not be subject to the constraints imposed by these two theories.
The concept of duality proved to be of overriding importance. It is the foundation for the existence of the cosmos, because the two fundamental energy concepts of EHT, namely, the energy of information due to Szilárd, as expressed by (2), and the energy of mass, that is, Einstein’s famous equation E = mc², are considered a physical realisation of this principle. These two energy forms can be converted into each other. Information energy of the expanding spacetime lattice is transformed into matter energy. That is, the potential energy of the evolving spacetime lattice transforms into DE, the precursor of matter.
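For scale, Szilárd's information energy per bit – presumably the content of (2) – is the Landauer bound E = k_B T ln 2, which is minute compared to particle rest energies; a quick check:

```python
import math

K_B = 1.380649e-23      # J/K, Boltzmann constant (exact SI value)

# Landauer/Szilard energy per bit at room temperature and at the present CMB temperature.
e_bit_room = K_B * 300.0 * math.log(2)
e_bit_cmb = K_B * 2.725 * math.log(2)
print(f"{e_bit_room:.2e} J  {e_bit_cmb:.2e} J")   # ~2.9e-21 J and ~2.6e-23 J per bit
```

Converting macroscopic amounts of mass-energy would therefore require enormous numbers of such information quanta, consistent with the lattice picture of very many metrons.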
Hence, according to EHT, string theory and SUSY are rendered untenable, and GUTs will not be feasible in their currently foreseen form. Moreover, in cosmology, the concept of the Big Bang has to be replaced by a Quantised Bang (see below); also, the GUT era in the course of cosmic evolution appears to be infeasible. Extended Heim Theory also makes novel predictions. Extreme gravitational fields outside GR should exist, dubbed hypercomplex gravitational fields; a fourth family of leptons and quarks is predicted; and the existence of DE and DM is explained in the context of EHT. Furthermore, both time, t, and the speed of light in vacuum, c, are promoted from real to complex quantities, which requires an extension of the concept of Einstein’s spacetime. Finally, the concept of matter is promoted from real to hypercomplex matter.
The formulation of EHT is based on a set of fundamental physical principles that cannot be proved but are formulated in accordance with generally accepted physics and the known experimental results (Section 4). These principles have far-reaching consequences for both the SM of particle physics and cosmology.
The complete unification of physical interactions is not possible according to the principle of duality.
The concept of extra number systems replaces the idea of extra space dimensions. Since the 1970s, string theory and SUSY have contradicted all experiments of particle and atomic physics that were specifically conceived to prove their existence. In particular, the continuously improved measurements of the range of validity of Newton’s gravitational law down to the atomic scale have rendered the concept of extra space dimensions untenable (Fig. 2). The paradigm shift of EHT therefore replaces extra dimensions by extra number systems that give rise to the concept of hypercomplex masses. A direct physical consequence is the existence of extreme gravitomagnetic fields that are outside GR. These novel types of mass are also considered to be responsible for the existence of a fourth family of leptons and quarks, which is instrumental in the explanation of DM.
The fundamental mathematical group is broken down into four symmetry groups (Section 5).
The unphysical concept of the Big Bang is replaced by the Quantised Bang. The evolution of the universe originates from the generation of the first discrete elemental surface by quantum fluctuations, marking the transition from nonexistence to existence. The next quantum fluctuation can cause the elemental surface to disappear, or it may create a second elemental surface that interacts with the first one through the spin of these two metrons. This releases a quantum of information energy that is converted into a quantum of DE, which immediately acts to expand the spacetime comprising the two metrons. Therefore, the probability for a third metron to be generated is higher than the probability for the second metron to disappear. Thus, the increase in the number of space atoms is exponential, driven by the generated DE. A highly simplified version of this inflation model is given in Chapter 9 of the cited reference. The spacetime lattice quickly goes from discrete to continuous, allowing the use of the Einstein field equations in conjunction with DE (responsible for inflation). Eventually, photons, which are also the source for ordinary matter and DM, are generated from DE – ending inflation – and mediate the force that defines electric charge.
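The birth-death picture sketched above can be caricatured by a toy Markov chain; all probabilities below are hypothetical illustration values, not quantities of the EHT model. Once creation is even slightly favoured over disappearance, the count grows exponentially:

```python
import random

def evolve(n0, p_create, p_destroy, steps, rng):
    """Toy birth-death chain: each metron independently spawns a new one with
    probability p_create or vanishes with probability p_destroy per step."""
    n = n0
    for _ in range(steps):
        born = sum(1 for _ in range(n) if rng.random() < p_create)
        died = sum(1 for _ in range(n) if rng.random() < p_destroy)
        n = max(1, n + born - died)   # once created, spacetime persists
    return n

n_final = evolve(n0=10, p_create=0.10, p_destroy=0.05, steps=100,
                 rng=random.Random(42))
print(n_final)   # grows roughly like 10 * 1.05**100, i.e. order 10**3
```

The expected per-step growth factor is 1 + p_create − p_destroy, so any positive bias towards creation yields the exponential increase described above.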
Another basic modification concerns the notion of spacetime as formulated by Einstein. There is a dual spacetime, which contains DM, making DM principally unobservable in our spacetime. However, the gravitational impact of DM can be observed.
As a result of the four groups, there exist six gravitational bosons. Three of these bosons are for the cosmological gravitational fields, which are of purely geometric nature and cannot be unified with the three other forces.
The three particles representing the extreme gravitomagnetic field are believed to result from an interaction with electromagnetism and are spin 1 fields. This type of gravitation is thought to be unifiable with the strong interaction.
Dark matter seems to be absent within galaxies but is concentrated in halos. Thus, the result of the MOND formula (which does give the correct acceleration) cannot be explained by DM, as DM does not exist within galaxies. A discussion was presented to obtain the MOND formula based on the presence of two types of DE particles, attractive and repulsive, i.e. by polarisation through the mass density inside galaxies, which is about 10⁷ times higher compared to intergalactic space.
The extreme gravitomagnetic fields may be utilised as a means for space propulsion without propellant. The key seems to be a specific material composition (two or more metallic components) that might work at ambient temperature.
The basis for the novel physics presented is a collection of foundational physical principles, which individually are generally accepted and proved by experience and experiment. The application of these principles results in major implications for cosmology, including a closed topology of the universe, modifying the cosmic genesis by replacing the hot Big Bang with a quantised bang and explaining the nature of DE and its role in inflation. Furthermore, these principles predict the nonexistence of singularities (i.e. wormholes are not considered feasible physical objects). In addition, the fundamental principles are employed to discuss the MOND hypothesis and to give a derivation of the MOND formula but without giving up on Newton’s gravitational law.
An extended group structure for the description of elementary particles is produced by introducing extra systems of numbers – quaternions (hypercomplex, noncommutative) ℍ and octonions (non-associative) 𝕆. These extra number systems also account for additional types of matter (negative, −m, and hypercomplex ), replacing the extra space dimensions of string theory.
These novel types of matter are instrumental because the existence of a fourth family of particles of negative mass is predicted, representing DM.
It is postulated that DM particles are located in dual de Sitter spacetime , but their gravitational interaction is observed in normal spacetime. is marked by an imaginary time component, , but shares the spatial components of our spacetime manifold . Hence, DM particles cannot be detected in our spacetime. The existence of (virtual) hypercomplex masses may explain the measured discrepancies of the proton diameter and the contradiction in the measured lifetimes of the neutron.
The group structure using the field of hypercomplex numbers gives rise to 12 elementary charges and six gravitational bosons with three gravitational constants (see text). The six gravitational bosons comprise two groups: the first three are the cosmological bosons (the graviton, the spin 2 particle from Einstein’s theory of gravity), (spin 1), and νq (spin 0) that might mediate the forces for the cosmological fields, unless these are of purely geometric origin. In addition, according to EHT, an interaction between gravity and electromagnetism should occur, mediated by three additional gravitational bosons , (spin 1), and (spin 0), which are produced either at low temperatures in the laboratory (symmetry breaking) or at high temperatures in the vicinity of quasars. In both cases, extreme gravitomagnetic (or hypercomplex) fields should have been generated that may be up to 18 to 20 orders of magnitude larger than the gravitomagnetic fields of GR. If their existence can be confirmed, they are clearly outside GR. The reported change in the fine structure constant αf seen in the vicinity of quasars may be another hint of the presence of these extreme gravitomagnetic fields, which are believed to modify the permeability of free space, μ0. The extreme gravitomagnetic fields may be produced by rotating black holes in the form of quasars. New propulsion and energy generation technology might follow from extreme gravitomagnetic fields, which owe their existence to the conversion of photons γ into gravitophotons, reflecting the particle nature of the resulting extreme field.
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke
11 Physics, Cosmology, and Technology Discussion
One of the key features of EHT lies in the formulation of the underlying physical principles employed by nature, because everything follows once these principles have been set up. Einstein himself considered this the very first step before any mathematical formulation of a theory should take place.
If we are wrong at this step, then there is no hope of setting up a comprehensive theory of both space and matter. An incorrect set of principles means that the theory has to be adjusted repeatedly – creating more epicycles.
As of winter 2018, the LHC data have provided zero evidence for any of the concepts employed in advanced physical theories that have ruled particle physics for more than four decades, namely, SUSY, extra space dimensions, and GUT. This is a strong sign that nature is using a different set of rules. Hence, in this article, novel ideas were presented in order to resolve the long-standing deadlock.
The new concepts of hypercomplex numbers, dual spacetime, and elemental surfaces for spacetime necessarily lead to different physics in the form of negative and hypercomplex mass as well as different groups in physics based on the field of hypercomplex numbers. For instance, there should be a fourth family of particles, accounting for DM. In addition, the existence of hypercomplex masses is postulated (virtual particles that are supposed to be generated in the interaction between electromagnetism and gravitation) that give rise to gravity bosons of spin 1, thus producing the much stronger fields discussed above. In other words, there should exist a second type of gravity outside GR. By contrast, the cosmological gravity fields are aptly described by Einstein’s GR. Moreover, the Big Bang hypothesis should be replaced by the Quantised Bang idea, based on the existence of the metron (elemental surface). Dark matter is composed of negative mass and resides in dual spacetime, while only its gravitational interaction can be observed in our spacetime. That is, DM particles cannot be detected in our spacetime, nor can they be produced by accelerators. As there are no singularities in the cosmos, the geometry of the universe must be closed, and eventually expansion (symmetry breaking) is converted into contraction. The baryonic asymmetry is attributed to cosmic motion.
The overriding principle in physics as discussed here is the principle of duality. This may sound vague, but, as was shown, duality imposes severe constraints both on any physical theory of matter and cosmology. The long-sought grand unification of all physical laws, even at extreme energies, does not seem to be possible.
For instance, duality requires that from the very first instant of the cosmos both spacetime (first as a lattice, later on in the evolution as a manifold) and DE are formed at the same time. Together with the quantisation principle for elemental surfaces, it leads to a Quantised Bang strictly obeying energy conservation (apart from quantum fluctuations). In other words, there was no Big Bang violating the principle of energy conservation. Based on the existence of the metron, the Big Bang hypothesis should be replaced by the Quantised Bang.
The cosmos seems to be governed by two energy principles that are dual to each other: Szilárd’s energy principle that measures the energy resulting from information (or organisation) and Einstein’s energy principle of matter (or radiation). In the evolution (including inflation) of the universe, the information energy (potential energy, negative) of the spacetime lattice (sugar cube crystal model) is converted into the precursor of matter energy, i.e. DE (positive energy density). This quantised bang cosmology accounts for the existence of DE as well as the subsequent inflation period.
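The quantitative anchor of Szilárd's principle is the minimal energy associated with one bit of information, E = k_B T ln 2 (Landauer's bound). The sketch below simply evaluates this number and is not a model of the lattice-to-DE conversion itself.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)

def info_energy_per_bit(temperature_kelvin):
    """Szilard/Landauer bound: minimal energy tied to one bit of
    information at temperature T, E = k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2.0)

room = info_energy_per_bit(300.0)   # ~2.9e-21 J at room temperature
cmb = info_energy_per_bit(2.725)    # ~2.6e-23 J at the CMB temperature
```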
There must be a symmetry breaking mechanism that converts DE into DM and OM (or NOM), reducing the amount of DE and putting an end to inflation.
The other key idea is the extension of the isospin space concept to an eight-dimensional gauge space , Heim space, with subspace structure 1-3-2-2, from which the overall group structure for matter and spacetime is derived. There are no extra space dimensions (no strings); instead, there are extra number systems, i.e. nature utilises the field of hypercomplex numbers, which extends the idea of matter and antimatter. This in turn leads to the idea of extreme gravitomagnetic fields mediated by virtual particles of hypercomplex mass that act like a catalyser; i.e. they trigger the reaction but are not visible in the initial and final states of a process.
The duality principle further impacts matter and spacetime. These are two different quantities that cannot be unified; instead, they represent two sides of the same coin.
In this regard, there is no way to unify all physical interactions. Einstein’s cosmological fields may be represented by three mediator gravity bosons, but are the result of spacetime geometry (time-dependent curvature is equivalent to the propagation of gravity waves). It is not at all clear that gravitational spin 2 bosons really exist that can be measured like photons. They may, however, exist as auxiliary physical entities.
In accordance with the duality principle, matter is different from geometry and cannot be expressed by geometry. This means the geometrisation of physics as foreseen by Wheeler et al. may not be feasible. Furthermore, the Einstein field equations cannot describe an equality between matter and geometry, but express an equivalence only. Rather, matter and spacetime influence each other and therefore can be expressed only as an equivalence. As a result, the Planck length may not represent a meaningful length scale for pure geometry phenomena. The results of the ESA Integral satellite indicate a length scale much smaller than the Planck length for the grid spacing of the spacetime lattice. Hence, Planck’s constant should not occur in an expression for this grid spacing. Instead, the Schwarzschild radius of the proton (18 orders of magnitude smaller than the Planck length) may be the correct measure. It should be noted that we presently presume there is no gravitational interaction (Einstein) among leptons, only hadron–hadron and hadron–lepton, in addition to the novel gravitational interaction with the DE field itself, manifested by the expansion of spacetime (which is too small to be measured in propellantless propulsion).
The other idea is that a dual spacetime exists. In our spacetime, matter is positive, while in dual spacetime matter is negative. Both spacetimes share the same three spatial coordinates, but time is real in our spacetime, while it is imaginary in dual spacetime. The same holds for the speed of light.
However, the extreme gravitomagnetic or fields, group SU(2), are mediated by three gravity bosons that are like other mediator bosons from particle physics and thus are completely different from the Einstein cosmological gravity fields.
Moreover, because of duality, the weak and EM forces can be unified, and the strong and the force may be unified. The unification of the two remaining interactions should not be possible. This said, there could not have been a GUT era in the early cosmic evolution when gravity became distinct from the other three forces, which still were united at the GUT energy.
Coupling constants are outside physics and are (in EHT) based on number theory, meaning there could be a relationship between prime numbers and the structure of irreducible groups in physics. However, we do not have a really convincing derivation of these numbers; a lot of guesswork and speculation are involved.
The above ideas have major ramifications for the two SMs of particle physics and cosmology  and require major extensions of both models. In particular, there are no strings, and in the SM of cosmology, there is no Big Bang. Dark matter cannot be found in our spacetime; instead, it resides in dual space, because its mass is negative. This straightforwardly leads to a fourth family of particles and also requires 15 gluons (Fig. 10). The largest modification concerning gravity is the existence of fields that are both attractive and repulsive and interact with the other three interactions as well as with the DE field. Moreover, the universe is closed, and at some time in the future, expansion should change into contraction. In addition, the MOND formula is correct, but Newton’s law is valid down to the atomic scale. The origin of DE and DM is explainable from the novel ideas of EHT. The predictions concerning the generation of extreme gravitomagnetic or  fields should be fairly easy to test by setting up proper experiments.
This, in a nutshell, is how we see the framework of EHT, but there remain a lot of details to be filled in.
Needless to say, there are still a lot of riddles, and many other topics remain to be explored, such as the principle of structure formation and organisation. The universe is definitely not the result of so-called self-organisation or accidental processes; rather, there seems to be a governing mechanism directing all physical processes, hence the entelechial and aeonic dimensions in internal Heim space .
This article is dedicated to the eminent Andreas Resch, P Dr. Dr., C.Ss.R., professor and director at the Institut für Grenzgebiete der Wissenschaft, Innsbruck, Austria, to acknowledge his scientific work, Imago Mundi, whose prime subject was and is the creation of a consistent Weltbild, to unify both science and humanities, bridging the gap that still seems to divide these two disciplines; and to Hozumi Gensho Roshi, professor of applied sciences at Hanazono University, Kyoto, Japan, for his teachings (teisho) of more than 30 years (e.g. YouTube videos) in Europe explaining the nature of reality. These two eminent scholars, although from very different backgrounds, have dedicated their works to the quest for ultimate reality, thus elucidating the underlying reality of the cosmos. The authors are most grateful to Prof. Greg Daigle, former adjunct professor at the University of Minnesota, for numerous e-mail discussions and literature hints as well as his relentless efforts to improve the style, clarity, and contents of this article. The TIKZ programming efforts of Dr. H.-G. Paap, HPCC Regensburg, in preparing the figures are greatly appreciated, as well as the discussions with my (first author) colleague, Prof. Dr. T. Waldeer, Ostfalia University.
A. Einstein, On the Method of Theoretical Physics, The Herbert Spencer lecture, delivered at Oxford, June 10, 1933. Published in Mein Weltbild, Querida Verlag, Amsterdam 1934. Google Scholar
J. Hauser and W. Dröscher, Z. Naturforsch. 72, 493 (2017). Google Scholar
S. Hossenfelder, Lost in Math, Basic Books, New York 2018, p. 291. Google Scholar
T. Auerbach and I. von Ludwiger, J. Scientific Exploration 6, 217 (1992). Google Scholar
The ATLAS Collaboration, M. Aaboud, G. Aad, B. Abbott, O. Abdinov, et al., J. High Energy Phys. arXiv:1708.09266v1 [hep-ex] (2017). Google Scholar
C. Cesarotti, Q. Lu, Y. Nakai, A. Parikh, and M. Reece, High Energ. Phys. Phenomenol. arXiv:1810.07736v1 [hep-ph] (2018). Google Scholar
W. Dröscher and J. H. Hauser, An Introduction to the Physics, Astrophysics, and Cosmology of Gravity-Like Fields, HPCC-Space GmbH, Hamburg, Germany 2016, (www.hpcc-space.de), available from www.amazon.com.
COSINE-100 Collaboration, Nature 564, 83 (2018). Google Scholar
J. Brockman (Ed.), This Idea Must Die, Harper Perennial, New York, London, Toronto, Sydney 2015. Google Scholar
K. Pardo, M. Fishbach, D. E. Holz, and D. N. Spergel, arXiv:1801.08160 [gr-qc] (2018). Google Scholar
F. Zwicky, Helv. Phys. Acta 6, 110 (1933). Google Scholar
M. Battaglieri, A. Belloni, A. Chou, P. Cushman, B. Echenard, et al., High Energ. Phys. Phenomenol. arXiv:1707.04591v1 [hep-ph], 102 (2017). Google Scholar
F. Agostini, The XENON Project: Backgrounds and New Results, PhD thesis, Gran Sasso Science Institute, Italy 2017, https://www.bo.infn.it/xenon/sito_web_Bologna/tesi/tesi_agostini_dottorato.pdf.
M. Zych and C. Brukner, Nat. Phys. (2018). DOI:10.1038/s41567-018-0197-6. Google Scholar
C. M. DeWitt and D. Rickles (Eds.), The Role of Gravitation in Physics, Report from the 1957 Chapel Hill Conference, republished as open access in 2011 by Max Planck Research Library for the History and Development of Knowledge (Eds. J. Renn, R. Schlögl, B. F. Schutz). Google Scholar
A. Zee, Einstein’s Gravity in a Nutshell, Princeton University Press, 2013. Google Scholar
C. Hespel, Des Univers Multiples, Espace et Astrophysique, No. 21, Janvier 2018, p. 64. Google Scholar
A. Barrau, Des Univers Multiples, Dunod, 2017. Google Scholar
P. Yogananda, Wine of the Mystic: The Rubaiyat of Omar Khayyam, Self-Realization Fellowship, 1995, p. 81. Google Scholar
N. Prakash, Dark Matter, Neutrinos, and our Solar System, World Scientific, 2013. Google Scholar
ADMX collaboration, A Search for Invisible Axion Dark Matter with the Axion Dark Matter Experiment, arXiv:1804.05750v2 [hep-ex], 17 April 2018. Google Scholar
CERN, LHC prepares for new achievements, CERN News, December 3, 2018. Google Scholar
CMS Collaboration, Search for dark matter produced in association with heavy-flavour quark pairs in proton-proton collisions at TeV, arXiv:1706.02581v1 [hep-ex], 8 June 2017. Google Scholar
A. Loeb, Scientific American Space & Physics, 2018, p. 25. Google Scholar
T. P. Singh, Outline for a quantum theory of gravity, arXiv:1901.05953v2 [gr-qc], 31 January 2019. Google Scholar
B. Heim, in: Ein Bild vom Hintergrund der Welt in Welt der Weltbilder (Ed. A. Resch), Imago Mundi Vol 14, Resch Verlag, Innsbruck 1994. Google Scholar
J. Ambjorn, The Impact of Topology in CDT Quantum Gravity, arXiv:1604.08786v1 [hep-th], 29 April 2016. Google Scholar
M. D. Schwartz, Quantum Field Theory and the Standard Model, Cambridge Univ. Press, 2014. Google Scholar
J. Ambjorn, J. Jurkiewicz, and R. Loll, The Self Organizing Quantum, Scientific American, August 2008. Google Scholar
J. Ambjorn, J. Jurkiewicz, and R. Loll, Quantum Gravity: The Art of Building Spacetime, Chapter 18 in: Quantum Gravity, (Ed. D. Oriti), Cambridge Univ. Press, 2009. Google Scholar
J. Ambjorn, A. Görlich, J. Jurkiewicz, and R. Loll, CDT – an Entropic Theory of Quantum Gravity, arXiv:1007.2560v1 [hep-th], 15 Jul 2010. Google Scholar
J. Ambjorn, Recent Results in CDT Quantum Gravity, arXiv:1509.08788v1 [gr-qc], 29 September 2015. Google Scholar
B. Heim, Z. Naturforsch. 32a, 233 (1977). Google Scholar
A. Baltimore, Viruses, Engineering & Science, No 1, California Institute of Technology, 2004. Google Scholar
C. Kiefer, Der Quantenkosmos, S. Fischer, 2009. Google Scholar
C. Kiefer, Quantum Gravity, 3rd. ed., Oxford University Press, 2012. Google Scholar
C. M. Will, Theory and Experiment in Gravitational Physics, 2nd ed., Cambridge University Press, 2018. Google Scholar
R. Penrose, The Road to Reality, Jonathan Cape, 2004. Google Scholar
GERDA Collaboration, M. Agostini, M. Allardt, A. M. Bakalyarov, M. Balata, et al., Nature 544, 5 (2017). Google Scholar
A. Zee, Quantum Field Theory in a Nutshell, 2nd ed., Princeton University Press, 2010. Google Scholar
Z. Bern, Do I have to Draw a Diagram: A Tale of Quantum Gravity, KITP Public Lecture April 20, 2016, UCLA & KITP. Google Scholar
E. Witten, Notices of the AMS, 45, 1124 (1998). Google Scholar
G. Kane, Supersymmetry and Beyond, Foreword by E. Witten, Basic Books, New York 2013. Google Scholar
W. Dröscher, J. Space Expl. 3, Mehta Press, November 2014. Google Scholar
J. Hauser, J. Space Expl. 3, Mehta Press, November 2014. Google Scholar
D. Andriot and G. L. Gómez, Signatures of extra dimensions in gravitational waves, arXiv:1704.07392v2 [hep-th], 21 June 2017. Google Scholar
J. Hauser and W. Dröscher: Emerging Physics for Novel Field Propulsion Science, Space, Propulsion & Energy Sciences International Forum SPESIF-2010, American Institute of Physics, Conference Proceedings, 978-7354-0749-7/10, 2010, p. 15. Google Scholar
G. L. Greene and P. Geltenbort, The Neutron Enigma, Scientific American, April 2016, p. 37. Google Scholar
G. Panico and A. Wulzer: The Composite Nambu Goldstone Higgs, arXiv:1506.01961v2 [hep-ph], 11 November 2015. Google Scholar
P. Mannheim, Alternatives to Dark Matter and Dark Energy, arXiv:astro-ph/0505266v2, 1 August 2005. Google Scholar
E. Siegel, Merging Neutron Stars Deliver Deathblow To Dark Matter And Dark Energy Alternatives, October 25, 2017 (this is not a science paper, but a science blog). Google Scholar
E. Siegel, Beyond the Galaxy, World Scientific, 2016, p. 376, Chaps. 9 and 10. Google Scholar
P. Schmüser, Feynman-Graphen und Eichtheorien für Experimentalphysiker (Lecture Notes in Physics), Springer, 1998. Google Scholar
N. Prakash, Mathematical Perspectives on Theoretical Physics: A Journey from Black Holes to Superstrings, Imperial College Press, 2003. Google Scholar
F. Wilczek, QCD Exposed in Fantastic Realities: 49 Mind Journeys and a Trip to Stockholm, World Scientific, 2006. Google Scholar
F. Wilczek, Phys. Today 53, 22 (2000). Google Scholar
M. Kaku, Quantum Field Theory, Oxford 1993. Google Scholar
J. Maldacena, The Illusion of Gravity, Scientific American, November 2005, p. 56. Google Scholar
J. Maldacena and L. Susskind, Cool horizons for entangled black holes, arXiv:1306.0533v2 [hep-th], 11 July 2013, 48 pp. Google Scholar
S. Carroll, From here to Eternity, Dutton, 2010. Google Scholar
W. Greiner, Classical Mechanics, Point Particles and Relativity, Chapter 28, Springer, 2004. Google Scholar
E. Zeidler, Quantum Field Theory I, Basics in Mathematics and Physics, Springer, 2005. Google Scholar
A. Ijjas, P. J. Steinhardt, and A. Loeb, Cosmic Inflation Theory Faces Challenges, Scientific American, February 2017. Google Scholar
H. Traunmüller, Z. Naturforsch. 73, (2018), DOI: 10.1515/zna-2018-0217 and https://www.researchgate.net/publication/328007338.
Ligo.org, Constraints on cosmic strings using data from the first Advanced LIGO observing run, arXiv:1712.01168v2 [gr-qc], 2 May 2018.
R. Penrose, The Large, the Small and the Human Mind, Cambridge University Press, 1999. Google Scholar
H.E.S.S. Collaboration, Search for dark matter annihilations towards the inner Galactic halo from 10 years of observations with H.E.S.S, 1607.08142v1 [astro-ph.HE], 27 July 2016. Google Scholar
Xenon Collaboration, First Dark Matter Search Results from the XENON1T Experiment, PRL 119, 181301 (2017) Physical Review Letters, 3 November 2017. Google Scholar
ATLAS Collaboration, Search for scalar dark energy in and mono-jet final states with the ATLAS detector, ATL-PHYS-PUB-2018-008, 29th June 2018, pp. 12. Google Scholar
D. L. Wiltshire and A. Coley, The Case for Putting Aside Dark Energy to Reevaluate General Relativity, Wire, 30 June 2017. Google Scholar
E. Siegel, Could The Energy Loss From Radiating Stars Explain Dark Energy? accessed at https://sciencesprings.wordpress.com/2018/06/23/from-ethan-siegel-could-the-energy-loss-from-radiating-stars-explain-dark-energy/.
IceCube collaboration, Searches for Sterile Neutrinos with the IceCube Detector, arXiv:1605.01990v2 [hep-ex], 29 August 2016. Google Scholar
M. Ahlers and F. Halzen, Opening a New Window onto the Universe with IceCube, arXiv:1805.11112v1 [astro-ph.HE], 28 May 2018. Google Scholar
MiniBooNE Collaboration, Observation of a Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment, arXiv:1805.12028v1 [hep-ex], 30 May 2018. Google Scholar
B. Povh, Teilchen und Kerne, Springer, 2014. Google Scholar
Daya Bay Collaboration, F. P. An, et al., Erratum: Measurement of the reactor antineutrino flux and spectrum at Daya Bay [Phys. Rev. Lett. 116, 061801 (2016)], Phys. Rev. Lett. 118, 099902 (2017). Google Scholar
IceCube Collaboration, Search for sterile neutrino mixing using three years of IceCube DeepCore data, arXiv:1702.05160v2 [hep-ex], 26 June 2017. Google Scholar
C. Grupen, Einstieg in die Astroteilchenphysik, Springer, 2018, p. 440. Google Scholar
M. Di Mauro, The origin of the Fermi-LAT γ-ray background, arXiv:1601.04323v1 [astro-ph.HE], 17 January 2016. Google Scholar
P. Maestro, Cosmic rays: direct measurements, 1510.07683v1 [astro-ph.HE], 26 October 2015. Google Scholar
P. Marrochesi, CALET: a high energy astroparticle physics experiment on the ISS, 1512.08059v1 [astro-ph.IM], 26 December 2015. Google Scholar
The Fermi-LAT Collaboration, Phys. Rev. Lett. 115, 231301 (2015). Google Scholar
Alpha Magnetic Spectrometer, Wikipedia, accessed 27 May 2017. Google Scholar
N. Masi, Il Nuovo Cimento C 39, 282 (2016). Google Scholar
Fermilab News, Construction begins on one of the world’s most sensitive dark matter experiments, Fermilab News, 7 May 2018. Google Scholar
J. D. Bowman, Probing the Epoch of Reionization with Redshifted 21 cm HI Emission, PhD Thesis, MIT, June 2007, pp. 147. Google Scholar
R. Barkana, The Rise of the First Stars: Supersonic Streaming, Radiative Feedback, and 21-cm Cosmology, Physics Reports, 13 May 2016, 88 pp. Google Scholar
D. Tong, The Unquantum Quantum, Scientific American, December 2012. Google Scholar
W. Dröscher and J. Hauser, Emerging Physics for Novel Field Propulsion, 46th AIAA/ASME/SAE/ASE Joint Propulsion Conference and Exhibit, AIAA 2010-NFF1, 26–28 July 2010, Nashville, TN (available at www.hpcc-space.com).
W. Dröscher and J. Hauser, Physics of Axial Gravity-Like Fields, 47th AIAA/ASME/SAE/ASE Joint Propulsion Conference and Exhibit, AIAA 2011-6042, 31 July – 3 August 2011, San Diego, CA, 23 pp. (available at www.hpcc-space.com).
A. V. Cleaver, Journal of the British Interplanetary Society 16 (1957/58). Google Scholar
I. von Ludwiger, Burkhard Heim: Das Leben eines vergessenen Genies, Scorpio Verlag, München, Germany, 2010, p. 478. Google Scholar
H. Seifert (Ed.), Space Technology, Wiley, 1959. Google Scholar
W. R. Corliss, Propulsion Systems for Space Flight, McGraw Hill, 1960. Google Scholar
D. G. Samaras, Applications of Ion Flow Dynamics, Chapter 5, Prentice Hall Space Technology Series, 1962. Google Scholar
M. Millis, (Ed.), NASA Breakthrough Propulsion Physics, NASA/CP 208694, January 1999. Google Scholar
M. Tajmar, The SpaceDrive Project – First Results on EMDrive and Mach-Effect Thrusters, 69th International Astronautical Congress (IAC), Bremen, Germany, 1–5 October 2018. Google Scholar
M. Tajmar, The SpaceDrive Project – First Results on EMDrive and Mach-Effect Thrusters, Barcelo Renacimiento Hotel, Seville, Spain, 14–18 May 2018, p. 10. Google Scholar
C. L. Calcagni, Classical and Quantum Cosmology, Springer, 2017, in colour, p. 843. Google Scholar
The meaning of Einstein’s words is that mathematics cannot be used as a replacement for physics as he said: “Ideas are more important than knowledge.” Physics must not be separated from experiment. We quote N. N. Taleb in Skin in the Game, Random House, 2018, p. 27: “Intellectualism has a sibling: scientism, a naive interpretation of science as complication rather than science as a process and a skeptical enterprise. Using mathematics when it’s not needed is not science but scientism.” Naive in this context should be understood as without any empirical evidence.
Note: No, EHT (Extended Heim Theory) is not Heim theory , despite the similarity of the names. The name EHT was selected to honour B. Heim’s ideas of internal gauge space and elemental surface in order to construct a polymetric tensor of all physical interactions and a spacetime lattice. The concept of eight-dimensional internal space employed in EHT is reminiscent of B. Heim’s initial (but insufficient) six-dimensional approach, but otherwise the two approaches employ different physical concepts, and there are no further relationships, except for the name, of course. As it turned out, Heim’s ideas about the internal structure of elementary particles and his calculation of the spectrum of elementary masses were incorrect, as were his ideas about cosmology, in particular the range of attractive gravitation.
A theory is called natural if it does not contain numbers that are extremely small or extremely large. The opposite is fine-tuning. In that sense, nature is not natural, just look at the cosmological constant or compare 1 AU (astronomical unit) to the distance to the closest star, i.e. the stars appear fixed. Large numbers need explanation.
The square root of the Lorentz invariant Mandelstam variable s provides the sum of the particle energies in a scattering experiment; that is, for the LHC collider with its two oppositely moving proton beams, the laboratory observer is at the centre-of-mass, and the total momentum of the two beams vanishes, p1 + p2 = 0. Thus, the total beam energy calculated using the four-momenta p1, p2 is √s = [(p1 + p2)²]^(1/2). With P ≈ c in the relativistic limit, one obtains √s ≈ 2E = 13 TeV (6.5 TeV for each beam pipe), where mp is the proton rest mass, and P denotes the proton speed. This energy is available since the 2015 LHC upgrade.
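The centre-of-mass energy quoted in this footnote can be checked with a few lines of four-momentum arithmetic (natural units, c = 1; energies in GeV):

```python
import math

M_P = 0.938272   # proton rest mass, GeV
E_BEAM = 6500.0  # energy per proton beam, GeV

# Four-momenta (E, p_z) of two head-on beams; E^2 = p_z^2 + m^2
pz = math.sqrt(E_BEAM**2 - M_P**2)
p1, p2 = (E_BEAM, pz), (E_BEAM, -pz)

E_tot = p1[0] + p2[0]
pz_tot = p1[1] + p2[1]      # total momentum vanishes for opposite beams
s = E_tot**2 - pz_tot**2    # Mandelstam variable s
sqrt_s = math.sqrt(s)       # = 2 * E_BEAM = 13000 GeV = 13 TeV
```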
To be more exact, the experimenters are searching for both spin-0 resonances produced from gluon–gluon fusion and spin-2 resonances produced from gluon–gluon or quark–antiquark initial states. The 95 % confidence level is utilised as usual.
An idea that may have to die is the idea of the existence of extra (real) space dimensions that has blocked progress in physics since its inception in 1919 by Kaluza, because alternatives were not pursued.
The term universe is used to mean the observable universe, which is the spherical region of the universe comprising all matter that can be observed from Earth at the present time by light or neutrino signals or gravitational waves – all with finite propagation speed – that have had time to reach our planet since the beginning of the cosmological expansion. There is currently no accepted experimental proof for the existence of superluminal signals. The distance a photon traversed, emitted by a galaxy tph = 100 million years ago (also termed the lookback time), is d = c tph = 100 million light years, as the speed of light in vacuum is independent of time. However, the distance to the other galaxy at the arrival of these photons is larger. This distance is difficult to determine because it is changing while the photons are traveling owing to cosmic expansion (governed by the Friedmann equation), characterised by Hubble’s parameter H = H(t). Its present value is called the Hubble constant H0 ≈ 22 km/s per Mly. Hence, the spatial dimension of the universe is of the order c tu, where tu ≈ 13.8 billion years (Planck satellite data) denotes the age of the universe, that is, the maximal lookback time. Notice that cosmic expansion has no impact on physically bound systems like atoms, solar systems, or even galaxies, because it is not strong enough to modify the effective gravitational potential into a potential that has no reversal points, as shown in detail in Section 9.8 in the book by the authors . Of course, everything depends on the temporal evolution of H = H(t).
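The numbers in this footnote follow from simple unit conversions (our sketch; H0 ≈ 70 km/s/Mpc is the approximate measured value assumed here, and the radius estimate deliberately ignores expansion during light travel, as the footnote itself cautions):

```python
LY_PER_PC = 3.26156            # light years per parsec

H0_KM_S_MPC = 70.0             # Hubble constant, km/s per Mpc (approx.)
h0_km_s_mly = H0_KM_S_MPC / LY_PER_PC  # ~21.5 km/s per Mly, i.e. ~22

T_U_GYR = 13.8                 # age of the universe in Gyr (Planck)
# Naive size estimate c * t_u: light covers 1 ly per year
naive_radius_gly = T_U_GYR     # ~13.8 billion light years
```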
It may be argued that superluminal speed is present in the path integral formulation of Feynman. Any arbitrary path from an initial location xi to the final location xf is represented by a polygon in the x − t plane with the corresponding time interval [ti, tf] subdivided into n discrete time intervals Δt. For each of these (supposedly small) time intervals, however, the integration over space (x-coordinate) goes from −∞ to +∞, i.e. the length of a path is not restricted. Clearly, this would mean superluminal speed for any material particle going along this path, but the path integral formulation associates a probability amplitude ϕ(P), that is, a complex number, to each path P. Hence, the resulting probability amplitude for a particle to arrive at location xf and time tf is given by the sum over all possible paths, ϕ(xf, tf) = Σ_P ϕ(P). Often the amplitude over path P is written in the form ϕ[x(t)] to denote that ϕ is a functional; that is, it depends on the entire path x(t) and not on a single number. In order to calculate such a probability amplitude, the genius of Feynman, remembering remarks by Dirac concerning the role of classical action S in QM, postulated the relation ϕ(P) = C exp(iS[x(t)]/ℏ), where the constant C is chosen to normalise ϕ, S[x(t)] = ∫ L dt is the classical action, and L denotes the Lagrange function, e.g. kinetic minus potential energy, Ekin − V. Apart from the fact that it is not clear how to do the integration over all paths, it seems strange that the length of path x(t) does not play any role. All possible probability amplitudes appear to arrive at the same time tf, interfering simultaneously at xf, resulting in a single amplitude, which, when squared, gives the probability to find the particle at location (xf, tf). A single probability amplitude cannot be measured and thus has no physical reality; i.e. it is not a signal; that is, it cannot be used to transport information.
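The sum over paths described here can be mimicked by brute-force discretisation (a toy free-particle sketch in natural units ℏ = m = 1; the grid, step sizes, and lack of normalisation are arbitrary choices of ours):

```python
import cmath

def path_integral_amplitude(x_i, x_f, n_steps=4, dt=0.1, mass=1.0, hbar=1.0):
    """Unnormalised sum over all discretised paths from x_i to x_f:
    phi = sum_P exp(i * S[P] / hbar) with the free-particle action
    S = sum_k (mass / 2) * ((x_{k+1} - x_k) / dt)**2 * dt.
    The grid allows arbitrarily long hops per time step (formally
    'superluminal' segments), exactly as discussed in the text."""
    grid = [0.5 * i for i in range(-10, 11)]  # coarse spatial grid
    amp = {x: (1.0 + 0j if x == x_i else 0j) for x in grid}
    for _ in range(n_steps):
        nxt = {}
        for x2 in grid:
            total = 0j
            for x1 in grid:
                action = 0.5 * mass * (x2 - x1) ** 2 / dt
                total += amp[x1] * cmath.exp(1j * action / hbar)
            nxt[x2] = total
        amp = nxt
    return amp[x_f]
```

Only the squared modulus of such an amplitude corresponds to anything measurable; the individual path contributions, however long the paths, are not signals.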
The information is contained in the square of the probability amplitude (after interference of the amplitudes has taken place), which gives the probability to find the particle at xf in an interval dx and at time tf in an interval dt. In other words, as probability amplitudes are not physical entities, they cannot be used to send physical signals. The measured probability does not provide any hint on the structure of the interference pattern. Different sets of probability amplitudes represent different interference pictures, but if the same probability distribution results, they describe the same physical reality. Hence, there is no possibility to distinguish between these different sets of probability amplitudes. In other words, a changing interference pattern cannot be detected as long as the resulting probability distribution remains invariant. In a similar way, the exchange of two identical particles in a physical system cannot be observed; this exchange may be realised with superluminal speed, but the process is not accompanied by any information transport.
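The indistinguishability of different amplitude sets can be illustrated numerically: multiplying every amplitude by a common phase changes each individual amplitude but not the measurable probability. A minimal sketch with illustrative values:

```python
# Sketch: two different sets of probability amplitudes that yield the
# same probability distribution are physically indistinguishable.
# A common phase factor exp(i*theta) changes the description of the
# interference but not the measurable probability |sum|^2.
import cmath

amps = [0.3 + 0.4j, -0.1 + 0.2j, 0.5 - 0.3j]   # illustrative amplitudes
theta = 1.234                                   # arbitrary common phase
rotated = [a * cmath.exp(1j * theta) for a in amps]

p_original = abs(sum(amps)) ** 2
p_rotated = abs(sum(rotated)) ** 2
# p_original == p_rotated up to rounding: no signal can be extracted
# from the change of the underlying amplitudes.
```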
The multiverse idea has different meanings, as described in a recent article by C. Hespel  and also in the book by astrophysicist A. Barrau . However, these authors have not considered the latest experimental results from CERN, nor are the recent experimental findings on the range of Newton’s law discussed. We reject the idea that we inhabit the special member of a multiverse comprising 10^500 universes, i.e. the one that does sustain life. The probability to end up in such a universe is 10^−500, which by all standards in physics should be considered equal to 0. Any concept based on infinities or singularities has to be rejected. Einstein’s general theory of relativity is an extrapolation from physics to the realm of real numbers ℝ that ceases to function as soon as the discreteness of nature produces perceptible quantum effects. This also holds for the concept of Lorentz invariance, which is not consistent with the existence of a space-time lattice. Rotations on a grid have only a finite number of degrees of freedom, whereas a manifold provides an infinity of degrees of freedom.
Nature’s secrets will not be revealed to dry intellectualists performing mental mathematical gymnastics; rather, a mindset like Einstein’s is needed, i.e. a striving to know the fundamental reality behind the manifest physical phenomena in order to improve the foundations of physics.
Note: The concept of unification will be discussed in Section 4.1, and it will be found that unification of the four basic interactions, as attempted for about a century, contradicts the principle of duality and thus cannot be achieved. In addition, the cosmological gravity fields as described by Einstein’s field equations are not part of the four fundamental interactions, at least not in EHT, because they result from geometry. Instead, in EHT, it is the hypercomplex-gravity fields, together with electromagnetism and the strong and weak forces, that are considered the four fundamental physical interactions, for they are based on the concept of physical charge. Hence, a different approach to unification has to be sought.
No mathematical proof for the correctness of the collection of fundamental principles is possible, but in physics their usefulness can be judged by how well these principles do represent physical reality, i.e. match experiments and observations. Hence such a set of principles should be unique. This is not true for other fields; see, for instance, the article by B. Heim Ein Bild vom Hintergrund der Welt .
Mystics of all ages and all cultures have been reporting on the existence of a nonmanifested realm of higher reality than the physical cosmos, where the categories of space, time, and matter do not exist in the same way as in our cosmos. It is obvious that physics, as we conceive it, is restricted to the manifested universe, and no attempt will be made to provide any assertions about the nature of the metaphysical realm in the framework of the fundamental physical principles.
Remark: It is generally believed that a theory of quantum gravity describes spacetime at the Planck scale. There are ideas that the expected large fluctuations of the geometry might lead to changes in the spatial topology. For that reason, Ambjorn et al.  implemented a topology change of the path integral in their CDT simulations to model the impact of topology. However, if the measurements of the Integral satellite are taken into account, the small (but finite) spacetime lattice spacing Δs lies far above the Planck length. The Planck length expressed in spacetime grid units can then be considered a continuous variable, and thus no fluctuations should occur. Hence, no change in spatial topology should be expected to occur at the Planck length. Therefore, a new framework of physics as proposed by theorists may not be needed, because quantum gravitational effects are not relevant at the Planck length, and thus a violation of Lorentz invariance owing to quantum gravity effects should not be observable. In addition, as brane worlds do not seem to exist, any violation of Lorentz symmetry appears to be less likely. However, Lorentz invariance is at odds both with discreteness, that is, with the quantised nature of our universe, and with unitarity (see Schwartz, Chapter 8), owing to the ± signature of spacetime and the mixing of time and space coordinates due to Lorentz boosts. As the Lorentz group SO(1,3) is not compact, there are many representations that violate unitarity, that is, U†U ≠ 1, and probability values may be P > 1. It seems that GR and QM are not compatible, although both theories so far have been fully confirmed experimentally. As QM appears to be more fundamental, it seems that Lorentz invariance may have to be abandoned under special circumstances.
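The Planck length invoked above follows from the fundamental constants, lP = √(ℏG/c³); a minimal sketch using CODATA values (variable names are illustrative):

```python
# Sketch: compute the Planck length l_P = sqrt(hbar * G / c^3),
# the scale at which quantum-gravity effects are conventionally expected.
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J s
G = 6.674_30e-11           # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 2.997_924_58e8         # speed of light in vacuum, m/s

l_planck = math.sqrt(HBAR * G / C ** 3)   # ~1.6e-35 m
```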
Without duality there is no spacetime or matter, nor are there any dual conceptions or the law of relativity. In other words, physics does not (yet) exist. First, there must be categories of objects. The question of whence and why the cosmos came into being is unsolvable scientifically. A somewhat benighted answer is: in order to provide a cosmic platform for the habitation of life in an infinitude of forms and myriad manifestations.
An extensive discussion of the concept of entropy and its physical interpretation with historical context is given by C. Kiefer in  pp. 142–161.
Note. There exists the term self-energy of the electron. This means that an electron can emit and reabsorb photons, represented by a one-loop Feynman diagram of order α², which is the cause of the Lamb shift in the hydrogen atom, W. Lamb, 1951. We think that the name self-energy is a misnomer, as it is not in accordance with the principle of duality. Instead, the electron is supposed to exchange photons with the vacuum field (spacetime without particles); i.e. the electron may emit a photon that is absorbed by the vacuum field. Then, in turn, the vacuum field re-emits a photon, which is re-absorbed by the electron. The recoil from both emission and absorption smears the position of the electron over a distance scale of about 0.1 fm. The smeared electron exhibits a weaker attraction (less negative energy level) to the nucleus of the H atom, which is the proton. This effect should be stronger for the s electron (2s1/2, orbital angular momentum ℓ = 0) because of its nonvanishing probability amplitude at the position of the nucleus, compared to the state 2p1/2 with ℓ = 1, whose probability amplitude is zero at the position of the nucleus. Hence, the attraction of the ℓ = 0 electron state is slightly more weakened than that of the ℓ = 1 electron state. As a consequence, the degeneracy of the energy states with respect to orbital quantum number ℓ and spin quantum number s has disappeared – measured as the Lamb shift. The spin-orbit interaction gives corrections to the energy levels of the H atom that depend only on the quantum numbers n and j, but not on ℓ and s. The states 2s1/2 and 2p1/2 of the H atom both have total angular momentum j = 1/2. If g were exactly 2, as calculated from the Dirac equation, then the z component of the electron magnetic moment 𝝁, resulting from the spin angular momentum j = s = 1/2 of state 2s1/2, would be the same as for the addition of ℓ and s to j = ℓ + s = 1/2 of state 2p1/2.
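The degeneracy in n and j discussed above can be made explicit with the standard Dirac fine-structure formula for hydrogen, which contains no dependence on ℓ; a minimal numerical sketch (function names are illustrative):

```python
# Sketch: the Dirac fine-structure energy of hydrogen depends only on
# n and j, so without the Lamb shift 2s_1/2 and 2p_1/2 are exactly
# degenerate, while 2p_3/2 is split off by fine structure.
import math

ALPHA = 1 / 137.035999    # fine-structure constant
MC2_EV = 510_998.95       # electron rest energy, eV

def dirac_energy(n, j):
    """Dirac energy (including rest mass, in eV) of hydrogen level (n, j)."""
    k = j + 0.5
    denom = n - k + math.sqrt(k * k - ALPHA * ALPHA)
    return MC2_EV / math.sqrt(1.0 + (ALPHA / denom) ** 2)

e_2s12 = dirac_energy(2, 0.5)   # 2s_1/2 (l = 0)
e_2p12 = dirac_energy(2, 0.5)   # 2p_1/2 (l = 1): same n, j -> same energy
e_2p32 = dirac_energy(2, 1.5)   # 2p_3/2: lies above 2p_1/2
```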
In Greek, entelecheia stands for an objective or a completion, a concept introduced by Aristotle in his work The Physics. Aristotle assumed that each phenomenon in nature contained an intrinsic objective, governing the actualisation of a form-giving cause.
Symmetries play an important role in modern physics. The energy of a physical system is expressed by its Lagrangian L, which in classical physics is the difference between kinetic energy T and potential energy V, that is, L = T − V. The invariance (symmetry) of L under transformations, for instance, a rotation in space, leads to the conservation of angular momentum. Mathematically, rotations are described by 3 × 3 matrices, denoted by R. These matrices form a group, termed the special orthogonal group SO(3) (that is, det R = +1), as for each rotation R the inverse rotation R−1 exists, and there is a so-called unit element; i.e. nothing is rotated. It is to be noted that the order of two rotations is not commutative. Energy conservation follows from the invariance under time translation, etc. However, the equations of motion, i.e. the Euler–Lagrange equations resulting from such a Lagrangian, do not have to reflect this symmetry. For instance, the elliptic orbits of planets following from Newton’s theory are not even invariant under rotation about an axis normal to the planetary orbit.
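The group properties of SO(3) quoted above (det R = +1, existence of the inverse, non-commutativity of rotations) can be checked numerically; a minimal sketch with illustrative helper names:

```python
# Sketch: rotation matrices in SO(3) have det = +1, and rotations about
# different axes do not commute: Rx * Ry != Ry * Rx in general.
import math

def rot_x(a):
    """Rotation by angle a about the x-axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Rotation by angle a about the y-axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

AB = matmul(rot_x(0.5), rot_y(0.5))   # rotate about y first, then x
BA = matmul(rot_y(0.5), rot_x(0.5))   # rotate about x first, then y
# AB != BA: the order of finite rotations matters.
```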
The Lagrangian densities for the most important interactions are listed below. For a real scalar field ϕ, the Lagrangian is given as L = (1/2)∂μϕ ∂^μϕ − (1/2)m²ϕ², with the equation of motion (□ + m²)ϕ = 0, termed the Klein–Gordon equation for a massive particle, where the d’Alembertian operator is defined as □ = ∂μ∂^μ. The complex Lagrangian LC is written as LC = ∂μϕ∗ ∂^μϕ − m²ϕ∗ϕ for the complex field ϕ = (ϕ1 + iϕ2)/√2, where ϕ and ϕ∗ are considered as two independent functions. Hence, there are two Klein–Gordon equations of motion, (□ + m²)ϕ = 0 and (□ + m²)ϕ∗ = 0, for the two noncoupled fields. Of course, the scalar field can be an n-component complex scalar field comprising 2n noncoupled fields, that is, ϕ = (ϕ1, …, ϕn)^T and L = ∂μϕ† ∂^μϕ − m²ϕ†ϕ − V(ϕ†ϕ), where the symbol † denotes complex conjugate and transpose (a row vector), and V denotes a potential. For the electromagnetic field, LEM = −(1/4)FμνF^μν, where the minus sign ensures that like charges repel. If one considers a massive vector field like the vector bosons of the weak interaction, the Lagrange density is given by L = −(1/4)FμνF^μν + (1/2)m²AμA^μ, with the equation of motion ∂μF^μν + m²A^ν = 0, termed the Proca equation, which results in the condition ∂νA^ν = 0. In the SM, the Dirac field describes fermions, i.e. the 36 quarks and 12 leptons from which all material particles are derived. The four-spinor (or bispinor, because it contains both the particle and the antiparticle) is ψ = (ψ1, ψ2, ψ3, ψ4)^T, whereas ψ̄ = ψ†γ0. The Lagrangian density of QED is given by LQED = ψ̄(iγ^μDμ − m)ψ − (1/4)FμνF^μν, where the γμ are the four 4 × 4 gamma matrices (comprising combinations of the 2 × 2 Pauli matrices), while Dμ = ∂μ + ieAμ is the so-called covariant derivative that contains the energetic contribution of the fermion interaction with the electromagnetic potential Aμ. The covariant derivative simply reflects the fact of GR, namely, that any energy term must have an impact on the curvature of the surrounding spacetime geometry. The Lagrangian can be expressed in the more conventional form LQED = ψ̄(iγ^μ∂μ − m)ψ − (1/4)FμνF^μν − J^μAμ by explicitly stating the energy term from the current J^μ = eψ̄γ^μψ and the vector potential Aμ. The corresponding Dirac equation of motion is (iγ^μDμ − m)ψ = 0, where the γμ are 4 × 4 matrices built from the σi, the 2 × 2 Pauli matrices, and I, the 2 × 2 unit matrix. In the Dirac representation, the Dirac matrices are defined as γ0 = diag(I, −I) and γi = ((0, σi), (−σi, 0)) in 2 × 2 block notation.
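The gamma matrices quoted above satisfy the defining Clifford algebra relation {γ^μ, γ^ν} = 2η^μν I; a minimal numerical check in the Dirac representation with signature (+, −, −, −) (the block-assembly helpers are illustrative):

```python
# Sketch: verify the Clifford relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
# for the Dirac matrices in the Dirac representation.

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
SIGMA = [  # Pauli matrices sigma_1, sigma_2, sigma_3
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

def neg(M):
    return [[-x for x in row] for row in M]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks [[a, b], [c, d]]."""
    return [a[i] + b[i] for i in range(2)] + [c[i] + d[i] for i in range(2)]

# gamma^0 = diag(I, -I); gamma^i = ((0, sigma_i), (-sigma_i, 0))
GAMMA = [block(I2, Z2, Z2, neg(I2))] + [block(Z2, s, neg(s), Z2)
                                        for s in SIGMA]
ETA = [1, -1, -1, -1]  # diagonal of the Minkowski metric

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(4)] for i in range(4)]
```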
In the same way, the Lagrangian for QCD can be written LQCD = Σj ψ̄j(iγ^μDμ − mj)ψj − (1/4)G^a_μν G^aμν, which is similar to the Lagrangian of QED, LQED, but one needs to account for the quark flavours j = 1, …, Nf (Nf = 6 in the SM), where Nf denotes the number of different quarks; the quark masses mj (which are to be determined from experiment); and the strong coupling constant αs (energy dependent, from experiment), which replaces the fine structure constant α. One also needs to account for the gluon fields, denoted A^a_μ, in analogy to the vector potential Aμ of QED. As there exist eight gluon fields in the SM, a = 1, …, 8. In contrast to the photon γ, the gluons carry colour charge (not to be confounded with optical colour, but similar to electric charge), and thus gluons are interacting with each other; therefore, the gluon field tensor is no longer commutative, G^a_μν = ∂μA^a_ν − ∂νA^a_μ + gs f^abc A^b_μ A^c_ν, where the f^abc are termed the structure constants (real numbers) of the underlying group SUC(3). A gluon carries both a colour and an anticolour charge, for instance, red and antiblue colour charges. If absorbed by a blue quark, the blue and antiblue colours cancel, and a red quark occurs.
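The structure constants mentioned above follow from commutators of the SU(3) generators, [T^a, T^b] = i f^abc T^c with T^a = λ^a/2; a minimal sketch verifying f^123 = 1 from the first three Gell-Mann matrices (helper names are illustrative):

```python
# Sketch: verify one SU(3) structure constant, f^123 = 1, from the
# commutator [T^1, T^2] = i T^3 with T^a = lambda^a / 2.

L1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]       # Gell-Mann matrix lambda^1
L2 = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]    # lambda^2
L3 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]      # lambda^3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

# Generators T^a = lambda^a / 2
T = [[[x / 2 for x in row] for row in L] for L in (L1, L2, L3)]
C = commutator(T[0], T[1])   # should equal i * f^123 * T^3 with f^123 = 1
```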
The three hypothetical gravitational bosons of GR are denoted, in order to differentiate them from hypercomplex-gravity, as the graviton, the gravitophoton, and the spin 0 quintessence particle νq (Section 5.4). As the cosmological gravitational fields originate from pure spacetime geometry, it is an open question whether these bosons really exist in nature. On formal mathematical grounds, they are described by the group SU(2).
Casimir operators commute with each other and with all group generators as well as with the energy operator Ĥ, and thus their n eigenvalues can be simultaneously measured. For instance, the rank of SU(3) is two, and the two operators isospin Î3 and hypercharge Ŷ can be used to arrange baryons into multiplets, depending on their total spin J.
The conserved baryonic charge is B = 1 for protons and neutrons, and B = 0 for the electron, which has the assigned leptonic charge L = 1, whereas L = 0 for nucleons. For baryons that do not contain a strange quark s, one finds B = Y, where Y denotes the value of the hypercharge. The electric charge of an elementary particle (in units of e) is given by the Gell-Mann–Nishijima relation Q = I3 + Y/2.
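The charge formula referred to above is, in standard notation, the Gell-Mann–Nishijima relation Q = I3 + Y/2; a minimal sketch applying it to the nucleons (names are illustrative):

```python
# Sketch: Gell-Mann-Nishijima relation Q = I3 + Y/2, applied to the
# nucleons (proton and neutron both have hypercharge Y = B = 1).
def electric_charge(i3, y):
    """Electric charge in units of e: Q = I3 + Y/2."""
    return i3 + y / 2

q_proton = electric_charge(+0.5, 1)   # -> 1.0
q_neutron = electric_charge(-0.5, 1)  # -> 0.0
```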
There exist numerous similar statements concerning the origin of the Big Bang. For instance, one can read in a textbook on modern astrophysics: “…13.7 billion years ago, the universe borrowed energy from the vacuum to create vast amounts of matter and antimatter….” Not only is energy conservation violated in this process, but in addition the separate existence of a vacuum of highly specific properties is assumed, while a not yet existing universe is borrowing energy. We may also say “…and then a miracle occurs.”
This is particularly true for climate science, where huge amounts of money are at stake, and not being mainstream may well mean not being funded. Before each climate conference, politicised climate reports are released just in time, citing all kinds of possible catastrophes that do not occur, but never addressing the real problem, which is overpopulation.
Note: in Chapter 5.4, the group SU(2) was formally assigned to the three gravitational bosons, mediating the gravitational interaction among hadrons and the interaction of matter with the DE field, respectively. That is, Einstein’s GR has been modified slightly by adding the interaction of matter with the ubiquitous but generally weak DE field.
According to the optimisation principle, nature does not produce anything that functions without a specific purpose. At present, we do not have a convincing physical explanation that forces nature to construct a dark spacetime with the specific properties stated, except that such a construction is not in conflict with the existence of exactly three lepton families of positive mass, but at the same time mimics the behavior of dark matter as observed.
Further physical justification of these principles is not possible and necessarily leads into the realm of metaphysics. This does not mean that arguments from metaphysics are to be rejected, but simply states that these rules are outside of physics. Another major example that also leads outside of physics is the numerical values of the physical coupling constants, which, at least in EHT, are based on number theory. It should be clearly stated that the roots of the cosmos are to be sought outside of physics and thus cannot be explained. Hence, the question which process may be responsible for setting up the proper blueprints governing the evolution of the cosmos cannot be decided by physics. Science in general and physics in particular can only take the observed facts as a given and try to construct adequate models, but are incapable of providing explanations for any underlying objective. These restrictions and the ensuing contradictions were clearly described in B. Heim’s article entitled Welt der Weltbilder .