PARAPHYSICAL PRINCIPLES OF
NATURAL PHILOSOPHY
[The ParaPrincipia]
James E. Beichler
Copyright ©1999, 2013
All rights reserved
Preface
This book was originally written as a Ph.D. dissertation for the Union Institute and University in 1999. The
present publication follows the dissertation exactly except for a few grammatical, spelling and graphics corrections. It
is published for the first time in e-book format because the specific physical theory of reality that was outlined in the
original dissertation was not completed in detail until recently. Although this book is somewhat dated, in that advances in physics and cosmology since 1999 have not been included, the information in the book is still perfectly valid. The completed theory appears in a companion e-book with the title Four Pillars of Wisdom. A great deal more background material is in preparation for publication, while a more comprehensive look at the day-to-day advances in the theoretical model over the last decade can be found on my webpage at neurocosmology.net.
The first chapter of this book gives a philosophical and historical overview of paraphysics and defines the term.
The second chapter discusses the physical characteristics and properties of psi, the functional mechanism normally
associated with paranormal phenomena. Chapter three gives a history of past attempts to develop paraphysics as well
as specific physical theories of psi. The next chapter, four, discusses modern physics and more recent attempts to
develop theories of everything (TOEs) before a new theory of the single field (SOFT) is introduced. The single field
theory presented is essentially based upon a five-dimensional Clifford-Einstein-Kaluza space-time continuum that has
been quantized. Although the classical notion of a continuous field forms the basis of this theory, neither the present
theory of relativity nor the present quantum theory is being abandoned. This theory merges relativity and the quantum as equal partners in a single framework. Both relativity theory and the quantum theory have been spectacularly successful, and all of their past successes must be incorporated into any new comprehensive theory of reality rather than abandoned. So any attempted unification must merge the two rather than allow one to consume and replace the other, as other scientists have attempted.
After the physics of the single field is discussed in the fourth chapter, psi phenomena are briefly outlined within the context of the single field theory in the fifth chapter. Psi phenomena emerge naturally from the geometrical
characteristics and requirements of the single field structure. In fact, if they did not already exist they would have been
predicted by the single field theoretical model. The fifth chapter is followed by a conclusion which briefly explains
how the single field theory could be used to ‘solve the universe.’ And finally, a lengthy bibliography worthy of a
doctoral dissertation appears at the end of the book. This book is purposely as comprehensive an exposition of the complete field of paraphysics as possible, and in this respect it is like no other book in the field. Readers of this book, both professional and non-professional, need to develop a good sense of the science of paraphysics and the scientific possibilities it raises in order to completely understand the advancements of science in general that will come in the next several decades.
I would like to thank the members of my doctoral committee for their encouragement, suggestions and criticisms
in the original preparation of the dissertation. My doctoral committee consisted of Professors Kevin Sharp and Jose
Cedillos of the Union Institute, Doctors Harold Puthoff and Samuel Stansfield who acted as non-faculty experts in
the field of theoretical physics, and Doctors Pat Madden-Sullivan and Greg Summers who were peer reviewers.
Special thanks go to Kevin Sharp who acted as my core faculty advisor. Without their help and support, this book
would not have been possible given the controversial nature of the subject and the academic prejudice against all such
research, whether theoretical or experimental.
James E. Beichler, Ph.D., Memorial Day, 2013.
** A Note: E-books have a very small character set and do not recognize Greek letters in text. So, all Greek letters
that represent physical quantities in the text have been replaced by their English names. I regret the inconvenience
and apologize if any have been missed. **
Introduction
The word ‘paraphysics’ has never been precisely defined. To establish paraphysics as a true science, the word is
first defined and its scope and limits identified. The natural phenomena that are studied in paraphysics, psi
phenomena, are distinguished by their common physical properties. The historical roots of paraphysics are also
discussed. Paraphysics can be defined, it is represented by a specific body of natural phenomena, and it has a historical basis. Therefore, paraphysics is a distinguishable science. It only needs a theoretical foundation. Rather than using a
quantum approach, a new theory of physical reality can be based upon a field theoretical point of view. This approach
dispels philosophical questions regarding the continuous/discrete debate and the wave/particle paradox. Starting from
a basic Einstein-Kaluza geometrical structure and assuming a real fifth dimension, a comprehensive and complete
theory emerges. The four forces of nature are unified, as are the quantum and relativity.
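For orientation, the geometrical starting point can be illustrated by the textbook Kaluza decomposition of a five-dimensional metric, shown here only as a sketch of the kind of structure involved rather than the exact construction developed later in the book:

    \hat{g}_{AB} = \begin{pmatrix} g_{\mu\nu} + \phi^2 A_\mu A_\nu & \phi^2 A_\mu \\ \phi^2 A_\nu & \phi^2 \end{pmatrix}

Here g_{\mu\nu} is the ordinary four-dimensional space-time metric, A_\mu is the electromagnetic potential and \phi is a scalar field associated with the fifth dimension, so that gravitation and electromagnetism are carried by a single geometrical object.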
Life, mind, consciousness and psi emerge as natural consequences of the physics. The scientific concept of
consciousness, ambiguous at best, has become an increasingly important factor in modern physics. No one has ever
defined consciousness in an acceptable manner, let alone developed a workable theory of consciousness, while no viable physical theories of life and mind are even being considered, even though they are prerequisites of consciousness. In
the five-dimensional model, life, mind and consciousness are explained as increasingly complex ‘entanglements’ or
patterns of density variation within the single unified field. Psi is intimately connected to consciousness, giving the
science of paranormal phenomena a theoretical basis in the physics of hyperspace. Psi results from different modes of
consciousness interacting non-locally via the fifth dimension.
Several distinct areas of future research are suggested that would allow the theory to be falsified. A new theory
of the atomic nucleus is clearly indicated, as is a simple theory of the predominant spiral shape of galaxies. A
quantifiable theory of life is also suggested. And finally, this model strongly implies a direct correspondence between
emotional states and psi phenomena that should render the existence of psi verifiable.
CHAPTER 1
It’s all in the word!
And that word is paraphysics. Paraphysics is a new science, but the word is older. Science has not
been sufficiently ready for its emergence until now.
The logical necessity of paraphysics
Paraphysics can never be established as a legitimate branch of science until it is properly defined. Its definition
must be precise enough to establish the scope of paraphysics as well as distinguish it from its closest relatives, physics
and parapsychology. Since it deals with psi, a rather controversial subject in its own right, the precision of the
definition is all the more important. The scope and methods of paraphysics must be firmly established independent of
psi so scientists will accept the legitimacy of paraphysics in the absence of absolute proof that psi exists. This will not
be difficult since paraphysics does not depend on psi alone, nor is it dependent on any one theory of psi, although a
successful physical theory of psi would greatly enhance the acceptance and further development of the science of
paraphysics.
Any scientific understanding of paraphysics not only requires a valid definition of paraphysics, but also a
comprehensive development of what constitutes normal physics. Within this context, anyone with so much as a
rudimentary knowledge of physics would first define physics by listing the fields that it contains, such as
thermodynamics, classical mechanics, electrodynamics, quantum theory and relativity. While this definition of physics
is far from complete, it does at least indicate the central idea behind physics that allows the development of a
simplified notion of paraphysics. Paraphysics could thus be deemed the field of physics describing phenomena which
are not dealt with in the aforementioned fields of normal physics. While this definition is understandable at a
rudimentary level, it is neither sufficient nor adequate for any real understanding of paraphysics.
In this regard, defining paraphysics is a far more serious problem than might be thought at first. It is fraught with
academic danger and controversy. On the one hand, normal physics represents a respectable and well-defined
academic discipline, so anything paraphysical, beyond the norm for everyday physics, could be classified as
paraphysics. This is the manner in which most people presently regard paraphysics. But this view is grossly inaccurate
since paraphysics is not a dumping ground for those phenomena, natural or otherwise, which physics cannot explain.
For example, Professor Tenti of the University of Toronto has recently placed a short web page on the Internet. He
defines paraphysics as the physics of natural phenomena beyond the scope of normal physics, but he illustrates
paraphysics with science fiction phenomena that have no reality in the world of nature. On the other hand, physics
means ‘nature,’ so physics must include anything and everything that is natural, and thus all natural
phenomena. If paraphysics is to be considered a branch of physics, it must include phenomena that are both natural
and paranormal which seems to be a contradiction in terms. This definition introduces a potential logical
contradiction and begs the question of definition since the word paranormal has itself been given no meaningful
definition. Paraphysics must be regarded as either a branch of physics or an extension of physics; it must be ‘of
physics’ yet ‘beyond’ physics. The difficulty defining paraphysics is further complicated by the fact that physics
progresses and changes as time passes, consuming new ideas and areas of knowledge. Normal physics is thus
undergoing a constant change.
As the scope of physics expands into the future, what is beyond physics today may not be beyond the physics of
tomorrow. However, the basic fundamental tenets of physics remain constant against this changing historical
background of knowledge, so paraphysics must ultimately be defined with respect to the basic, unchanging, underlying
concepts upon which physics is based rather than be regarded as an extension of physics ‘beyond’ its present
knowledge base. Since some factors remain the same, consistent throughout the history of physics in spite of the
progressive nature of science, founding paraphysics upon fundamental unchanging tenets proves to be the only valid
method of distinguishing between physics and paraphysics, or normal and paranormal physics, while remaining within
the realm of natural phenomena.
These considerations raise the question of whether paraphysics is a separate discipline or a branch of physics.
But it must be assumed that paraphysics is a subfield of physics because physics covers all naturally occurring
phenomena which includes the paranormal as well as the normal. This potential contradiction and its solution have a
precedent in the mathematical research of Bernhard Riemann. Before he rethought geometry, the universe was
thought to be either infinite and unbounded or finite and bounded. But Riemann developed a geometric model in
which the universe could be finite yet unbounded, a completely new viewpoint that circumvented the older infinite
versus finite argument. As in normal physics, paraphysics deals with the nature of physical reality although it goes
beyond the notion of ‘matter in motion’ as the fundamental quantity that is used to explain natural phenomena in
normal physics. Ideas and concepts such as life, thought, mind, and consciousness are beyond the scope of normal
physics yet they are well-established norms in science whose existence would never be questioned. Since physics does
not normally include explanations of these concepts, physics cannot be as all-inclusive of nature and natural
phenomena as many would believe. Thus, there are already precedents for paraphysics to maneuver within the plenum
of natural phenomena outside of the direct realm of physics.
These concepts do not seem to fit the physics model of ‘matter in motion,’ so they may well be paraphysical in
nature. Physics may or may not break these concepts down and reduce them to a common physical quantity for the purpose of explanation at some unspecified future date, but even if it does, their nature will probably remain unique enough, different from the other quantities defined and measured in physics, that they will be ‘beyond’ presently accepted norms in physics as well as future extensions of those norms. So, although the scope of physics is wide and comprehensive within nature, it does not extend to all ‘that is in nature’ or all ‘that is possible.’ The need for
paraphysics thus becomes apparent beyond its simple definition as ‘the physics of psi’ or ‘the physics of paranormal
phenomena.’
The concepts of life, thought, mind and consciousness are normally dealt with in other academic disciplines such
as the life sciences and psychology, but these sciences are also incomplete in their understanding of these concepts.
The paranormal concept of psi, a general term describing the quantity which acts in both extrasensory perception (ESP) and psychokinetic (PK) phenomena, is intimately bound to these quantities and thus lies outside of normal
physics. Psi also lies outside the scope of the life sciences and psychology, so the new academic field of
parapsychology was developed six decades ago to deal with the mental aspects of psi. By so introducing a new science
to deal with the psychological aspects of psi, it has become necessary to distinguish between the mental and physical
aspects of psi and thus differentiate between parapsychology and paraphysics.
As science approaches a complete realization of the most fundamental aspects of reality, in so far as such a
realization is even possible, it may well be found that there is no difference between paraphysics and parapsychology,
just as there may be no real difference between either mind and matter or consciousness and physical reality in the
final analysis. But at this point in the evolution of science there is no direct evidence that either consciousness and
physical reality or mind and matter are at their most fundamental level the same thing. There are only suspicions. So,
the difference between paraphysics and parapsychology must stand, at least until scientific evidence indicates
otherwise.
It is quite possible that drawing a distinction between what is purely paraphysical and what is purely
parapsychological may actually aid in the search for a consistent and accurate definition of mind and consciousness as
well as matter and physical reality. There have been many cases in science where concepts have been defined as much
by ‘what they are not’ as by ‘what they are.’ Deciding what is purely physical about psi may even help science to find
the limits of consciousness and thus define consciousness. The definition of paraphysics as an independent field of
science must therefore be deemed an important step in the evolution of science. Within our normal worldview, as
defined by the fundamental bases of classical physics, relativity and quantum mechanics, all of nature is defined in terms of matter, motion, or matter in motion. So any possibility that there is a more fundamental reality underlying
these concepts implies a paraphysical explanation which goes ‘beyond’ the common concept of ‘matter in motion’ to
establish paraphysics within the science of physics.
Physics in perspective
The use and abuse of ‘paraphysics’
Defining moments
The physics of other worlds
CHAPTER 2
Strange facts in search of a theory
In his presidential address before the Society for Psychical Research in 1972, C.W.K. Mundle
characterized parapsychology in this manner. The original quotation stated that psychology is a
theory in search of facts while parapsychology is a set of facts in search of a theory. Perhaps it is time
for science to realize that these two fields complement each other and the science of mind will remain
incomplete until they are unified. A complete view of the world will only evolve when science accepts
the fact that paraphysics and the science of the mind supplement each other.
The definitive psi
The science of physics is built upon observation, definition, measurement and noting patterns of action which
can be explained by hypotheses, testing those hypotheses, building theories and expanding those theories to cover
more and more phenomena. Definition is an important and integral part of the process. In fact, definition lies at the
very basis of the scientific method. Without strict and precise definitions, the quantities to be measured in the pursuit of science could not be measured at all, and more precise definitions allow measurements to be further refined. If a quantity
cannot be precisely defined, then precise measurements cannot be made and patterns of action cannot be described in
enough detail to build accurate theories. In a broad sense, the history of any branch of science can be characterized
and portrayed as the progression and refinement of the definitions that form the basis of that field, and the
parasciences are no different. So, a precise definition of psi, the quantity to be measured in parapsychology and
paraphysics, is necessary for the further development of these sciences.
Earlier this century, Rhine established the basis of the new science of parapsychology by defining psi accurately
enough to be measured, if only by statistical methods. Before Rhine accomplished this feat, no direct measurements could be made, and most psychic studies depended on anecdotal evidence, on observation alone.
Although the early investigators had carried out experiments, psychical research was not at that time
primarily an experimental science. The tendency was to regard the accumulation and validation of large
numbers of reports of ostensibly paranormal events as the right way of finding out about psi, while
experiments served the subordinate purpose of providing convincing evidence of the occurrence of
telepathy or other psi-phenomena. A more completely experiment-oriented approach was set into motion in
1927 when the first university laboratory for the experimental study of the subject was started at Duke
University in North Carolina under J. B. Rhine. From that time, parapsychology, like other branches of
scientific research, used experiment as a means not merely of verifying the fact of psi but of finding out
about its nature and properties. (Thouless, 16)
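To see what measurement ‘by statistical methods’ amounts to in practice, consider the arithmetic behind a standard card-guessing run. The following sketch is only an illustration of the reasoning, not anything taken from Rhine’s own records; it assumes the classic Zener-deck protocol, in which each guess has a one-in-five chance of a hit and the trials are treated as independent, so that a whole run can be scored against the binomial probability of doing at least that well by chance.

    from math import comb

    def p_at_least(hits, trials, p=0.2):
        """Probability of scoring `hits` or more by pure chance (binomial tail)."""
        return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                   for k in range(hits, trials + 1))

    # Chance expectation for a 25-card run is 5 hits.
    print(p_at_least(7, 25))   # about 0.22: unremarkable
    print(p_at_least(12, 25))  # about 0.0015: the sort of score that invites attention

Whatever one makes of psi itself, this is the sense in which Rhine’s protocol turned an anecdotal subject into a measurable one: the hypothesis of pure chance makes an exact numerical prediction against which every run can be tested.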
The evolution of psychic studies to an experimental science moved parapsychology to the third step of the scientific
method as listed above. The previous study of psychic phenomena had only progressed to the observational phase
although there were attempts to develop hypotheses to explain the observed phenomena. Even today, the studies of
psi phenomena and the fledgling attempts to develop a theoretical basis for psi have only moved into the pattern
recognition and hypothesis building stages of a science.
The field of parapsychology is littered with hypothetical structures to explain the psi process, or rather some
features of it, but there is no comprehensive theory of psi. The lack of such a theory has caused scientists to waste
valuable time on ‘proving’ the existence of psi. Gerald Feinberg expressed this opinion over two decades ago
(Feinberg, 24), yet scientists are still trying to ‘prove’ the existence of psi. But there is no basic precept in science that allows any proof of the existence of psi, nor should there be. Nothing can be proven to exist from a philosophical point of view, so why should psi, which seems to be the subtlest of all influences, be susceptible to ‘proof’ when nothing else is so susceptible? And knowing that nothing can be absolutely ‘proven,’ why is a ‘proof’ of the existence
of psi even being demanded? There is something fundamentally wrong with scientific attitudes toward psi phenomena that needs to be addressed by the scientific community as a whole.
In view of this lack of both a ‘proof’ as well as a comprehensive theory of psi, scientists have sometimes resorted
to redefining psi or inventing an alternative word or phrase to replace the term. There are criticisms of the use of the
term psi that reflect the concern that words like psi, ESP and PK are already theory laden. Such words are thought to
convey a particular but unwarranted point of view, attitude or theory that may or may not be valid. Use of the term
psi is associated wholly with parapsychology and thus stresses the purely mental aspects of the phenomena in
question, while the physical aspects of the process are rendered subsidiary or secondary to the mental manifestations.
Edwin May has recently criticized use of the term psi as not precise enough for quantitative measurement, (May, 1996,
3) but his alternate use of the term ‘anomalous’ lacks even the precision of so scientifically vague a term as psi.
The modern trend toward the use of the phrase ‘anomalous phenomena’ instead of ‘psi phenomena’ suffers the
same problem as the older idea of ‘paranormal phenomena.’ Anomalous phenomena are those phenomena and events
that do not fit the preconceived theoretical framework of normal science. On the other hand, what is paranormal goes
beyond what is normal within the wider framework of naturally observable (and thus perceivable) phenomena,
regardless of any theoretical framework imposed upon normalcy by science. In either case, not all paranormal and
anomalous phenomena are psychic in nature. Psi, no matter what the faults with the term, distinguishes a particular
class of anomalous and paranormal phenomena that are associated with mind.
Psi phenomena or events follow specific patterns, as reflected in the fact that these phenomena have
distinguishable properties. These properties separate psi from other paranormal and anomalous phenomena, so psi is
a more precise term than either ‘anomalous’ or ‘paranormal.’ The problem is that psi is not, nor has it ever been,
designated as a measurable quantity. Some scientists do not like the term because it does not specifically refer to a
‘thing’ that is directly measurable. Instead, psi is a classification of subtle processes that have common characteristics.
The processes are characterized in the first approximation by their association with mind, but the term does not imply
any specific theory of mind. Nor does it limit theories of psi exclusively to a quantity of mind alone.
Further problems with psi are inherent in the possible dual nature of psi phenomena as manifested in the
physical world. Some psi processes exhibit a mind/mind interaction while others exhibit a mind/matter interaction.
This inherent duality will probably have a simple unified explanation as is assumed by use of the word psi to cover
both types of phenomena, but the phenomena are qualitatively different at the common level of reality where psi
manifests. These two types of psi are termed extrasensory perception (ESP) and psychokinesis (PK) phenomena,
respectively. These classifications of psi actually predate the concept of psi as a unification of the common elements
of all such phenomena. ESP and PK phenomena were found to have enough common characteristics that it was
decided to group them together under the single banner of psi.
Another term that will sometimes be met with is one originally suggested by Dr. Wiesner and myself. We
were inclined to think that there might be no real difference between what were called telepathy,
clairvoyance, and precognition, that they might be the same capacity working under different circumstances.
We suggested that this capacity might be indicated by the Greek letter PSI (psi) which would also have the
advantage over such terms as ESP that it implied no theory about the psychological nature of the process
but could be used merely as a label. This suggestion has been widely taken up and the term 'psi-phenomena'
is as well understood (at least in the United States) as the term 'paranormal phenomena'. We also suggested
that this Greek symbolism might be extended to cover the difference between ESP and PK, the former
being called PSIgamma (psi-gamma) and the latter PSIkappa (psi-kappa). This suggestion has not, however, been
generally accepted. (Thouless, 1-2)
By Robert Thouless’ own testimony, he and Wiesner adopted the term psi because it was not ‘theory laden’ and thus
“implied no theory about the psychological nature of the process.” The term psi filled a perceived need, so the
scientific community adopted it.
The use of other terms to replace psi is premature at best. To use another term now would be just as improper as
using the term psi if it were ever demonstrated that a more accurate term was available. Granted, psi is bound to a
psychological viewpoint of the phenomena in spite of Thouless’ intent, but substituting another term would only
cloud the issue and detract from the obvious connection between mind and psi. It has not been demonstrated to
anyone’s satisfaction or notion of sufficiency that psi is a physical quantity rather than psychological, even though the
two could someday prove to be one and the same at a more fundamental level of reality. So a more physically oriented
term than psi is unwarranted. Psi is whatever scientists choose to make it, within the restrictions set by nature, but not
within the restrictions set by human philosophical notions.
The only thing that is truly evident is that psi must be determined by its properties, in lieu of a precise definition
of psi. Without reference to its properties, psi is just an empty category of phenomena that may or may not be related,
even though the existent evidence strongly implies that a single idea rests behind all psi phenomena. Both Feinberg
and Henry Margenau have stated that the properties of psi must be emphasized rather than the statistical ‘proofs’ of
its existence that have dominated research because science will never accept the reality of psi based upon experimental
‘proofs’ of its existence, especially when these proofs are statistical in nature.
My point is simply this: in order to study these obscure things, you must practice selectivity and concentrate
your attention upon instances where positive results are incontrovertible and, of course, demonstrably free
of fraud. I believe that as long as you go around making statistical studies everywhere and on everybody,
you are not likely to be convincing for a long time to come. (Margenau, 1982, 119)
Margenau actually went further than Feinberg to state what was only implied in Feinberg’s criticism: the necessity of a theory of psi before the scientific community can take the concept seriously.
Now, the second thing you need is theory. No amount of empirical evidence, no mere collection of facts,
will convince all scientists of the veracity and the significance of your reports. You must provide some sort
of model: you must advance bold constructs - constructs connected within a texture of rationality - in terms
of which ESP can be theoretically understood. (Margenau, 1985, 120)
Their opinions are all the more relevant since both men are respected physicists. Neither could be considered a
parapsychologist or paraphysicist. The primary purpose of more than a half century of experimental research on psi is
not to ‘prove’ the existence of psi, as it may well seem, but to ascertain the properties of a scientifically viable process
in nature which has been dubbed psi. Nearly all researchers of psi seem to have forgotten this fact in their rush to
overcome emotional and biased criticisms of the concept.
A property by any other name...
Physical properties at last!
Footprints in the sand
The strangest facts one could imagine
CHAPTER 3
The History of Future Science
As a historian, I am denied the right to speculate on the future; as a believer in the reality of psi I am compelled to forecast the future; as a scientist I am obligated to consider the scientific possibilities for the future; and as a conscious being I am morally bound to determine the consequences that those possibilities might have on the human condition in the future.
In the Beginning
As this century comes to a close and the third millennium of the Common Era comes closer to reality, scientists
and scholars will begin a long process of analyzing the events and scientific accomplishments of the last century. This
process offers both strange parallels and deep contrasts with similar analyses that came at the end of the nineteenth
century. Although both the nineteenth and twentieth centuries experienced unprecedented growth and scientific
advances, the overall outlook of the scientific community at the end of the two periods is vastly different. The
nineteenth century ended with many scientists reveling in the success of Newtonian physics and believing that science
had only to fill in a few gaps to complete their explanation of nature. Yet by 1901, a new and totally unexpected
revolution in science had already been initiated. On the other hand, this century will end with many scientists
expecting another revolution in thought in the near future. It is to this end that we must now turn.
The nineteenth century came shortly after the beginning of the industrial revolution. With this came a new
emphasis on the science of energy and power as well as subsequent shifts in political attitudes and cultural mores as a
whole. Both the technological and scientific advances of that age can be characterized by the development of two
devices, the steam engine and the electric battery. While these devices were physical in character, corresponding
developments in chemistry and the atomic worldview were also significant. The science related to steam power
formed the basis of the new science of thermodynamics and the kinetic theory of matter over the next several
decades. While these were primarily extensions of the Newtonian worldview, they did introduce statistics into physics
in the name of the advancement of knowledge. The electric battery led to the birth of electromagnetism and the new
concept of fields of potential. While electromagnetism was decidedly non-Newtonian, the first interpretations of
electromagnetism were definitely Newtonian. Indeed, these were the very scientific advances that led scientists and
scholars to the belief that they were on the verge of a total understanding of nature by the end of the nineteenth
century.
These and other scientific advances also had unintended consequences which are normally ignored by scholars,
historians and scientists. The successes of science were a primary factor in the philosophical debates of the late
nineteenth century that began to question what it was that science was investigating. Did the laws of nature describe
the reality of the world or were they just our common human perception of that reality? This philosophical movement
was antithetical but intimately related to the popular movement of spiritualism. On the one hand, the overwhelming
success of science and technology encouraged an interest in them among the common and, many times, uneducated or inadequately educated masses. Segments of this cultural movement combined with popular forms of spiritualism and
religion to give birth to what has become known as ‘modern spiritualism.’ Yet the success of science offered a unique
opportunity for scientists, scholars and philosophers to once again address issues that had been popular a century
earlier but largely ignored as science built upon its own foundations. These controversial issues included questions
regarding the nature of life and the relation between living and dead matter, as well as the mind/body paradox. The
issues were not just the results of advances in physics, but were also the stepchildren of advances in biology, chemistry
and geology as well as the advent of the Darwinian theory of evolution. Such issues developed the collective
consciousness of the whole scientific and scholarly communities.
Out of this mixture of questions on mind, body, life, spirit and the human perception of reality emerged the new
science of psychology. Even today, very few scientists and especially psychologists would admit that psychology's
parents were physics and spiritualism, and its closest sibling is parapsychology, but that fact is nonetheless true.
When all of these notions are considered, it is easy to see why so many believed that science was nearly complete
at the close of the nineteenth century. Yet, at the same time, it is easy for modern scholars and historians to find the
first signs of the new revolution in science within the confines of this grand milieu of ideas. The new revolution that
evolved out of this period was non-Newtonian even though it was rooted within the greatest successes of the
Newtonian worldview. One early crack in the Newtonian worldview came with the discovery of non-Euclidean
geometries at the outset of the nineteenth century. Newtonian physics and its corresponding worldview were based
squarely upon Euclidean geometry. There was no other geometry to be considered until the non-Euclidean geometries
were discovered, and the Euclidean correspondence to Newtonian physics was taken completely for granted by
scientists and scholars alike. The importance of Euclidean geometry to the Newtonian worldview is clearly
emphasized by the fact that the first discoveries of non-Euclidean geometry were held in secret by J.K.F. Gauss, who
feared the “clamor of the Boeotians,” a vague reference to the uproar the discovery would make given the Newtonian
view of reality.
When the non-Euclidean geometries finally became known within the academic community during the second
half of the nineteenth century, some scientists immediately applied the new geometries to physics and some
astronomers began to search for observational evidence that the universe as a whole was non-Euclidean. (Beichler,
1988, 1996) Scientific and philosophical debates ensued and the issue was not settled by the end of the nineteenth
century. All that was known was that space might be Euclidean, but the matter itself could not be proven either way. If
space in the large scale was non-Euclidean, it was so close to Euclidean that astronomical observations could not
distinguish the exact geometrical nature of space. (Robert Ball, 1881, 518-519) The case for a non-Euclidean or hyperdimensional physical space was not helped by the fact that some spiritualists and religious philosophers attempted to locate ghosts, spirits, angels and even heaven in higher dimensions of space, which angered the scientific community.
Other, more serious cracks in the Newtonian picture came during the last two decades of the century. Ernst
Mach made the first successful argument that Newtonian absolute space was not necessary in physics, although his
argument had no practical significance to physics until the twentieth century. The earlier attempts to explain
electromagnetic theory in terms of Newtonian physics resulted in the development of the concept of a luminiferous
aether. This aether was a non-inertial or massless substance whose sole significance lay in the fact that it lent to
absolute space enough substantiality to allow for the transmission of electromagnetic waves through the vacuum of
space between the stars. The luminiferous aether had no other physical characteristics that affected mechanics.
Michelson and later Michelson and Morley proved rather decisively that no aether existed. Although their
experimental work provided a believable coffin in which to place the aether theory and that particular Newtonian
interpretation of electromagnetic theory, it did not provide an immediate burial of the aether concept. Scientists
realized the importance of the Michelson-Morley experiments almost immediately, but still attempted to place
bandages over the wounds that it caused in the Newtonian worldview.
And finally, the discovery of radioactivity just before the close of the nineteenth century opened a whole new
vista of research and physical concepts that had not been dreamed of before. Since the discovery came so near the end of the century, researchers would not have been familiar enough with the characteristics of radioactivity to even guess the
problems that its existence posed for the Newtonian worldview and all of physics. Nature does not follow the man-made calendar and has no psychological stake in the turning of a century or millennium. It is only coincidental that
these changes came at the turn of the century, which is no more than a psychological marker for human culture and
science. The turn of a century is only a convenient psychological moment for humankind to assess its past and
attempt to foresee its future.
At the turn of the century, psychology was first beginning to emerge from its roots in the mind/body problem in
physics and philosophy as well as its paranormal roots. This last break is best illustrated by the work of William James.
In a sense, his work offers a cusp or watershed that is useful when gauging the emergence of psychology as a science
in its own right. James believed in psychical or paranormal phenomena. Not spiritualism, but ESP and similar
phenomena. He developed the concept of ‘subliminal perception’ to explain ESP. That fact would come as a shock to
many modern psychologists who see the concept as an important factor in any psychological research, devoid of all
psychic or paranormal content. Yet in its modern form, ‘subliminal perception’ is an important and valid concept within standard psychology and is completely dissociated from the paranormal. ‘Subliminal perception’ marks the limit
of one’s psychological awareness of his or her surroundings while psychic phenomena are past that limit in modern
psychological thought.
James emphasized the concept of consciousness as well as the psychic aspects of psychology. The concept of
consciousness has been ignored by psychologists for all those decades since it was first emphasized by James, just as
the paranormal was effectively stripped from the study of mind and psychology. However, a new emphasis has been
placed upon the direct study of consciousness, just within the past two decades. Many have attributed the long gap in the history of this subject to an early takeover of psychology by the ‘behaviorists.’ Behavioral psychology
has recently been toppled from its high perch as psychologists begin to look more closely at environmental factors in
human psychology. In a very real sense, behavioral psychology emphasized a mind/mind or human-to-human
approach to psychology, subtly maintaining the mind/matter split of the Newtonian/Cartesian worldview, even
during this century when the Newtonian worldview has been de-emphasized in physics and the physical sciences.
The dominant view of the behaviorists grew at the expense of the mind/body or human/environment emphasis
in which physicists and other scientists participated. This marked the modern breach between psychology and the
other sciences that has only been healed within the past two decades. But the growth of the behavioral school
coincides directly with the growth of the non-Newtonian concepts in the physical sciences that have marked the
Second Scientific Revolution of this century. The new emphasis on the study of ‘consciousness’, which has developed in the scientific community within the past two decades, is indicative of the recent changes in the scientific view toward modern physics that have evolved concurrently.
The new revolution in physics during this last century evolved from the problems inherent in unifying the
electromagnetic theory with Newtonian physics. This unification came within a scientific climate clouded by the
discovery and explanation of radioactivity. Electromagnetic theory alone was successful in describing how electricity
and magnetism worked independently as well as together, especially in the transmission of electromagnetic waves
through space. But it was not so successful in explaining the minutest interactions of matter and electromagnetic
waves. Max Planck’s explanation of blackbody radiation, Albert Einstein’s explanation of the photoelectric effect and
Bohr’s model of the electronic shells of hydrogen and other atoms all came as answers to questions raised concerning
the interaction of matter and electromagnetic waves at the atomic and subatomic levels of physical reality. These and
subsequent developments in quantum physics formed a very serious blow to the time honored Newtonian concepts
of nature and the world. They introduced the new ideas of probability, chance, indeterminism and the wave/particle
paradox into modern physics.
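Common to all three of these answers was Planck’s quantization hypothesis, given here in its standard textbook form for reference: radiant energy is exchanged with matter only in discrete units proportional to frequency,

    E = h \nu ,

so that, for example, Einstein’s photoelectric equation gives the maximum kinetic energy of an ejected electron as K_{max} = h \nu - W, where W is the work function of the metal. No classical wave theory could account for such granular exchanges of energy.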
During the previous century, statistics were introduced into physics via the study of thermodynamics. Even
though statistical analyses gave the correct mathematical models for large ensembles of particles, individual molecules and particles of matter were still thought to act causally and deterministically in a Newtonian
manner. Statistics provided a stopgap method to describe large ensembles of motion in cases where the individual
Newtonian motions could not be practically measured and accounted for. Statistical analysis of such large ensembles
mimicked measurable and discernable macroscopic quantities that resulted from microscopic physical interactions. On
the other hand, radioactivity elevated statistical methods to probabilities and demonstrated that chance and
randomness were essential features of nature at the most fundamental level. This fact opened the door for the
incursion of indeterminism into the newly forming worldview after the turn of the century. This indeterminism was
institutionalized in physics by the Heisenberg uncertainty principle. These various conceptual changes took place over
the first three decades of the twentieth century and are regarded as the “thirty years that shook physics.” These were
the revolutionary years in physics during which quantum theory came to form the most important basis of science,
although not the only basis of modern physics.
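The contrast with nineteenth-century statistics can be put in a single formula. Each nucleus in a radioactive sample has a fixed probability lambda per unit time of decaying, and nothing observable distinguishes the nucleus that decays in the next second from the one that survives for a century; yet this irreducibly random rule for individuals yields the exact ensemble law

    N(t) = N_0 \, e^{-\lambda t} ,

in which chance is no longer a bookkeeping convenience, as it was in the kinetic theory, but the fundamental mechanism itself.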
Einstein and others solved the other side of the electromagnetism/Newtonian mechanics problem, as expressed in the null result of the Michelson and Morley experiment, with the development of the theories of relativity. Special relativity
came first. Although it is now considered in its more mechanistic manifestations of time dilation, energy-mass
equivalence, Lorentz contraction and explosive increase of inertial mass at near light speeds, it originated as a theory
of electrodynamics, an attempt to reconcile the Newtonian and Maxwellian theories. With the main emphasis of the
purely mechanistic properties of special relativity, its electrodynamic origins have almost been forgotten while the
resulting misinterpretation of special relativity abounds in common culture as well as science in general. It is often
thought that special relativity forbade the existence of absolute space and the luminiferous aether, but it only rendered
them superfluous as unnecessary concepts for use in modern physics. This was only a de facto banishment of the
concepts and general relativity has occasionally been accused of rendering a new absolute space-time. Special relativity
was itself limited in scope from its very inception. It covered only systems moving at relative constant speeds, so
Einstein generalized his theory of relativity to include accelerated systems leading to the general theory of relativity.
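All of these mechanistic manifestations follow from the single Lorentz factor; in their standard form,

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} , \qquad \Delta t = \gamma \, \Delta t_0 , \qquad L = L_0 / \gamma , \qquad E = \gamma \, m_0 c^2 ,

these relations reduce to their Newtonian counterparts when v is small compared with c and diverge as v approaches c, which accounts for the ‘explosive increase of inertial mass’ at near light speeds.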
The special theory itself was revolutionary in that it destroyed the Newtonian concepts of constant mass and
absolute time, unifying space and time within a single structure as well as deriving the basic relationship between mass
and energy. But general relativity went further in adopting a non-Euclidean geometry as the fundamental geometry of
the universe. There are three general interpretations of the fundamental principle of general relativity, the curvature of
the space-time continuum. (Graves, 182-183) Einstein espoused each of these interpretations at different times during
his career. In the first interpretation, mass distorts space-time to cause the curvature, so mass represents the reality. In
the second, mass distorts space-time and changes in curvature cause the motion of matter. Matter and curvature are
equally real. And finally, the curvature itself is the fundamental reality and matter is just a manifestation of curvature
within the continuum. Although the mathematical model of space-time represented by general relativity does not
distinguish between these three interpretations, the distinctions made in these interpretations lead to different
consequences in advancing field theory to include other fields such as the electromagnetic field.
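The three interpretations can be read directly off the standard form of Einstein’s field equations,

    R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu} ,

in which the left-hand side describes the curvature of space-time and the right-hand side the distribution of matter and energy. The first interpretation takes the material right-hand side as the fundamental reality, the second grants both sides equal status, and the third regards the geometric left-hand side as fundamental, with matter a mere manifestation of curvature.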
Einstein was not the first to equate space curvature with matter and ripples in space curvature with the motion of matter. William Kingdon Clifford first proposed this conceptual advance four decades before Einstein's theory. (Clifford, 1870) Although Clifford’s theory was fundamentally electromagnetic, he planned to include all the forces of nature in his theory, including gravity (Clifford, 1887), while Einstein’s was a theory of gravitation. Einstein dealt with a space-time continuum rather than the separate space and time that Clifford used. Einstein’s theory was also successful in
describing the finer points of gravitation as opposed to Clifford’s theory, which was never completely enunciated
before his untimely death. Even so, Clifford’s theory is closer to the later attempts to derive a unified field theory than it is to Einstein’s original version of general relativity, contrary to what most scholars claim today.
Einstein’s development of both theories of relativity was concurrent with the development of quantum theory.
In fact, Einstein’s physics can be found at the most basic levels of development in both branches of physics in the
new scientific revolution, relativity and the quantum. It is something of a historical paradox that Einstein’s work was
crucial in the early development of the quantum theory because Einstein never accepted the probabilistic
interpretation of quantum mechanics as expressed in the Copenhagen Interpretation. The different strands of thought
that represented quantum theory jelled into a single coherent body of knowledge at the 1927 Solvay Conference
against Einstein’s arguments and objections. What evolved at that time has come to be called the Copenhagen
Interpretation of quantum mechanics. In this interpretation, quantum theory, in the form of the Heisenberg
uncertainty principle, placed a precise limit on human knowledge of the world and physical reality. Since momentum
and position could not be known simultaneously with any degree of accuracy, the fundamental basis of physical reality
was thought to exist (or perhaps not exist) beyond all human attempts to observe and probe that reality, for now and
all of the future of science.
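In its familiar modern form the uncertainty principle reads

    \Delta x \, \Delta p \geq \hbar / 2 ,

so that any reduction in the uncertainty of a particle’s position must be paid for by a corresponding increase in the uncertainty of its momentum, and vice versa; the product of the two can never be squeezed below the fixed quantum limit.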
When placed in terms of wave mechanics, the Schrödinger wave equation was given a probabilistic interpretation
such that the square of the wave function became the probability of locating the wave, as an individual material
particle, at any point in space-time. Before the act of measurement, as initiated in the laboratory, the wave
representing a material particle had some probability of being located at any position in space-time. The act of
measurement localizes the particle-wave to a single position in space-time, an action that is known as the ‘collapse of
the wave function.’ According to the Copenhagen Interpretation, the particle does not physically exist prior to this
‘collapse’ and therefore has no physical reality until the interaction that caused the ‘collapse’ in the first place. In the
most extreme version of the Copenhagen Interpretation, there is no physical reality until our consciousness interacts
with the wave function so science cannot probe beyond this point. Therefore, the Heisenberg uncertainty principle
marks a strict and unbreakable limit to our knowledge of physical reality. It is this interpretation of quantum theory
which Einstein so sternly and steadfastly objected to, but it became the primary interpretation of quantum mechanics
after the Solvay Conference of 1927. Einstein’s refusal to accept this interpretation branded him something of a
pariah from this time forward, and he spent the rest of his life developing a unified field theory from whose field equations he dreamed the quantum of action would evolve.
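For reference, the probability rule at issue here, the one Einstein could never accept as fundamental, is the Born rule: for a wave function psi(x, t), the quantity

    |\psi(x, t)|^2 \, d^3x

gives the probability of finding the particle in the volume element d^3x at time t, with the total probability normalized so that \int |\psi|^2 \, d^3x = 1. Before a measurement only this distribution exists; the act of measurement selects one definite position.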
While this brief explanation does touch on the major contributing events in the conceptual developments of the
Second Scientific Revolution, as well as summarize the evolving trends, it is far from complete. What is important is
the fact that these new physical concepts were alien to the Newtonian worldview and thus represented a fundamental
conceptual shift in scientific thought and attitude. The worldview of the scientific community thus began to follow an
evolutionary track that sought the incorporation and elaboration of these new ideas. The success of the quantum
theory under the influence of the Copenhagen Interpretation was both phenomenal and immediate, as opposed to the
general theory of relativity, which languished (by comparison) over the next four decades. Until the 1960s, there was
little practical application of general relativity so most physicists chose to work in the areas of physics dominated by
quantum mechanics.
The Copenhagen Interpretation was a philosophical force to be reckoned with within the physics community.
During this period, it once again came to seem that science was nearing a state of completion and the quantum theory
would eventually explain all of nature. A state of scientific complacency and smugness emerged toward new ideas
which were at odds with the Copenhagen Interpretation and science seemed only to increase the accuracy of
measurements, dot their i's and cross their t's, while the last vestiges of a quantum world were being discovered.
Success followed success in the application of quantum concepts to nature, leading to the development of quantum
electrodynamics and ‘quantum field theory.’ Yet no one objected that the concept of a ‘quantum field theory’ is itself
an illogical proposition. The phrase is an oxymoron.
The quantum of action represents the discrete and particulate in nature, while the field represents that which is
physically continuous. Since quantum and field represent opposing physical realities, the concept of a ‘quantum field
theory’ is ridiculous as it stands. The major objective in ‘quantum field theory’ has been to reduce all the forces of
nature, which act in the manner of fields, to the exchange of material particles such as gravitons, photons and virtual
particles. The field concept would thus disappear at the expense of the quantum of action. No one has sufficiently
addressed this contradiction in terms with respect to developing a theory or explanation of the difference between the
quantum and field. The prevailing view of those who adhere strictly to the Copenhagen Interpretation and believe that
the Heisenberg uncertainty principle limits how closely and accurately we can observe reality has been largely accepted
without challenge to its basic assumption of a discrete physical reality. Yet this is only one of the philosophical
problems that beset the standard quantum conception of nature.
Parallel developments at the fringe of legitimate science
Physical theories of psi in the pre-scientific period
Physical theories of psi in the early scientific period
Physical theories of psi during the middle scientific period
(The legitimization of parapsychology)
Physical theories of psi during the late scientific period
(The legitimization of paraphysics)
When trends collide
CHAPTER 4
TOEs are the purported ‘theories of everything’ that are being proposed by physicists. The best
contenders for TOEs are billed as theories of the next century because the mathematical methods to
solve the problems set forth by the theories have not yet been developed. Unfortunately, scientists and
scholars have not yet realized that placing so much faith in their mathematical models is merely
substituting the finger pointing at the moon for the moon itself. The answers they seek are actually
as plain as ‘the nose on your face,’ if they would only return to the fundamental basics of physics.
Unification and the basic food groups
The greatest long-term trend in all of science can be found in the attempted unification of the laws of nature that
comprise the science of physics. In general, science progresses by several methods. Among these are included both
the synthesis of pre-existing concepts and the explanation of newly discovered phenomena. While the synthesis of
accepted theories and concepts forms the basis of unification, the explanation of newly discovered phenomena also
follows the pattern of unification. When they are first discovered, an attempt is made to explain new phenomena by
older accepted theories. However, if they cannot be so explained then new theories are developed so that science can
cope with the new phenomena. Later, the new theories are unified with the older theories, either as addendums to the
older theories once they are expanded, or they are unified through the development of a still newer theory that
incorporates both the old and new concepts in a more comprehensive model of nature.
Examples of both of these synthesizing processes abound in the history of physics and science, so much so that
unification seems to be a major, if not the primary task of theoretical physics. The development of thermodynamics
offers an excellent example of this synthesis process. Thermodynamics was born in the 1840s when James Joule
unified two independent branches of Natural Philosophy: the kinetic theory of matter, which explained heat, and Newtonian mechanics. This unification seemed inevitable since Hermann von Helmholtz and other scientists came to
the same conclusions independent of Joule’s groundbreaking unification. Joule’s unification is highly significant in the
history of science because it forced the emergence of physics as an academic discipline separate from Natural
Philosophy.
On the other hand, the development of the electromagnetic theory, which occurred over more than a century of
physical research, involved the discovery of new phenomena that could not be explained within the Newtonian
paradigm. Although many scientists were involved with the development of the concept of electromagnetism and
made significant contributions to the theory, the two most important were Michael Faraday, who laid the
experimental foundations of electromagnetism between 1820 and 1850, and James Clerk Maxwell, who rendered
Faraday’s theory into a mathematical model in the 1860s after appropriate modifications and additions. What had
originally been two separate branches of scientific enquiry, electricity and magnetism, had been unified into a single
and comprehensive electromagnetic theory. The history of science for that century-long process is virtually littered
with new discoveries of electrical phenomena. However, the story of electromagnetism did not end at that moment in
history.
While electromagnetic theory explained many phenomena dealing with the propagation of light waves and
successfully predicted still more phenomena, problems rapidly arose with phenomena that were associated with both
Newtonian mechanics and electromagnetism. Primarily, those phenomena that showed evidence of an interaction
between the smallest particles of matter and electromagnetic waves defied explanation by either Newtonian mechanics
or electromagnetic theory. These phenomena included the spectral lines of elements and compounds as well as black
body radiation. In still another area where these two paradigms came into contact, the necessity of the luminiferous
aether for the mechanical propagation of light waves, both theories fell apart as demonstrated by the Michelson-Morley and similar experiments. It was from these and similar failures of the two theories that two new unifications
evolved which altered the course of physics, the developments of quantum theory and special relativity at the turn of
the last century.
In the opening years of the twentieth century, quantum theory and special relativity were successful in unifying
mechanics and electromagnetism at a very high price for classical physics. Fundamental changes in physics became
more and more evident as each new theory progressed beyond its original formulation. The quantum theory of 1901
developed into a system of quantum and wave mechanics by 1927 and special relativity expanded into general
relativity by 1916. In these forms, each new theory came to represent essentially incompatible aspects of reality.
Quantum mechanics relies upon the discrete nature of reality while general relativity portrays the continuous nature of
reality as represented by the concept of the field. After the 1920s, quantum mechanics became the dominant theory in
modern physics for several decades. Einstein never fully accepted the Copenhagen Interpretation of quantum
mechanics and spent the remaining decades of his life in opposition to mainstream physics while searching for a
unified field theory that would unite the electromagnetic and gravitational fields within a single field model. He hoped
that the quantum would literally appear as a byproduct of the mathematics modeling his unified field. In this
endeavor, very few physicists came to the aid of Einstein.
In the meantime, most physicists accepted the Copenhagen Interpretation of the quantum and sought to unify
physics according to their own model of reality. This line of thought culminated in such concepts as the quantum field
theory (QFT) and quantum electrodynamics (QED), but these theories were never totally successful in their
unification of the quantum and special relativity while gravitation theory has never been incorporated into the
quantum model. The lack of success in uniting even special relativity and quantum mechanics was well recognized by
the founders of QFT.
The ambitious program of explaining all properties of particles and all of their interactions in terms of fields
has actually been successful only for three of them: the photons, electrons and positrons. This limited
quantum field theory has the special name of quantum electrodynamics. It results from a union of classical
electrodynamics and quantum theory, modified to be compatible with the principles of relativity.
(Guillemin, 176)
As Guillemin has testified in his history of quantum theory, QED is only “compatible” with the principles of
relativity. It does not provide a framework for the unification of special relativity and the quantum. This same idea has
also been confirmed by Julian Schwinger, one of the founders of quantum electrodynamics, who summed up the
situation in 1956.
It seems that we have reached the limits of the quantum theory of measurement, which asserts the
possibility of instantaneous observations, without reference to specific agencies. The localization of charge
with indefinite precision requires for its realization a coupling with the electromagnetic field that can attain
arbitrarily large magnitudes. The resulting appearance of divergences, and contradictions, serves to deny the
basic measurement hypothesis. We conclude that a convergent theory cannot be formulated consistently
within the framework of present space-time concepts. To limit the magnitude of interactions while retaining
the customary coordinate description is contradictory, since no mechanism is provided for precisely
localized measurements. (Schwinger, xvii)
Schwinger clearly acknowledged in this statement that QED, the primary form of a quantum field theory, has reached
a specific limit whereby it cannot be judged without reference to an outside framework of space-time.
The prevalent framework of space-time, also referred to as the “customary coordinate description,” at this
juncture of history is that supplied by the theories of relativity, so Schwinger obviously believed that QED had not yet
been unified with special relativity. Special relativity just forms a limiting condition for the mathematical model of quantum field theory, which does not indicate that the two have been unified in a single theory. It would further seem that
the real unification to which scientists subscribe is between quantum mechanics and GR, since both describe the
motion of matter in space-time while the unification with GR would certainly include an implied unification with
special relativity.
New scientific advances in the 1960s and thereafter have brought GR to the forefront of physical research even
as old philosophical problems which have plagued the Copenhagen Interpretation of quantum mechanics were
revealing more cracks in the prevalent quantum paradigm. During the last few decades, these later developments have
produced a climate of change within theoretical physics, which has resulted in a renewal of Einstein’s search for a
unified field theory. However, nearly all of the modern attempts at unification follow first from quantum theory rather than beginning from field theory as represented in special relativity or general relativity. Most scientists treat relativity as something to add onto or incorporate into the quantum perspective. Under these circumstances, models for unification have been proposed with such colorful names as supergravity, Grand Unification, superstrings and finally the ‘Theory of Everything’ (TOE). Even accepting the possibility that a TOE might exist marks a drastic change of attitude within the scientific community. But the problem of unifying the discrete and continuous aspects of the physical world has never been resolved in spite of attempts to do so from both the quantum and field approaches, and more recent attempts to base a future theory of physics on Einstein’s hoped-for unified field must still come to terms with this dual aspect of nature. This dichotomy is represented indirectly within modern physics by such concepts as the wave/particle duality of both matter and light.
Although the philosophical problems presented by the differences between the discrete and the continuous are not universally recognized in the physics community, a few physicists have been brave enough to question the established norms of modern physics in this regard. In his book A Unified Grand Tour of Theoretical Physics, Ian D. Lawrie has confirmed the uneasiness felt by physicists, although he has not clearly defined the cause of his concerns beyond stating that modern physicists “do not properly understand what it is that quantum theory tells us about the nature of the physical world” even though “there are respectable scientists who write with confidence on the subject.”
Evidently, “the conceptual basis of the theory is still somewhat obscure.” (Lawrie, 95) Mendel Sachs is far more
straightforward with his criticisms. Sachs has noted two distinct and separate strains of scientific progress within
modern physics.
The compelling point about the simultaneous occurrence of these two revolutions (relativity and the
quantum) is that when their axiomatic bases are examined together, as the basis of a more general theory
that could encompass explanations of phenomena that require conditions imposed by both theories of
matter (such as current ‘high energy physics’), it is found that the widened basis, which is called ‘relativistic
quantum field theory’, is indeed logically inconsistent because there appear, under a single umbrella,
assertions that logically exclude each other. (Sachs, 1988, 236-237)
Sachs is, of course, referring to the logical and mutually exclusive nature of the quantum (the discrete) and the field
(the continuous). He does little to hide either this fact or his criticism of the shortcomings of present day physics.
Sachs has concluded that “neither the quantum theory nor the theory of relativity are in themselves complete as
fundamental theories of matter,” (Sachs, 256) due to the fact that they represent incompatible fundamental concepts
of the discrete and continuous aspects of nature.
These philosophical problems have physical counterparts within the mathematical model of the singularity. GR
falls apart at just the point where the continuous field of gravity meets the physical boundaries of the discrete particles whose curvature creates gravity, the point at which the curvature of the space-time metric becomes so extreme that it grows infinite. This singularity occurs not only at the heart of elementary particles, but also in black holes and the Big Bang,
which theoretically created our universe. On the other hand, quantum mechanics deals with singularities in a different
manner, although no more successfully than GR deals with them. Quantum mechanics utilizes a rather artificial
method known as renormalization to deal with singularities, or divergences as they are called, and then ignores the
problems created by the divergences.
In QED, each particle is associated with a field, so there are as many fields as there are different particles. This
situation gives rise to an unpleasant expansion of the concept of field which some have criticized. (Popper, 194) Yet
far more serious problems exist in QED. At the point where the different fields interact, one would expect to find the
action and reaction of the particles as caused by forces in the classical sense of the term. However, mathematical
divergences that render the masses of elementary particles infinite and undefined exist at the precise point where the
fields interact. These divergences or infinities can be renormalized to yield definite answers by applying mathematical
perturbation methods. Therefore, localizing and defining a point particle in QED amounts to using an artificial
mathematical method for no other physically valid reason than that the method yields finite results that can be
experimentally verified.
While many scientists do not see this procedure as a problem since its predictive power makes QED one of the
most successful theories ever developed in science, the artificial nature of renormalization is at the very least
philosophically unsatisfying and unsettling to other scientists and scholars. The method is considered at least ad hoc,
but otherwise a necessary evil at present. Karl Popper was very critical of this shortcoming of QED.
Moreover, the situation is unsatisfactory even within electrodynamics, in spite of its predictive
successes. For the theory, as it stands, is not a deductive system. It is, rather, something between a deductive
system and a collection of calculating procedures of somewhat ad hoc character. I have in mind, especially,
the so-called ‘method of renormalization’: at present, it involves the replacement of an expression of the form
‘lim log x - lim log y’ by the expression ‘lim (log x - log y)’; a replacement for which no better justification is
offered than that the former expression turns out to be equal to ∞ - ∞ and therefore to be indeterminate,
while the latter expression leads to excellent results (especially in the calculation of the so-called Lamb-Rutherford shift). It should be possible, I think, either to find a theoretical justification for the replacement
or else to replace the physical idea of renormalization by another physical idea - one that allows us to avoid
these indeterminate expressions. (Popper, 194-195)
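Popper’s complaint can be restated in modern limit notation. The following is only an illustrative sketch, with x and y standing in for whatever regularized quantities diverge together (an assumption made for the sake of the example):

\lim_{x \to \infty} \log x \;-\; \lim_{y \to \infty} \log y \;=\; \infty - \infty \quad \text{(indeterminate)}

\lim_{\substack{x,\, y \to \infty \\ x/y \to c}} \bigl( \log x - \log y \bigr) \;=\; \lim \log \frac{x}{y} \;=\; \log c \quad \text{(finite)}

Taken separately, the two limits diverge; taken together inside a single limit, the divergences cancel and a finite number remains. It is precisely this reordering of limits for which Popper demanded a physical justification.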
Although this criticism was leveled several decades ago, shortly after QED was first developed, it is still a valid
criticism. QED is considered theoretically inconsistent because of these and other problems in spite of its great
successes. It is only a theory of electron interactions and does not unify electromagnetism with any other of the basic
forces in nature.
During the 1960s and 1970s, quantum chromodynamics (QCD), which is a theory of the strong nuclear interactions, followed QED. QCD combines the Yang-Mills field, which describes the forces binding quarks into the neutrons and protons of the atomic nucleus, with the quark model of the Standard Model. QCD does not share the inconsistencies that plagued QED, but
does require renormalization methods similar to those in QED. Renormalization methods therefore remain a general
characteristic of the quantum field theories and QCD suffers from some of the same criticisms that plague QED. It is
hoped by quantum theorists that newer methods and developments in quantum field theory will eventually justify or
replace renormalization, rendering quantum field theory more palatable to its critics.
Their hope is that by including gravitational interactions in the existing formulations of quantum field
theory systems, it will be possible to construct a finite theory without any infinite renormalizations. They
thus hope to avoid the consistency problem of the renormalization theory. (Schweber, 603)
Yet these problems persist. A large part of the work in quantum field theory still involves finding proper
renormalization procedures that yield workable solutions. If a mathematical method of renormalization works, it is
adopted as long as its results are experimentally confirmed, even if there is no physical reason for its success. This is
not the best situation for physics, but physicists have accepted the method in principle and moved on to other
theories.
The infinite masses of elementary particles that arise in QED and QCD before renormalization can be effectively compared to the singularity problems of GR. However, such a comparison has not been common within
either physics or the philosophy of science. In spite of the lack of recognition of this ‘coincidence,’ physics has
continued to progress toward unification. It would seem that the fact that two fundamentally different approaches to
physical reality, the continuous field and the discrete quantum, lead to the same inconsistency would be an important
clue to identifying the problem of their unification. Since this clue has gone unnoticed, unification has proceeded
along other lines. Both quantum physicists and relativity physicists dance around the problem of the
singularity/divergence even though this problem should represent the object of the main thrust toward unifying the
two perspectives of physical reality.
Unification using a fifth dimension
Klein’s interpretation of the Kaluza theory
The super theories
Universality and the single field theory
a. Philosophical arguments for an extra dimension
b. The basic assumption and its consequences
c. Specifics of the model
d. Quantum theory revisited
e. What’s so special about special relativity?
f. Relating to special relativity
g. Tying up loose ends
CHAPTER 5
Strange facts find a theory: A new dimension for psi
The answers will not be found by inventing new quantities and qualities to explain psi. The
answers will come by redefining the most fundamental quantities by which science begins its
explanation of the world so that they better reflect our present state of knowledge about the world.
Finding the correct questions to ask of nature is the real answer, once the terms of nature have been
defined.
That’s life!
Many physicists and scholars believe that there is a great wall between the physics of centuries past and the
physics of this century. The older style of physics is ‘classical’ while the physics of this century is ‘modern.’ Today,
many also claim that ‘their’ physics is ‘post-modern,’ which must mean that it is somehow radically different from (and more advanced than) normal twentieth-century physics. But perhaps they are just too quick to dismiss the older styles of physics. In spite of modern protestations, our science remains mechanistic in the sense that scientists’ primary concern is discovering the laws, rules and principles that govern ‘matter in motion’ at the most fundamental level of nature. In this regard, even modern science follows strictly Newtonian, and even pre-Newtonian, ideals.
There are glimmers of hope that science may be breaking out of this mold, but they exist only at the edge of
science. Parapsychology and paraphysics are growing along this fringe area. There are also trends in science, already discussed, that, although they are not breaking the mold of mechanism, are at least stretching it to cover new areas of nature. The recent incursion of consciousness into the study of physics is the best example of this change in
attitude. Consciousness entered physics through the quantum mechanical process of ‘collapsing’ the wave packet. It
might seem that consciousness is necessary to ‘collapse’ the wave packet, but such a belief cannot be sustained under close scrutiny unless consciousness permeates each and every point in the universe, and most scientists and scholars
would never accept such an assumption. The logical conclusion to this dilemma would be to accept the alternate ‘fact’
that consciousness ‘collapses’ the wave packet, but does not permeate all of matter. This has led many to the
unsatisfactory conclusion that humans are the only conscious beings and we literally create physical reality and the
universe by our thoughts. This position is also absurd.
The only true conclusion to this dilemma is to recognize that consciousness is sufficient to ‘collapse’ the wave
packet, but it is not necessary for the ‘collapse.’ The process can proceed in the absence of consciousness. The fact
that scientists can design and conduct an experiment to ‘collapse’ wave packets at some level of reality, and thus
manipulate nature, does not guarantee that only consciousness can ‘collapse’ a wave packet, and then only human
consciousness. The concept of consciousness with which all scientists, and especially physicists, work is too broad and
ill-defined. The concept of consciousness with which physicists deal is not necessarily the same as that with which
psychologists, doctors, psychiatrists, brain chemists or philosophers deal. The concept of consciousness upon which physicists speculate is far broader in scope than the one an anesthetist deals with when rendering a patient
‘unconscious’ during a medical operation. Physicists cannot, and should not, talk about consciousness unless they can define what they mean by the term independent of the ‘collapse’ of the wave packet. Until this is done, the ‘collapse’ of the wave packet will never be understood at the level of physicists’ speculations about the process and will remain a purely mechanistic detail in a physics that is still characterized by its Newtonian overtones.
Even if consciousness defies a coherent and concise definition, there is enough data and information to render an approximate idea of some of its parameters. A definition of consciousness based on its observed properties would
suffice to develop a theory of consciousness. The primary and overriding fact is that consciousness seems to be
associated with living, animate matter. So a search for consciousness must first deal with the concept of life and living
matter. Associating consciousness with living matter would sidestep any arguments about which types of complex systems are conscious: humans, dolphins, trees, dogs, cats, tulips or paramecia. Let the writers of science fiction deal
with that issue for the time being. A more fruitful path to follow would lead science to define animate matter rather
than consciousness.
Physics is reductionist. It reduces the world that we perceive to the smallest and most fundamental bits of matter,
elementary particles and the fields associated with them, and then describes how they interact. One form of that
fundamental interaction would lead to animate matter and another to inanimate matter. Animate matter can be
considered an extended material body that acts in a particular manner. Inanimate matter is an extended body that
interacts with the rest of the universe by reacting to the basic forces of nature. But all animate matter goes beyond
simply reacting to the basic forces of nature. In other words, animate matter is self-motivating while inanimate matter
is motivated or moved by external forces.
In the context of physics there are two and only two classifications or types of matter: animate (or living) matter
and inanimate (or non-living) matter. Physics has never stressed the differences between the two and has therefore treated all animate matter as acting toward the external world in the same way that inanimate matter acts. A simple list
of the properties of both types of matter is sufficient to illustrate the differences between them and act as a guide to
developing a physics of living matter. The properties of these two types must be listed according to the reductionist
manner of science and must be placed in terms conducive to the scientific study of physics. Thus the most common
features or properties that are inherent in all forms of living matter must be enumerated.
Of these various properties, self-motivation is the simplest and most fundamental property that we can attribute to
animate matter because procreation and negentropy are structural characteristics while Mind and consciousness may
not be universal characteristics of animate matter. In other words, only self-motivation is necessary for living matter
within the context of physics although it may not be sufficient to define life outside of the science of physics. Pure
physics is only concerned with ‘matter in motion,’ not with purpose or interpretation, which are the functions of mind
and consciousness.
When it comes to motion, inanimate matter always follows the field gradient. In other words, inanimate matter
follows the physical laws of motion without deviation. Any deviation from the laws of motion that is displayed by
inanimate matter would only demonstrate that our laws of motion are inadequate, not that inanimate matter defies the
laws of nature. Physicists may take this fact on faith, but they do practice their craft assuming that it is true.
Physics has always dealt with inanimate matter or with animate matter acting as inanimate. Physics has never
distinguished between the two. Both a human body and a lump of coal will fall at the same rate in a gravitational field, but the human body has a choice to go over the cliff or not; the lump of coal does not make that choice.
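This distinction can be put in rough schematic form. The following is only an illustrative sketch in conventional notation, not a formula taken from the theory itself: for a field potential \Phi, inanimate matter simply follows the field gradient,

m \ddot{\vec{x}} = -m \nabla \Phi,

while a self-motivating body adds an internally generated term,

m \ddot{\vec{x}} = -m \nabla \Phi + \vec{F}_{\mathrm{self}},

where \vec{F}_{\mathrm{self}} stands for whatever force the organism’s own choice-making apparatus applies along or against the gradient. The physics of inanimate matter is then just the special case \vec{F}_{\mathrm{self}} = 0.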
Self-motivating matter has the ability to sense its surroundings and make a choice that allows it to either follow
the field gradient or move against the field gradient. This choice represents a willful action. In turn, willful action requires some form of mechanism-controlling device, and that controlling device, whether extremely simple or
quite complex, can be called a brain. The brain is just a choice-making device, but it is still a mechanical device that
can be duplicated in non-living, inanimate bodies, i.e., robots or simple computers. This does not mean that ‘robots’
are living organisms nor that computers can really ‘think.’ Such questions form some of the great debates of our time.
It merely means that some sort of device, at the very least a simple brain which has the ability to detect and interpret
simple environmental stimuli and then make a choice between possible modes of interacting with the environment, is
a sufficient and effective device for living organisms to survive in their environment. So while the existence of a brain
or other such controlling device is necessary for living bodies, it is not unique to living bodies. It would therefore seem that something more than a brain is necessary for living matter, and this would be some level of mind.
The concept of mind is different from brain, because mind has an ‘awareness’ of the brain, the body and the
environment at least at some very basic level. The brain is a physical device, however simple or complex, that is
localized in the body. There is no evidence that mind is a local phenomenon, i.e., that it is fixed at a single position in the
body, although the physical brain has nearly always been considered the seat of the mind. Mind is holistic rather than
reductive, so it is doubtful that mind can be understood through the reductive and mechanistic logical process
followed in physics. As far as we can determine, consciousness exists at a still higher level or higher order than mind,
but neither is necessary for understanding simple ‘matter in motion.’ All that is necessary at this level of physics is the
fact that animate matter is self-motivating and inanimate matter is not. After all, physics reduces the world to ‘matter
in motion,’ so the differentiation between self-motivating and not self-motivating is all that simple physics requires to
begin the process of understanding living matter.
Historically, scholars of all persuasions have long suspected that there is far more to life than could be reduced to
the mechanisms described by physics. Scholars and scientists alike have invented several different quantities and
qualities to describe this difference. Concepts of a ‘life force,’ ‘vis viva’ and ‘elan vital’ have cropped up in science
several times in the past three centuries. Today’s equivalent of these concepts can be found in such terms as ‘wholism’
and ‘complexity,’ but the basic ideas that they describe are still the same. Even in these cases, the inventing of new quantities that cannot be measured is of little help in understanding the physics of motion for animate bodies of matter. So again, we have come back to a concept of self-motivation as the only method by which physics can account
for the differences between animate and inanimate objects.
Once self-motivation has been accepted as an adequate and effective basis for a physics of life, a new question
emerges: How does self-motivation work in living organisms within the five-dimensional model? A living organism is
a large and very complex material system. In spite of the complexity of the organism, it can still be considered a large
number of mutually interacting particles. The physical integrity of the organism is established by the specialized
chemical interactions within the organism. These chemical interactions can be further reduced to the interplay of
electromagnetic fields and material particles. Within this context, they represent the mechanical basis of the living
organism. But in the five-dimensional model, the various particles are more than the sum of their normal physicochemical interactions.
The individual particles are connected to each other, as they are to all particles in the universe, via A-lines
perpendicular to their own individual axial A-lines. These extra-normal connections (outside of the space-time sheet)
cannot be equated to the special quality that makes life different from inanimate matter, but they are related to that
special quality which is unique to all life. Inanimate matter would have similar connections between particles, but the
complexity of chemical interactions between the various chemical substructures of living organisms, in conjunction with these extra-dimensional A-lines, allows a special coupling between inanimate particles within living organisms that is directly related to that which is life in any particular animate body. This coupling is not present between the
particles that compose inanimate bodies of matter.
Living organisms are negentropic. Matter has a tendency, called entropy, to form less complex bodies or systems. This tendency is explicit in the second law of thermodynamics and other physical principles. On the
other hand, living organisms form more complex structures and substructures rather than devolving into less complex
structures. So they evolve in a direction opposite to entropy and are thus negentropic. Evolution is a natural process
that is associated with living bodies primarily because they are negentropic. The use of the concept of evolution with a
negentropic system denotes an element of time that is perfectly logical since entropy is considered the ‘arrow of time’
in normal physics. Complexity is related to negentropy, and negentropy is related to living organisms. So, through
some mechanism that is not yet completely known to science, living organisms reorganize matter in more complex
systems and structures to grow and thrive. This negentropic effect characterizes all living bodies (or animate matter),
but it also affects the A-lines of the individual particles which make up a living body in a specific manner which allows
the A-lines to couple together independent of their connections with other material particles in the universe.
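It may help to note, as a hedged aside in standard thermodynamic notation, that negentropy in this sense does not violate the second law. For an isolated system the second law requires

\frac{dS_{\mathrm{total}}}{dt} \geq 0,

but a living organism is an open system, so its own entropy can decrease,

\frac{dS_{\mathrm{org}}}{dt} < 0 \quad \text{provided that} \quad \frac{dS_{\mathrm{org}}}{dt} + \frac{dS_{\mathrm{env}}}{dt} \geq 0,

with the excess entropy exported to the environment as heat and waste. The negentropy of an organism is the locally maintained deficit relative to its maximum possible entropy.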
There are ongoing chemical processes in living bodies that are unmatched in inanimate bodies. These chemical
processes include the exchange of energy between particles. Normally, particles of all types of matter have various
energies of motion associated with them, such as the kinetic energy of orbiting electrons in atoms and molecules. But
animate organisms must also have special energies associated with the special chemical reactions and interactions that
are unique to life. These special chemical reactions follow specific repetitive patterns throughout the living body and
could be interpreted as kinetic energy interchanges between the particles undergoing the specified chemical reactions.
So there are specific patterns of energy exchange between the material particles from which a living body is
composed. The more complex the living organisms, the more complex the chemical patterns or signatures of the
organisms, and the more complex the patterns of kinetic energy interchange between particles. Living organisms are
characterized by specific energy signatures, which are related to the patterns of chemical interactions in their bodies.
In a physical sense, these patterns or signatures could be considered resonances between the five-dimensional extensions of the material particles within living organisms, and the wave functions of those particles would form a special coherence or coherent state.
As an individual particle’s speed increases or decreases during the interchange of kinetic energy of the chemical
process, its extension in the fifth dimension increases or decreases according to the special theory of relativity. The
particle’s center of field density shifts further into the fifth dimension when its speed increases and back toward the
space-time sheet when its relative speed decreases. In the case of a living organism in which specific and well-defined
chemical patterns are established and repeated throughout the organism, specific patterns of five-dimensional
extensions are also established. The concerted shifting of centers of field density along the axial A-lines of large
numbers of particles cause specific orientations in the A-lines connecting particles perpendicular to the axial A-lines.
Since the same chemical affects numerous particles in living bodies processes, these extra-dimensional A-lines couple
together in specific ways. These A-line couplings are unique to living organisms and give living organisms an extradimensional signature that is missing in inanimate matter and non-living bodies. The more complex the living
organism, the more complex the patterns of A-line coupling, and the more evolved the living organism. These
couplings are intimately related to life and the living process, although alone they neither define nor constitute the quality of matter called life. These couplings and their related energy signatures do constitute what is normally called Chi,
Qi or Ki in Eastern science and philosophical systems.
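The text does not give a formula for the amount of fifth-dimensional extension, but since the extension is said to follow the special theory of relativity, one natural reading, offered here only as an interpretive sketch, is that it scales with the Lorentz factor

\gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}},

so that a particle’s center of field density would shift further along its axial A-line as its speed v, and hence \gamma, increases during a kinetic-energy exchange, and relax back toward the space-time sheet as v decreases.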
The physical framework of psi
“Eureka!” he exclaimed, with a psi of relief
The various forms of psi
Tying up loose ends: Exploring the forbidden country
Some preliminary concluding remarks
Conclusion
Solving the Universe!
W.K. Clifford used the phrase “Solving the Universe” in the 1870s. Clifford was the first scientist
to consider a single field solution for all of physics. His legacy is carried on today by those scientists
who wish to develop a ‘theory of everything.’ Today’s theory would need to include consciousness and
psi phenomena, but any theory to include these must proceed from physics rather than to physics
from the paranormal.
TOEs?
“Solving the universe” is a phrase that was used by William Kingdon Clifford during the 1870s to describe his
attempts to derive a theory of matter based upon curved space. Today, the phrase better fits what are called ‘theories
of everything’ or TOEs. Even though some scientists are working on TOEs, it is questionable whether any theory can
explain ‘everything’ in the universe. The name TOE may be no more than an example of errant human egocentrism
(it is very egocentric to presume that we know the whole universe from our extremely small corner of reality),
scientific bravado or possibly a false hope of a limited reality which we will soon understand, but the acronym TOE is
‘cute’ and ‘catchy’ so it has become popular.
The five-dimensional theory proposed in this volume, dubbed SOFT for single field theory (or single
‘operational’ field theory if the ‘o’ needs elucidation), comes far closer than any other theory to the mystical concept
of a TOE, but it is not truly a TOE. It could only be considered a TOE if we wish to belittle what the term
‘everything’ actually refers to.
Several philosophical problems arise if the notion of a TOE is even contemplated. P.C.W. Davies and Julian
Brown have stated that the ultimate TOE would not be subject to falsification.
The ultimate TOE would, ideally, need no recourse to experiment at all! Everything would be defined
in terms of everything else. Only a single undetermined parameter would remain, to define the scale of units
with which the elements of the theory are quantified. This alone must be fixed empirically (In this ultimate
case, experiment merely serves to define a measurement convention. It does not determine any parameter in
the theory). Such a theory would be based upon a single principle, a principle from which all of nature
flows. This principle would presumably be a succinct mathematical expression that alone embodied all of
fundamental physics. In the words of Leon Lederman, director of Fermilab - the giant particle accelerator
facility near Chicago - it would be a formula that you could ‘wear on your T shirt’. (Davies and Brown, 7)
So, if the ultimate TOE that scientists presently seek explains ‘everything,’ it can be neither verified nor falsified as the
true picture of nature. This conclusion would seem logical given Gödel’s theorem whereby we would have to
somehow go outside of the universe, or outside of this ultimate TOE, to a more comprehensive physical reality to
determine the validity of this mythical TOE. Quite simply, a true TOE could never be verified or otherwise subject to
falsification. If it explained everything, it could never predict new phenomena to test and therefore, could never be
open to falsification. But even before that point is reached, the most advanced theories will become more difficult to
falsify or verify. The more comprehensive and complete any theory becomes, the more difficult falsification will prove
to be, and SOFT is no exception.
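Since Gödel’s theorem is invoked here and again in the closing argument, it may help to state it precisely, with the caveat that its application to physical theories is an analogy rather than a formal consequence. Gödel’s first incompleteness theorem (in Rosser’s strengthened form) says that for any consistent formal system F strong enough to encode elementary arithmetic,

\exists\, G_F \ \text{such that} \ F \nvdash G_F \ \text{and} \ F \nvdash \neg G_F,

that is, F contains sentences it can neither prove nor refute, and deciding them requires reasoning within some stronger system outside F. The claim in the text is the analogous intuition that the validity of an ultimate TOE could only be judged from a vantage point outside the reality it describes.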
As it stands, SOFT is too broad and flexible for direct falsification. In general, scientists do not like such broad
and comprehensive theories since they leave nothing by which they can be verified. At the very least, scientists are
suspicious of the claims made by those who propose such over-reaching theories. But this theory represents the
logical application of the single simple hypothesis of a real fifth dimension that is either falsifiable or verifiable by its
individual elements, consequences and applications, although not the whole. The problem with proving this theory, of
demonstrating that it is open to falsification, is inherent in the very nature of the five-dimensional concept.
It is just as Einstein stated: the five-dimensional hypothesis cannot be accepted unless an explanation of why we
cannot perceive the fifth dimension is given. Expanding upon this idea, the linear one-dimensional extensions of the
dimensionless mathematical points in the fifth dimension which represent real positions in the four-dimensional
space-time continuum require that every occurrence, event and phenomenon in the normal four dimensions have a
direct five-dimensional counterpart or component. On the other hand, any five-dimensional occurrence, event,
phenomenon or (subtle) influence such as psi invokes a corresponding four-dimensional action at the
mechanical/chemical/physiological level. This requirement is a new version of the chicken-egg paradox.
There is no easy way to distinguish action, interaction or influence in the fifth dimension if a corresponding four-dimensional explanation can be derived. The fifth dimension and the four normal dimensions of space-time are not
separate things. So demonstrating the validity of the five-dimensional hypothesis is difficult since the ‘cause’ of an
event could just as likely be found in either the normal four dimensions of experience or the fifth-dimensional structure. Scientists and philosophers will always choose the explanation built upon that which is normally perceptible over one built upon that which is not. However difficult falsification may be, it is not impossible since SOFT is not a true TOE, or rather
it is not the ultimate TOE. It is subject to justification, verification and falsification by determining physical
phenomena or events that depend directly upon the geometrical characteristics of the five-dimensional structure.
Four obvious areas that could lead to falsification were implied in the earlier discussion of this theory. Each of
these areas or aspects of SOFT represents what would have appeared as independent theories if not for the umbrella
of SOFT. They are a comprehensive theory of the atomic nucleus, a theory of galactic spiral formation, a quantitative
physical theory of life based upon the chemical structure of a simple living organism, and the relationship between psi
and human emotional states. These individual theories were either briefly explained within the text or directly implied
in the foregoing explanation of the overall theory.
The purpose of this work was not to give a complete and comprehensive exposition of individual consequences
of the five-dimensional hypothesis, but to support the hypothesis of a real fifth dimension by demonstrating its logical
necessity, illustrate how the five-dimensional hypothesis can be used to unify other physical theories and solve logical
paradoxes between those theories, and explore how the fifth dimension can account for a vast array of physical
phenomena that are otherwise without explanation. The key feature of SOFT is its overall ‘wholeness’ - how all the
elements come together or fall into place within a single logical framework. Individual consequences and applications
of SOFT, such as these four theories, are to be worked out later. ‘Solving the universe’ takes more time and effort
than can be expended within the writing of a single volume. In this respect, SOFT is a theory that is also its own
‘programme’ for future development as well as its own metaphysics.
No theory of the atomic nucleus has ever been completely successful. In fact, different theories explain different
physical features of the nucleus, but are not compatible with each other. The five-dimensional theory of the atomic
nucleus suggested by SOFT depends on the specific geometrical characteristics of the five-dimensional structure, so it
should verify the existence of the fifth dimension and render SOFT falsifiable. In this case, the explanation of the
nucleus reduces to a five-dimensional packing problem with the nucleus taking on the particular geometrical
characteristics of the extended dimension. The fifth dimension actually determines the physical characteristics of the
nucleus in our four-dimensional space-time continuum and will thus lead to predictions regarding nuclear decay and
the stability of nuclei.
The predominant spiral form of galaxies results from a Coriolis Effect related to the expansion of the universe, a
known phenomenon. Without the extra dimension, there would be no such Coriolis Effect. Using a mathematical
model based on chaos theory, computer simulations of galactic evolution given specific initial distributions of matter
and other initial conditions will explain the variety of galactic shapes observed by astronomers. Not only will the spiral
shapes be explained, but globular clusters and other formations will also be accounted for. When the same techniques
are applied to smaller volumes of primordial gas and dust clouds, the evolution of stellar systems will be explained.
Even the Titius-Bode law will finally be explained. On yet a smaller scale, subtle planetary precessions such as Earth’s ‘Chandler wobble’ will become a consequence of the same Coriolis Effect. As the calculational techniques are perfected, measurements of the ‘Chandler wobble’ and other observed phenomena will be used to predict the overall
size and curvature of the universe. Such predictions can be verified by astronomical observations, rendering this
particular aspect of SOFT falsifiable. Predictions of the shape and distribution of matter in other planetary systems
will also be verified as we discover more planetary systems through new observations.
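No such simulation is worked out in the text, but a minimal toy sketch of the kind of computation described might look like the following Python fragment, which integrates test particles falling toward a central attractor with an added Coriolis-like term. The coupling constant omega, the attraction constant g and all initial conditions are arbitrary stand-ins for illustration, not values derived from the five-dimensional theory:

import numpy as np

# Toy model: test particles in a plane, attracted toward the center,
# with an added Coriolis-like acceleration -2*(Omega x v) standing in
# for the effect the text attributes to the expansion of the universe.

rng = np.random.default_rng(0)
n = 2000
r = rng.uniform(0.5, 5.0, n)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
pos = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
vel = np.zeros_like(pos)

omega = 0.3   # strength of the Coriolis-like coupling (assumed value)
g = 1.0       # central attraction constant (assumed value)
dt = 0.01

def accel(pos, vel):
    """Central attraction plus a planar Coriolis-like term -2*(Omega x v)."""
    rad = np.linalg.norm(pos, axis=1, keepdims=True) + 1e-9
    a_central = -g * pos / rad**3
    # With Omega along the z-axis, Omega x v = (-omega*vy, omega*vx),
    # so -2*(Omega x v) = (2*omega*vy, -2*omega*vx).
    a_coriolis = np.stack([2.0 * omega * vel[:, 1],
                           -2.0 * omega * vel[:, 0]], axis=1)
    return a_central + a_coriolis

for _ in range(5000):   # simple fixed-step integration
    vel += accel(pos, vel) * dt
    pos += vel * dt

With omega = 0 the particles, starting from rest, fall straight toward the center; with omega nonzero every trajectory is deflected sideways and the infall curves, a crude planar analogue of the spiral-producing mechanism the theory proposes. A serious test of the claim would of course require the full chaotic, self-gravitating model described above.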
The mathematics of chaotic systems will also provide the models necessary to explain the physical basis of life. A
detailed study of the chemical interactions within the simplest living organisms and the application of the five-dimensional hypothesis will confirm the new theory of life. When all of the chemical interactions within the simple
organism are identified and catalogued, the energy patterns representing each different class of chemical interactions
can be modeled. The levels or states of order that emerge from the seemingly chaotic energy interactions will yield
further complexities upon which life itself will be modeled. Testing new variables and varying existing parameters
within this simple model will allow scientists to predict the different life forms that can be found in nature. Further
studies upon the simplest models will yield predictions regarding the nature of life and the characteristics of individual organisms, as well as the emergence of mind and brain and other functional organs within life forms.
The verification of psi will come from experiment rather than theoretical research since psi is the result of action
and interaction at a much higher level of complexity than simple life. SOFT implies that psi reception occurs best
when emotional factors are considered. The relationship between energy changes and alterations in the five-dimensional single field, which influence non-local objects, is quite straightforward. On the other hand, the physiological relationship between emotional states and energy fluctuations within an organism or body is also well documented. Therefore, an experimentally verifiable link exists between emotional states and psi reception/
transmission. Knowledge of this link should lead to new methods and designs of experiments that will verify psi and
confirm the link between psi and the physics of extra dimensions. Eventually, quantifiable models of psi will evolve
from SOFT, but that possibility depends upon new advances within biology, physiology and understanding of living
organisms.
While these four aspects will lead to complete theories in themselves, they are interrelated through SOFT. Each
of these theories confirms another portion of the overall space-time structure represented by SOFT. This situation
may well be unprecedented in physics. SOFT cannot be proven as a whole, but each individual triumph for the theory
builds confidence that the basic hypothesis of an extra dimension is both viable and necessary for the advancement of
science and the human species. Nor do these four theories exhaust the possibilities represented by SOFT. They only
represent the most obvious extensions of the five-dimensional hypothesis. More applications will be found and
eventually psi will be used as a tool of exploration in science.
The fact that this theory can be extended in so many different ways, even while it covers so many various and
seemingly unrelated phenomena, is its most intriguing feature. SOFT is so open-ended that it will act as its own
metaphysics and guide for individual applications as it develops beyond the initial stages of its application. The
relationship between physics, paraphysics and metaphysics thus alters the range and scope of possibilities of science
itself. If we take the sum total of that which we can know, normal physics explains only the small portion of things that are known, and metaphysics acts as a buffer between the known and the not-yet-known. Metaphysics is the academic study or the area of thought where what
we know about nature and reality leads to speculation about the rest of nature and reality that are not yet known. So
metaphysics acts as a guide and a list of possibilities for future physics and science, but the establishment of
paraphysics alters this basic relationship.
In one fell swoop, paraphysics intervenes between normal physics and metaphysics. It pushes the whole of
physics well into the area of metaphysics and forces metaphysics to a new level of speculation and application within
the whole.
This new relationship results from the fact that SOFT is based upon a more fundamental concept than normal
physics: Matter and ‘matter in motion,’ which form the basis of normal physical explanation, are reduced to curvature and ‘variations of curvature’ in paraphysics, while field density variations in the fifth dimension become the new physical basis of the space-time continuum in which curvature occurs. The switch to a more fundamental concept for physics, creating the new paraphysics, renders many old problems and priorities in metaphysics into physical terms and enlarges the overall field of physics, although a distinct paraphysics still remains. Physics is natural and normal, but paraphysics
is natural and paranormal.
Yet not all of the problems and questions in the older metaphysics are solved; a few remain for metaphysical explanation and speculation. Among those questions that still remain metaphysical, Free Will is probably the most
tenacious and far reaching. An explanation of Free Will is beyond even the concept of consciousness because Free
Will assumes choices based upon knowledge. The ultimate act of Free Will must therefore include not only knowledge
gained through psi, consciousness, mind and life, but Free Will must be able to manipulate these facets of physical
reality for its own benefit, to gain knowledge. Free Will is the highest form or result of human conscious activity. Free
Will is the last refuge of metaphysics that cannot be reduced to paraphysics or physics. Yet free will shall eventually be
reduced to choices that are based upon the human awareness or knowledge of physical reality in our expanding
concept of what is physically real.
It may be wise to differentiate between ‘relative’ free will (denoted by lower-case letters) and ‘absolute’ Free Will
(denoted by capital letters). In our everyday lives, we normally deal with free will, i.e., we make choices which are not
forced upon us by purely environmental (common physical) concerns: We choose how to act, react or interact with the physical stimuli through which we sense the world around us. We assume free will because we make choices on how
to act, react and interact with some form of reality. On the other hand, true Free Will assumes no outside influence.
Yet we cannot really say that our free will choice to eat asparagus (for example) is absolutely free of influence. At any
particular time we make that or any individual choice based upon either some event or sequence of events that
occurred in our past or present environmental influences. So free will may not always be so free. We can never isolate
and identify all of the influences that may affect our choices, so we only assume that some common choices are based
on free will.
From the five-dimensional point of view, the situation is even more complicated. The physical influences upon
which we make free will choices are further influenced by the vast new array of physical stimuli and knowledge
arriving via higher levels of conscious non-local contacts, consciousness itself and psi. Beyond even this, there is the
fact that future events can influence free will choices in the present through conscious or subconscious precognition.
The past and future are equally distant and accessible from the present in the five-dimensional perspective. So free will
seems even less likely from the five-dimensional point of view. However, the fact that we can recognize the existence of free will and contemplate the concept of free will does imply that Free Will exists. Where reason fails to confirm the
existence of free will, intuition validates its existence. We accept the concept of Free Will based upon faith as well as
our common experience of free will. There is no reason to doubt the existence of Free Will, which has now been
raised to a much higher level of existence than ever before.
In order to be a true TOE, the ultimate (proverbial big) TOE, a theory must explain Free Will, which may be
impossible. This admission does not mean that Free Will is impossible; it means that an explanation of Free Will may
be beyond both reason and intuition. A true TOE will eventually reduce physical reality to human reason, at which
time ‘only’ intuition could move thought beyond the TOE. If humans indeed possess Free Will, then there is an
extension of each and every human being which exists beyond anything that we can imagine at present, because true
Free Will assumes everything that we presently know, can know in the future or imagine is used to make a final
decision or choice. Choice presupposes a comparison of at least two possible outcomes and if one possible outcome
is represented or explained via a true TOE then the other possible outcome must go beyond that TOE and the
physical reality that it explains. If that ultimate TOE includes the sum total of reasonable knowledge about physical
reality, then only a single final intuitive leap beyond all possible reason can give a being the second option for making
a Free Will choice.
In other words, if everything in the universe, all of physical reality, can be reduced to its most fundamental
elements (represented by mathematical axioms) and described by a single concept (represented by a mathematical
equation) then we would still have the choice of whether to apply this knowledge or not within the physical universe.
This assumes that the infinite regress of fundamental elements by which we construct our notion of the physics of
reality has a logical endpoint upon which we can base that single most fundamental concept. In accordance with
Gödel’s theorem, a confirmation of the existence of that physical reality must depend upon some still higher and
more comprehensive reality (the penultimate mathematical system?). The final choice by Free Will must depend on
the existence of that higher reality which allows us to formulate the second option for a legitimate choice utilizing
Free Will. How we may wish to define that final reality would be more of a religious conviction than a scientific
prediction at this moment in science and human evolution.
Bibliography