Cosmogenesis: The Alpha and the Omega
Paul A. LaViolette
October 1973 - January 1974
This unpublished paper presents a glimpse into the early formulation of the
subquantum kinetics methodology less than one year after its initial inception and
11 years prior to its first journal publication. It presents philosophical
underpinnings of the theory showing the rationale for an open system view of the
microphysical realm and the need for more than three dimensions for space. It
also discusses how this novel approach allows a departure from the conventional
indeterministic physics paradigm in agreement with the views of de Broglie and
Einstein and offers a natural solution for the field-source dualism that plagues
standard physics. In this earlier version LaViolette did not refer to ether
substrates or etherons, but used the less controversial term "media" to describe the
entities engaged in the postulated subquantum reaction and diffusion processes.
Comments to the text are marked as updates.
Warehouse Myopia
What could be less interesting than seeing the inside of a furniture warehouse? A huge dusty
room hundreds of feet long, furniture neatly piled along its center aisle; the appliances and lamps
to the left, the rocking chairs to the right. Further down to the right you can see the buffets and
shelves stored on their roller carts ready to be pulled out of their respective places at a moment's
notice. These groupings of furniture seen as a whole seem to compose an exquisitely ordered
structure, an inanimate spatial classification of wares, static and immortal.
But to stop our description here would be misleading, for all about there is much activity.
Workers are unloading furniture from incoming company vans and carting the pieces to their
respective places on the warehouse floor. If they have unloaded a sofa, it will be wheeled to the
spot where the sofas are stored. If it's a rug, it will be loaded into the rug bin, and so on. At the
same time other workers are removing pieces of furniture from their respective locations and
carting them to the outgoing delivery trucks.
After many weeks of observation we would notice that these groupings of furniture had not
changed appreciably in size or in relative placement. Yet, the furniture or components which
compose the groupings might be entirely different from week to week, a particular piece of
furniture having an average of seven days residence time in the warehouse. In such a situation it
is said that the inflow and outflow of furniture pieces maintains a state of dynamic equilibrium
(or steady-state equilibrium), where the process of building up of furniture groupings (anabolism)
is balanced by the process of their destruction (catabolism). The two processes taken together
are referred to as metabolism. The structures thus formed, the furniture groupings, are said to
metabolize. If either of the metabolic subprocesses ceases, the state of dynamic equilibrium will
be upset and the metabolic structure will disappear. For example, if the delivery trucks went on
strike, catabolism would cease, and the furniture would begin to build up in the warehouse. The
groupings would become choked and eventually completely disordered. On the other hand, if the
company vans went on strike, anabolism would cease and the furniture in the warehouse would
begin to dwindle. The groupings would become atrophied and would eventually disappear
altogether. While observing, we would also notice that the metabolic process is dissipative, the
workers being seen to expend energy to move the furniture. Consequently, we may refer to the
furniture groupings as being "dissipative space structures".
From our observations of a typical furniture warehouse we have learned two things about
"warehouse structure": 1) warehouse structure is formed only in the presence of a component
flow accompanied by a dissipation of energy, and 2) warehouse structure is metabolic; it persists
as long as the import and export flows are in a state of dynamic equilibrium.
Nevertheless, a naive myopic observer may come to a different set of conclusions. Not being
able to distinguish the individual pieces of furniture nor the workers busily carting the wares to
and fro, he might see only the overall groupings of classified furniture. The overall ordered
structure might seem as if composed of a collection of static entities. To him, these imposing
piles of goods would seem to cast an air of tomb-like serenity within the cavernous warehouse.
Seeking to explain their origin in the warehouse he might conclude that some time in the past
these mounds were brought in and left on the floor where they have stood ever since. He might
be led to postulate a sort of "primordial creation".
Now, imagine that one day there is an earthquake and all the piles of furniture are thrown
about the room in a state of disorder. Yet, the workers go about their business as always,
building up new groupings of order with the incoming furniture and carting away the disordered
furniture to the delivery trucks. After a week or two our piles are all back in their original places
with no trace of disorder. The naive myopic observer, however, is led to conclude that there
must be some "force of attraction" which is responsible for this amazing regeneration of
structure. He postulates that after the earthquake, these massive mounds must have gravitated
back to their original positions as might be observed when boulders roll down into a valley.
The truth is, we all suffer from warehouse myopia. It is a characteristic of perception that in
viewing dynamic forms our mind tends to grasp underlying patterns of a more static nature. For
example, when viewing a color wheel at slow speeds, we are able to distinguish the separate
colors as they whirl around. But when the speed increases, the individual colors blur, and we
instead see a circular disk of blended color. Because of the mind's inability to grasp the rapid
motion of the separate parts of the wheel, perception shifts and focuses on a "time-stable
system", the whole disk, a system constituted by a non varying pattern or repetitive event
sequence. To the naive observer the wheel indeed appears static.
Open Systems vs. Closed Systems
When we see a tornado in the distance we see a slowly moving dark funnel shaped cloud
whose internal structure appears to be static. Yet, we know its form is dynamic and owes its
existence to a rapid whirling flow of air, which we can see if we have the daring to come close
enough. Like the warehouse, the tornado imports and exports matter, in this case air of differing
densities. A given packet of air perhaps remains within the boundaries of a tornado for less than
20 seconds, yet the tornado may have a lifetime of more than half an hour. If this massive flow
of air were to suddenly cease, the tornado would disappear; its structure persisting only in the
presence of flow.
Metabolic structures such as tornadoes and furniture groupings in a warehouse are commonly
referred to as open systems, meaning that the physical boundaries of the particular structure or
system are open to the flow of components such as matter and energy. That is, the term implies
that there is an importing and exporting of components between the system and its environment.
On the other hand, systems lacking this characteristic of exchange are termed closed systems, i.e.
they are closed with respect to their environment. Processes taking place inside a closed system
must therefore be attributed solely to phenomena occurring within the system's boundaries.
But, as we have seen, it is easy for a myopic observer to mistake an open system for a closed
system, especially when the dynamic elements of the open system remain hidden from view.
That is, he thinks he has taken into account everything related to what he sees when in reality he
is not seeing everything. Seeing the interacting structures of the system as being static entities,
the observer may likely choose the closed system approach due to its simplicity. Our naive
myopic was guilty of this when he postulated his theory of attraction among warehouse furniture
groupings. He was attributing their behavior solely to agents inherent in the groupings. To
postulate open system behavior would necessitate introducing a new dimension to the system,
the environment. But since the environment remains invisible to the myopic, such a course of
action would be seen as being unnecessarily complicating.
Today it is rather commonly agreed that the closed system view has no place in the life
sciences as an explanation of structure and process. In the fields of biology, psychology,
sociology, business, economics, and information science, open system theory is widely accepted
as an approach to understanding the origin of structure and system behavior. However, it is
interesting to note that each of these sciences at one time clung tightly to a closed system view.
In the "classical era" some 60 or so years ago, business organizations were regarded as selfdeterministic
closed systems functionally independent of their environment. Classical economics
took a closed system approach with quite undesirable consequences. Before the advent of Floyd
H. Allport's event-structure theory,(1) psychology had numerous closed system theories such as
Mill's" building block" theory and the field theory of the Gestalt school of thought. Even
biologists at one time took a closed system view of life theorizing that a sperm contained a fully
developed human being in miniature which eventually grew in size.
The physical sciences, however, still adhere to the closed system view. For example,
classical thermodynamics expressly declares that its laws only apply to closed systems.
However, recently (since the mid 1940's) a new branch of thermodynamics has emerged called
nonequilibrium thermodynamics, which is concerned with the study of open systems. It has
found application in the fields of hydrodynamics and organic chemistry describing the dynamics
of matter in open systems. For example, the open system approach has proved to be useful in
describing phenomena such as the candle flame, ball lightning, the hurricane, the tornado,
turbulent flow, thermal convection currents, and nonequilibrium coupled chemical reactions.
Nevertheless, one branch of physics in particular has remained steadfastly rooted in the
closed system view, namely microphysics. Why should this be the case? Of all the sciences,
why should microphysics be the last to free itself from the closed system approach?
Observational myopia may lie at the root of the problem. Microphysics deals with phenomena
on a scale that is observationally far removed from the human scale.
To see open systems at work in a warehouse we need only open our eyes. To view the
metabolic behavior of a microorganism, we need only look through a microscope or study its
chemical composition. But, the observation of microphysical structures such as subatomic
particles is limited by their extremely small size. The well known Heisenberg uncertainty
principle, (ΔX)(ΔP) ≥ h, states that the product of the uncertainty in a particle's position ΔX,
multiplied by the uncertainty in a particle's momentum ΔP, can never be less than the constant h
(also known as the "quantum of action"). In other words, assuming that the world is
fundamentally probabilistic, it states that the more accurately we come to know a particle's
position the less accurately we know its momentum or state of motion. This is because to
determine a particle's position more accurately we must bombard it with exploratory radiation
having an increasingly shorter wavelength. But photons of shorter wavelength (higher frequency)
have greater energy and thus are capable of transferring more momentum to the particle being
studied. Thus, it is impossible to have precise and simultaneous knowledge of a particle's
position and velocity (momentum). Planck's constant, h, which equals 6.63 × 10⁻²⁷ erg-sec, is of
course extremely small with respect to more typical units of measure, making the quantum
uncertainties negligible for physical phenomena of ordinary human scale. But, in the
submicroscopic study of physical phenomena, the relative magnitude of the uncertainty is
considerable and places microphysics in the midst of a very dense observational fog. In effect,
the Heisenberg uncertainty principle declares that no matter how hard we try, we will always be
myopic in our observations of microphysical phenomena.
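As a rough worked example of the scale dependence just described (a sketch only; the value of h is the one quoted above, while the masses and position uncertainties are illustrative assumptions, not figures from the text), one can compare the minimum velocity spread implied by (ΔX)(ΔP) ≥ h for a gram-sized object with that for an electron:

```python
# Minimal sketch: relative size of Heisenberg uncertainties at human vs. atomic scale.
# Uses the value h = 6.63e-27 erg-sec quoted in the text; the masses and position
# uncertainties below are illustrative assumptions, not figures from the paper.

h = 6.63e-27  # Planck's constant in erg-sec (g cm^2 / s)

def min_velocity_uncertainty(mass_g, delta_x_cm):
    """Minimum velocity spread (cm/s) implied by (dX)(dP) >= h."""
    delta_p = h / delta_x_cm      # momentum uncertainty, g cm/s
    return delta_p / mass_g       # velocity uncertainty, cm/s

# A 1 g object localized to within 0.1 cm: the spread is utterly negligible.
print(min_velocity_uncertainty(1.0, 0.1))        # ~6.6e-26 cm/s

# An electron (~9.1e-28 g) localized to an atomic dimension of ~1e-8 cm:
# the spread is enormous on that scale.
print(min_velocity_uncertainty(9.1e-28, 1e-8))   # ~7e8 cm/s
```

The many orders of magnitude separating the two results are what place macroscopic observation outside, and microphysical observation inside, the observational fog described above.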
In light of this, how should we view material reality? Consider for the moment an electron.
Should we regard it as physicists presently do, as a closed system, as an isolated particle whose
apparently static structure has no way of being fully understood? Or, should we regard it as an
open system, a dynamic metabolic structure requiring a sustaining flow of components and that
has a determinable existence? Observations cannot tell us which view is correct because our
observations will always be myopic. So, in postulating a model for the electron, physicists have
chosen the simplest conceptual description, the closed system description. Like our warehouse
myopic, why should they theorize about component flows which they can never hope to see?
Would this not smack of quackery? The closed system view is conceptually quite logical, so why
not use it?
If indeed the electron were an open system, would not proof be required of its inherently
invisible component flows, of these "little workers" running around with their carts continually
building up and breaking down its structure? The biologist need only throw back the shutters on
his windows to see the energy source that drives the hierarchy of life. With what instrument
does the physicist "see" that vast hypothetical animating gradient that continuously maintains
the structure of all matter in the universe? Fortunately, a rational choice can be made to
determine whether the open system approach offers a better description of microphysical reality
than does the closed system approach. The superior approach will be the one which is best able
to unify all known experimental data into a simple, coherent, and understandable theory.* It is
hoped that the open system approach to be presented in this paper will eventually fulfill this
objective.
The Chemical Reaction Model of Cosmogenesis
The open system approach to microphysics which is to be developed shortly, is more easily
understood by reference to a conceptual model. Our present objective, therefore, is to become
familiar with such a model and to do this we turn to the field of chemistry. It might be noticed
that a certain class of chemical reactions exhibit structural and kinetic ordering phenomena much
like those observed at the micro-physical level. Such chemical systems are of the nonlinear
coupled variety and will be examined in the chemically open mode, far from thermodynamic
equilibrium where, it will be seen, they exhibit temporal and spatial ordering of their reactants.
_____________________
* [update] Or, in the words of Einstein: "We are seeking the simplest possible scheme of thought that
can tie together the observed facts."
To become familiar with open systems such as this, we will first examine one of the simpler
reaction systems whose behavior is portrayed by the Lotka-Volterra model.
The Lotka-Volterra model was originally introduced in the field of population biology to
describe the predator-prey interaction among species. However, it has found other applications
such as in modeling macroscopic stock market behavior,(2) and in representing certain biochemical
reaction systems characteristic of neural networks. A nonlinear, open chemical reaction system
of the Lotka-Volterra variety is represented below.
A + X  ⇌  2X          k1 (forward), k-1 (reverse);   k1 > k-1
(1)        X + Y  ⇌  2Y          k2 (forward), k-2 (reverse);   k2 > k-2
Y  ⇌  Ω           k3 (forward), k-3 (reverse);   k3 > k-3
Here we have assumed that the reactions are reversible. However, when this model is used in
population biology, only the forward reactions are written.
The X here represents the concentration, or quantity, of the "prey" chemical or species, and
the Y represents the concentration of the "predator". Flow enters the system in the form of A
which is the food supply or energy supply of the prey, and flow leaves the system in the form
of Ω representing the dissolution or death of the predator. The global reaction appears as the
following transmutation: A → Ω. The nonlinear nature of this system arises from the
autocatalytic action of the first and second equations, the first positive feedback equation
exhibiting autocatalysis with respect to X and the second with respect to Y. Given an open
system, i.e., a supply of species A continually entering the system, the first equation taken by
itself, would produce an exponentially increasing concentration of X, or in other words, a
nonlinear increase in X. But due to the coupling of the first equation with the second equation, X
is continually removed, and so it never builds up indefinitely. Similarly, Y is removed in the third
equation.
At equilibrium, the species concentrations are determined by their kinetic constants ki and by
the concentration of A in the following manner:(3)
(A/Ω)eq = (k-1 k-2 k-3)/(k1 k2 k3)
(2)
Xeq = (k1/k-1) A          Yeq = (k1 k2)/(k-1 k-2) A
If the ratio A/Ω is only slightly different from its equilibrium value shown in (2), reactions (1)
proceed steadily to the right in a linear manner, the reactants tending towards their equilibrium
values. In this near equilibrium regime the system behaves according to the laws of classical
thermodynamics which predicts that any arbitrary fluctuation in the concentration of any
chemical species tends to become damped by other spontaneous fluctuations and the resulting
species concentration tends to regress in an aperiodic manner to its steady state value.
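To make relation (2) concrete, the short sketch below evaluates the equilibrium concentrations for one arbitrarily chosen set of kinetic constants and feed concentration; all numerical values are hypothetical and serve only to illustrate the algebra.

```python
# Minimal sketch: equilibrium concentrations for the reversible Lotka-Volterra
# scheme (1), using relation (2). The rate constants and the value of A are
# hypothetical, chosen only for illustration.

k1, k2, k3    = 2.0, 2.0, 2.0   # forward rate constants
km1, km2, km3 = 1.0, 1.0, 1.0   # reverse rate constants k-1, k-2, k-3
A = 1.0                         # maintained feed concentration

A_over_Omega_eq = (km1 * km2 * km3) / (k1 * k2 * k3)   # = 0.125
X_eq = (k1 / km1) * A                                  # = 2.0
Y_eq = (k1 * k2) / (km1 * km2) * A                     # = 4.0

print(A_over_Omega_eq, X_eq, Y_eq)
```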
However, suppose the ratio A/Ω deviates significantly from its equilibrium value, either by
making the reverse reactions negligible (i.e. k-1, k-2, k-3 → 0), or by causing the flow into the
system of the energy releasing chemicals A to increase, i.e. A/Ω → ∞. Then, the gradient or
affinity of the reaction system to go toward Ω tends toward infinity, and the reactions become
irreversible.
As the affinity of the reaction is increased, a certain critical threshold will be reached. Below
this threshold, the reaction will operate in the near equilibrium regime, its reaction kinetics
proceeding randomly at the molecular level. The system can be described macroscopically by
classical thermodynamics. Its chemical concentrations will show no time dependence. Hence the
system is said to maintain a steady state.
But, beyond this threshold this steady state becomes unstable. A nonlinear regime is entered
in which the system behaves according to a new set of principles which predict the creation of
temporal ordering in the concentrations of its chemical species.* The behavior of the system,
here, is best analyzed with the use of nonequilibrium thermodynamics. The nonlinear behavior of
the first and second autocatalytic reactions (see (1)) tends to override the disrupting effect of
spontaneous fluctuations. An arbitrary fluctuation becomes sustained rather than damped and
becomes manifest as a periodic oscillation in the concentrations of X and Y. The concentrations
of X and Y, now being time dependent variables, may be described by the following kinetic
equations, A and Ω being maintained time independent:(4)
dX/dt = k1AX - k2XY
(3)
dY/dt = k2XY - k3Y
The solutions to these equations for various (X, Y) values corresponding to various
magnitudes of fluctuations from the steady state are shown in figure 1.(5)

Figure 1. Oscillations of variable species X and Y about the steady state in a
Lotka-Volterra (predator-prey) system (after Glansdorff and Prigogine, 1971).

_____________
* This event may be viewed as a set-superset transition, a simple example of how hierarchy may
arise naturally in nonliving open systems.

Each orbit plotted in the (X, Y) phase plane denotes the periodic behavior of the system over a
complete cycle. As is
seen, an infinite number of orbits around the steady state S is possible, each corresponding to a
different set of initial (X, Y) values.
Each orbit appears as a state of marginal stability, where even a minor fluctuation is sufficient
to change the oscillation of the system to a new orbit and consequently to a new frequency.
These sustained oscillations provide an example of dissipative temporal ordering. The use of the
word "dissipative" implies that chemical energy is being expended or dissipated and as a result of
this time ordered patterns emerge in the chemical concentrations.
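The closed orbits of figure 1 are easy to reproduce numerically. The sketch below integrates equations (3) from two different initial fluctuations; the rate constants, the value of A, the step size, and the simple Euler integrator are all illustrative assumptions rather than anything specified in the text.

```python
# Minimal sketch: integrate the Lotka-Volterra kinetic equations (3),
#   dX/dt = k1*A*X - k2*X*Y
#   dY/dt = k2*X*Y - k3*Y,
# for two different initial fluctuations about the steady state.
# All constants are illustrative assumptions.

k1, k2, k3, A = 1.0, 1.0, 1.0, 1.0
dt, steps = 0.001, 20000

def integrate(x, y):
    """Euler-integrate the system and return the trajectory as (x, y) pairs."""
    path = []
    for _ in range(steps):
        dx = (k1 * A * x - k2 * x * y) * dt
        dy = (k2 * x * y - k3 * y) * dt
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# The steady state is X = k3/k2 = 1, Y = k1*A/k2 = 1. Each starting fluctuation
# traces out its own closed orbit about it rather than decaying back (cf. figure 1).
orbit_small = integrate(1.1, 1.0)
orbit_large = integrate(1.8, 1.0)
print(max(x for x, _ in orbit_small), max(x for x, _ in orbit_large))
```

Because each orbit depends on the starting fluctuation, a further small disturbance simply shifts the system to a neighboring orbit and frequency, which is the marginal stability noted above.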
There are other nonlinear, open reaction systems which not only exhibit temporal ordering
but also spatial ordering of their chemical species. The thermodynamics of such systems were
pioneered by Ilya Prigogine and his coworkers.(6) Here we will review some of their work. One
such reaction scheme, known as the Brusselator system, is shown below. It is not realistic from
a chemical standpoint because tri-molecular reactions such as (4-b) are very uncommon.*
Nevertheless it is studied due to its simplicity.
(4)
A → X                 (a)
2X + Y → 3X           (b)
B + X → Y + D         (c)
X → Ω                 (d)
The initial products, A and B, and final products, D and Ω, are maintained space and time
independent throughout the system while X and Y are free to vary as dependent variables. The
inverse reaction rates are neglected and the forward kinetic constants are set equal to unity
placing the system at an infinite distance from thermodynamic equilibrium. Under these
conditions two overall irreversible reactions (A → Ω and B → D) will take place in a steady state
manner homogeneously throughout the chemical medium provided that the concentration of B is
below a critical threshold Bc, where Bc = A² + 1.(7) Thus, matter which was originally structured
as chemicals A and B ends up composing chemicals D and Ω and in the process of this
transformation composes the intermediate chemicals X and Y. In this steady state, X and Y will
have values X0 = A and Y0 = B/A.(8)
As the concentration of B is increased past threshold Bc, the concentration of Y becomes
significantly elevated and affects the dynamics of equation (4-b) which is autocatalytic with
respect to X. The steady state now becomes unstable and the system enters a nonlinear regime
where temporal fluctuations in mixture composition become amplified rather than damped. A
new stable state is reached characterized by the appearance of sustained periodic oscillations in
the concentrations of X and Y. This periodic process, called a "limit cycle", is seen in figure 2 for
a hypothetical example where A = 1 unit and B = 3 units.(9) Values of X and Y for different points
in time are here plotted against each other as was done in figure 1. However, it is seen that unlike
the Lotka-Volterra model, the limit cycle here is uniquely defined as a single, irreversible orbit.
Its frequency and amplitude are uniquely determined by the kinetic constants as well as by the
concentrations of the initial and final products in the reaction. Also, as is seen in figure 2, the
system's approach to the limit cycle is equifinal and independent of the initial values of X and Y.
________________
* Nevertheless, Lefever, et al. (1988) have shown that trimolecular reaction (4-b) can be expanded
into two coupled bi-molecular reactions. [Lefever, R., Nicolis, G., and Borckmans, P. "The
Brusselator: It does oscillate all the same." J. Chem. Soc. Faraday Trans. 1 84 (1988): 1013-1023.]
Figure 2. Computer simulation of the Brusselator system showing oscillations
of variable species X and Y in an equifinal approach to a limit cycle oscillation
(after Glansdorff and Prigogine, 1971).
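Readers who wish to verify the behavior of figure 2 can do so with the brief sketch below. The rate equations used, dX/dt = A - (B + 1)X + X²Y and dY/dt = BX - X²Y, follow from scheme (4) with the forward rate constants set to unity; the step size and the two starting mixtures are illustrative assumptions.

```python
# Minimal sketch: Brusselator limit cycle for A = 1, B = 3 (cf. figure 2).
# Rate equations follow from scheme (4) with unit rate constants:
#   dX/dt = A - (B + 1)*X + X**2 * Y
#   dY/dt = B*X - X**2 * Y
# Step size and initial mixtures are illustrative assumptions.

A, B = 1.0, 3.0
dt, steps = 0.001, 60000

def run(x, y):
    """Euler-integrate the Brusselator and return the late-time portion of the path."""
    path = []
    for _ in range(steps):
        dx = (A - (B + 1.0) * x + x * x * y) * dt
        dy = (B * x - x * x * y) * dt
        x, y = x + dx, y + dy
        path.append((x, y))
    return path[-10000:]          # keep only the last stretch of the trajectory

# Two very different starting mixtures end up on the same closed orbit,
# illustrating the equifinal approach to the limit cycle described above.
late1 = run(0.5, 0.5)
late2 = run(3.0, 3.0)
print(max(x for x, _ in late1), max(x for x, _ in late2))   # nearly identical maxima
```

Here B = 3 exceeds the threshold Bc = A² + 1 = 2, so the homogeneous steady state X0 = 1, Y0 = 3 is unstable and the unique limit cycle is reached from either starting point.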
So far we have discussed only temporal ordering in which the homogeneous chemical
transmutation of matter, which is a time-dependent process, becomes unstable and achieves a
new stable state where this transmutation is inhomogeneous with respect to time. Now, let us
consider also the phenomenon of spatial ordering. In this case, a new dimension of activity must
be added to the chemical kinetic process, this being a molecular transport; i.e., chemical diffusion.
Within this more general framework of both time and space ordering, pure temporal ordering
becomes a special case in which the diffusion coefficients of all reacting species are assumed to be
infinite or where the reactants are assumed to be homogeneously mixed.
However, if the diffusion coefficients of the oscillating species, Dx and Dy, are comparatively
low and if the reaction medium is left mechanically undisturbed, it should also be possible to
observe spatial ordering of species X and Y, that is, provided that the concentration of B is
greater than the critical threshold Bc', where Bc' = [1 + A(Dx/Dy)½]².(10) Figure 3 shows the onset
with the passage of time of such spatial variations in the concentrations of X and Y where the
subscripts denote two adjacent boxes in space.(11)
A small spatial fluctuation in the homogeneous state of Y, ΔY = Y2 -Y1, initiated at time 0
becomes amplified by the autocatalytic reaction (4-b), wherein this spatial fluctuation in Y
induces an amplified spatial fluctuation in X, which in turn feeds back through equation (4-c) to
further augment the spatial fluctuation in Y. Provided that B is sufficiently large, this autocatalytic
amplification process will override the damping effects introduced by the random diffusion
of the chemicals. Thus, a single fluctuation may grow, driving the system to a new final
state of order characterized by spatially alternating concentrations of X and Y; see figure 4.(12)
Just as in the case of the limit cycle, this new state is reached equifinally regardless of the
initial concentrations of X and Y. The wavelength and amplitude of the pattern are uniquely
determined by the concentrations of A and B, the kinetic constants, and the diffusion coefficients
Dx and Dy. Diffusion coefficients of the initial and final reactants, Da, Db, Dd, and Dω, are
assumed to be infinite, i.e., these species are assumed to be homogeneously distributed.
Figure 3. Computer simulation of the Brusselator showing the onset with the
passage of time of an inhomogeneous steady state distribution in X and Y in a
simplified two box reaction volume (after Glansdorff and Prigogine, 1971).
Figure 4. Computer simulation of the Brusselator showing the final periodic
steady-state distribution attained by the concentrations of the reaction variables
X and Y (after Glansdorff and Prigogine, 1971).
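The two-box behavior of figure 3 can be imitated with a short calculation. In the sketch below each box runs the Brusselator kinetics of scheme (4) with unit rate constants, while X and Y are exchanged between the boxes at rates Dx and Dy; the particular values of A, B, Dx, Dy, the step size, and the initial perturbation are illustrative assumptions chosen so that a small asymmetry in Y grows rather than decays.

```python
# Minimal sketch of the "two box" experiment of figure 3: Brusselator kinetics
# in two coupled cells, with X and Y exchanged diffusively between them.
# All parameter values are illustrative assumptions.

A, B   = 2.0, 4.5        # maintained feed concentrations
Dx, Dy = 1.0, 8.0        # exchange rates for X and Y between the boxes
dt, steps = 0.0005, 200000

def f(x, y):             # Brusselator kinetics, scheme (4), unit rate constants
    return A - (B + 1.0) * x + x * x * y

def g(x, y):
    return B * x - x * x * y

# Start both boxes at the homogeneous steady state X0 = A, Y0 = B/A,
# with a tiny fluctuation added to Y in box 2.
x1, y1 = A, B / A
x2, y2 = A, B / A + 0.01

for _ in range(steps):
    dx1 = (f(x1, y1) + Dx * (x2 - x1)) * dt
    dy1 = (g(x1, y1) + Dy * (y2 - y1)) * dt
    dx2 = (f(x2, y2) + Dx * (x1 - x2)) * dt
    dy2 = (g(x2, y2) + Dy * (y1 - y2)) * dt
    x1, y1, x2, y2 = x1 + dx1, y1 + dy1, x2 + dx2, y2 + dy2

# The fluctuation is amplified into a steady, inhomogeneous distribution:
# X accumulates in one box while Y accumulates in the other (cf. figures 3 and 4).
print(x1, y1)
print(x2, y2)
```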
An important point which should be emphasized is that small fluctuations, for example of
thermal origin, can no longer reverse the system configuration back to the homogeneous state.
Destruction of this "super-set" pattern will occur only if the perturbations of the steady state
concentrations are of the same order of magnitude as the difference in concentration between two
adjacent locations.
An interesting case emerges when Da is chosen to be finite but large with respect to Dx and
Dy, and when A is nonuniformly distributed.(13) Under these circumstances, the space ordered
structure described above may be localized inside the reaction volume, its boundaries being
uniquely determined by certain concentration and diffusion parameters. Its organization would
now be maintained by a flux in two dimensions; one flux, A → X, occurring as before in the
"reactant dimension", and now, an additional flux of A in the spatial dimension crossing the
spatial boundary of the structure.
Figure 5(14) shows a "nonequilibrium phase diagram" depicting the various states of time and
space ordering as they depend on the concentration of B and on the diffusion coefficient Dy,
parameters A and Dx being held constant for simplicity. In domain I, the homogeneous steady
state is stable with respect to fluctuations in mixture composition. In domain II, fluctuations
increase monotonically driving the system to a new inhomogeneous steady state corresponding to
a regular static spatial distribution of X and Y, a "dissipative space structure". Domain III marks
the appearance of a dissipative structure ordered in both space and time. In this regime the
medium remains spatially inhomogeneous while the concentrations of X and Y at each point
undergo periodic oscillations creating the appearance of a propagating spatial pattern. If Dx and
Dy are taken very large, then in domain III all space dependencies disappear and the reaction
volume oscillates everywhere with the same phase. However, in domain II the system would
remain in an aperiodic steady state.
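The two thresholds already quoted, Bc = A² + 1 for temporal ordering and Bc' = [1 + A(Dx/Dy)½]² for spatial ordering, let one see at a glance how the dominant instability shifts as Dy is varied, which is the dependence figure 5 summarizes. In the minimal sketch below, A, Dx, and the sample values of Dy are illustrative assumptions.

```python
# Minimal sketch: evaluate the two instability thresholds quoted in the text,
#   Bc  = A**2 + 1                      (onset of temporal ordering)
#   Bc' = (1 + A*sqrt(Dx/Dy))**2        (onset of spatial ordering)
# for a few values of Dy. A, Dx, and the sample Dy values are illustrative assumptions.

from math import sqrt

A, Dx = 2.0, 1.0
Bc_time = A**2 + 1.0     # independent of the diffusion coefficients

for Dy in (0.5, 4.0, 50.0):
    Bc_space = (1.0 + A * sqrt(Dx / Dy))**2
    print(f"Dy = {Dy:5.1f}   Bc (temporal) = {Bc_time:.2f}   Bc' (spatial) = {Bc_space:.2f}")
```

As Dy grows, the spatial threshold Bc' drops below the fixed temporal threshold Bc, so the kind of ordering that appears first as B is raised depends on the diffusion coefficient of Y, in qualitative accord with the domain structure of figure 5.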
Taking a moment to step back, it is interesting to note the similarity between the chemical
system just described and our crude warehouse model. The various transmuting chemical species
represent the component flow, i.e. the furniture being moved around. Both systems are
dissipative, in one case, heat is being released from the reacting chemicals and in the other case the
workers are consuming their reserve of energy, getting tired, and giving off heat. Just as does the
warehouse, the chemical system has an input flow, A → X, and an output flow, X → Ω.
Finally, in both systems there emerges a hierarchic ordering or patterning of components. On the
one hand, this is manifest as inhomogeneities in chemical composition, and on the other hand, this
is evidenced by furniture groupings.

Figure 5. Nonequilibrium phase diagram for the Brusselator system
(after Glansdorff and Prigogine, 1971).
The Belousov-Zhabotinskii Reaction
The Belousov-Zhabotinskii reaction is a fascinating example of an open chemical reaction
system that is capable of exhibiting both temporal and spatial patterning. Field, Körös and
Noyes have succeeded in determining the mechanism by which this reaction produces temporal
oscillations in a stirred homogeneous system. Their reaction scheme is shown below.(15)
HOBr + Br- + H+  ⇌  Br2 + H2O
HBrO2 + Br- + H+  →  2HOBr
BrO3- + Br- + 2H+  →  HBrO2 + HOBr
2HBrO2  →  BrO3- + HOBr + H+
BrO3- + HBrO2 + H+  ⇌  2BrO2 + H2O
BrO2 + Ce3+ + H+  ⇌  HBrO2 + Ce4+
BrO2 + Ce4+ + H2O  →  BrO3- + Ce3+ + 2H+
Br2 + CH2(COOH)2  →  BrCH(COOH)2 + Br- + H+
6Ce4+ + CH2(COOH)2 + 2H2O  →  6Ce3+ + HCOOH + 2CO2 + 6H+
4Ce4+ + BrCH(COOH)2 + 2H2O  →  Br- + 4Ce3+ + HCOOH + 2CO2 + 5H+
They explain its general functioning in the following way:(16)
"In a stirred sulfuric acid solution containing initially potassium bromate, cerium sulfate, and
malonic acid, the concentrations of bromide ion and of cerium (IV) undergo repeated
oscillations of major proportions. The concentrations of these species have been followed
potentiometrically, and the detailed mechanism of the reaction has been elucidated. When the
solution contains sufficient bromide ion, BrO3
- is reduced to Br2 by successive oxygen
transfers (two-equivalent redox processes), and the malonic acid is brominated by an
enolization mechanism. When the concentration of bromide ion becomes too small to remove
HBrO2 sufficiently rapidly, the latter reacts with BrO3
- to form BrO2 radicals which oxidize
cerium (III) by one-equivalent processes. As a result, HBrO2 is produced autocatalytically in
the net reaction BRO3
- + HBrO2 + 2Ce3+ + 3H+ → 2HBrO2 + 2Ce4+ + H2O. Indefinite
buildup of HBrO2 concentration is prevented by the second order disproportionation of this
species. The cerium(IV) oxidizes bromomalonic acid with liberation of bromide ion which
ultimately terminates the autocatalytic production of HBrO2 and initiates a repeat of the
cycle."
The species Ce3+ and Ce4+ are seen to oscillate with respect to each other in the manner of a
limit cycle, much like the oscillations in X and Y predicted by the previous model. By adding
Ferroin redox indicator to the solution the valence of Ce may be visibly followed by a color
change, the indicator appearing either red or blue depending on the ionic state. Glansdorff
and Prigogine describe the following experiment performed in a one dimensional medium, a
vertical test tube, in which they observe the appearance of a dissipative space structure; see
figure 6.(17)
"Equal volumes of Ce2(SO4)3, (4 × 10-3 M/l); KBrO3, (3.5 × 10-1 M/l); CH2(COOH)2, (1.2
M/l); H2SO4, (1.5 M/l); as well as a few drops of Ferroine (redox indicator) were stirred with
a magnetic agitator for 30 minutes at room temperature.
Two milliliters of this homogeneous mixture were then put into a test tube kept at the
constant temperature of 21 C by a thermostat and stirring discontinued.
Temporal oscillations immediately appeared; the solution in the test tube changed color
periodically from red, indicating an excess of Ce3+, to blue, indicating an excess of Ce4+, the
period depending on the initial concentrations and temperature, For the above conditions, the
period was about four minutes. The oscillations did not occur simultaneously throughout the
solution but started at one point and propagated in all directions at various speeds. After a
variable number of oscillations, a small concentration inhomogeneity then appeared, from
which alternate red and blue layers were formed one by one. ... During the formation of these
layers, time oscillations continued to be observed in the part of the solution where the
structure had not been established."
Arthur Winfree performed the reaction in a petri dish containing a solution layer up to
2 mm deep, hence forming a two dimensional medium; see figure 7.(18) He made the following
observations:(19)
"Pseudo waves (phase gradients in bulk oscillation) sweep across the reagent at variable
speed. In addition, blue waves propagate in concentric rings at fixed velocity from isolated
points (pacemaker centers) with a period shorter than the period of the bulk oscillation.
Unlike pseudo waves, these waves are blocked by impermeable barriers. They are not
reflected. They are annihilated in head-on collisions with one another. The outermost wave
surrounding a pacemaker is eliminated each time the outside fluid undergoes its spontaneous
red-blue-red transition during the bulk oscillation. Because of uniform propagation velocity
and mutual annihilation of colliding waves, faster pacemakers control domains which expand
at the expense of slower ones: each slow pacemaker is eventually dominated by the regular
arrival of waves at intervals shorter than its spontaneous period."

Figure 6. Dissipative space structure seen in the
Belousov-Zhabotinskii reaction. (Courtesy of Glansdorff and Prigogine, 1971)

Figure 7. Chemical wave fronts propagating in a dish containing the Belousov-
Zhabotinskii reaction. Courtesy of A. Winfree.
The pseudo waves or bulk oscillations spoken of here refer to spatially homogeneous limit
cycle oscillations of the medium. On the other hand, the propagating concentric rings originating
from pacemaker centers are an example of chemical ordering which is both space and time
dependent, similar to the condition predicted to exist in region III of the phase diagram in figure 5.
Winfree observed that pacemaker centers seem to arise at discontinuities such as at nuclei on
the air-liquid interfaces. He also observed pacemaker periods ranging from 15 seconds to several
minutes and found that at 25° C their waves propagate about 6 mm per minute in a 1 mm deep
medium. In another paper by DeSimone, Bell, and Scriven, wave diffraction phenomena were
reported where wave fronts propagating in a two dimensional medium were observed to diffract
around a barrier placed obliquely to their frontal boundary.
Although the mechanics of temporal ordering in the Belousov-Zhabotinskii reaction are fairly
well understood, the processes involved in the production of spatial ordering have thus far been a
subject of controversy. At present two explanations have been offered for these spatial patterns.
One view is similar to that discussed earlier with reference to the model represented by equations
(5). This view is of the opinion that diffusion plays an important role. However, some have
noted that diffusion alone cannot explain the rapidity with which these rings propagate. It has
been suggested that perhaps a "reaction enhanced diffusion" may be involved where the diffusion
of HBrO2 and the autocatalytic reaction, BrO3- + HBrO2 + 2Ce3+ + 3H+ → 2HBrO2 + 2Ce4+ +
H2O, work together to speed up the propagation of the wave front. Field and Noyes(20) have
developed a detailed explanation along these lines. They have concluded that each band as it
propagates through the medium leaves in its wake a region unfavorable for the propagation of
another band. From this they are able to explain why a trailing band will never overtake a leading
one, why a band will not be reflected by a physical obstruction, and why two colliding bands will
annihilate each other.
Another view, held by N. Kopell and L. N. Howard, considers diffusion to be a relatively
unimportant factor, though only in explaining the band patterns observed in the one dimensional,
vertical tube experiments. They feel that the oscillations in the reaction medium are spatially
uncoupled, in other words, that a valence transition in one unit volume of reaction medium is not
responsible for initiating a similar transition in an adjacent volume. They believe that the bands
appear when one of the chemical species, such as H2SO4, becomes inhomogeneously distributed.
Since the frequency of oscillation of the reaction is dependent upon the concentration of this
species, a frequency gradient would be established corresponding to this species concentration
gradient. The spatial patterns could thus be due to spatial phase variations in this temporal
ordering phenomenon, giving only the appearance of spatial ordering. So, by this second view,
only temporal ordering occurs, and not spatial ordering as suggested by the theory of Prigogine. Glansdorff and
Prigogine themselves admit that the static band pattern they observed "always appeared after an
oscillatory state"(22) and that they have not yet observed a range of concentrations, as predicted
by their theory, where a spatial structure has become established without oscillation. Dieter
Thoenes(23) has extended the phased oscillator view to explain the ring patterns observed in the
horizontal, two dimensional experiments. But here, there is a conflict with the diffusion
explanations offered by Field and Noyes, and others. Perhaps further experimentation will be
necessary to resolve these differing views.
Having gained a familiarity with nonlinear chemical kinetics we are now in a fairly good
position to tackle an open system approach to microphysics. But, before proceeding let us
survey the current state of the art in theoretical microphysics, as it presently stands based upon
closed system concepts.
Contemporary Microphysics: A Divided Science
We have seen earlier that observation of microphysical phenomena is quite difficult, that all
of us are natural born myopics. Then, in reviewing the various scientific theories on this elusive
subject what should we expect to find, mutual agreement or discord? For a preview, we may find
it helpful to study the parable of the three blind men and the elephant which many are
undoubtedly familiar with.
Three "myopic" blind men, on a walk one day, came upon an elephant. The first, feeling its
trunk, exclaimed that they must have come upon a rope. The second, feeling the elephant's side,
disagreed saying that it seemed to him more like a wall. The third, feeling the foot of the
elephant, disagreed with both of them insisting that they had come upon a tree. All the while, the
elephant was curiously amused.
Similarly, we find today that twentieth century microphysics is involved in the same sort of
myopic dilemma. The currently accepted field theory approach has evolved into three main
theoretical branches: quantum mechanics, wave mechanics, and relativity theory. Each of
these represents a separate, coherent body of knowledge, but each provides an understanding of
only a part of the totality of observed physical phenomena. Attempts toward a unification have
so far been unsuccessful. Unified field theories have instead tackled unifying electromagnetism,
gravitation, and the strong force. In an attempt to reconcile the divergent views of quantum
mechanics and wave mechanics, physicists have adopted the dualistic view that each be taken as
equally valid interpretations of the microphysical world, like viewing two sides of the same coin.
However, the coin itself has not been holistically comprehended. It is as if the blind men in our
parable have agreed that they had found an object which was both a rope and a wall, without
comprehending that they had found an elephant.
One of the major challenges for microphysical theory has been this dilemma that particles of
matter and quanta of radiant energy possess dual characteristics, in some situations seeming to
behave as corpuscles and at other times seeming to exhibit wave properties. Niels Bohr took an
indeterministic approach to the problem saying that because of the complex nature of
microscopic reality it was not possible to devise a single mental picture of a corpuscle. In other
words, that not only are we unable to directly observe corpuscles with our instruments, as is
stated in the uncertainty relations, but that even so, the human mind is simplistic and would be
incapable of conceiving this reality with a single picture. He suggested that in order to describe
this complexity it might be necessary to use successively two (or several) idealizations for a
single entity, like visualizing a cone in two dimensions as being either a circle or a triangle. He
pointed out that the particle and wave pictures do not come into direct conflict thanks to the
uncertainty relations, that the more precise it is desired to make one picture through observations,
the hazier the other becomes. Thus although one continually expects a battle between the
wave and corpuscle, it never occurs because there is never but one adversary present. This view
held by both Bohr and Heisenberg has become known as the probabilistic approach. According
to this, the corpuscle is viewed as being spatially and dynamically indeterminate having a range of
possible locations and momenta, or similarly a range of possible frequencies or energies. Louis de
Broglie, however, expressed doubts about this view of reality:(24)
"All physicists are aware that for the past 25 years wave mechanics has been interpreted on
the basis of pure probability. In this interpretation the wave associated with the particle is a
probability function which varies with the respective state of our knowledge and is thus
subject to sudden fluctuations, while the particle is said to lack a permanent localization in
space and thus to be unable to describe a well-defined trajectory. This way of looking at the
wave-particle dualism goes by the name of 'complementarity,' a very vague notion which some
have tried to extrapolate from physics to other disciplines, often with dangerous
consequences."
This subjectivist approach, although successful at keeping peace between quantum and wave
mechanics, has consequently underrated man's ability to grasp microphysical reality and tended
to negate the possibility that a hidden underlying reality may exist. The uncertainty principle,
which was originally intended as a statement of the limitations of observation, has been extended
by the probabilistic interpretation into a law of nature. De Broglie expresses the following feelings
regarding indeterminism in modern physics:(25)
"There has been a great deal of discussion in the last years about this question of
indeterminism in the new mechanics. A certain number of physicists still manifest the
greatest repugnance to consider as final the abandonment of a rigorous determinism, as
present day quantum physics must do. They have gone to the length of saying that a non-deterministic
science is inconceivable. This opinion seems exaggerated to us, since quantum
physics does exist and it is indeterministic. But it seems to us perfectly permissible to think
that, some day or other, physics will return to the paths of determinism and that then the
present stage of this science will seem to us to have been a momentary detour during which
the insufficiency of our conceptions had forced us to abandon provisionally our following
exactly the determinism of phenomena on the atomic scale. It is possible that our present
inability to follow the thread of causality in the microscopic world is due to our using
concepts such as those of corpuscles, space, time, etc.: these concepts that we have
constructed by starting with the data of our current macroscopic experience, these we have
carried over into the microscopic description and nothing assures us, but rather to the
contrary, that they are adapted to representing reality in this field."
In an attempt to achieve a more concrete picture of the wave-corpuscle duality, de Broglie
proposed his pilot wave theory. According to this, the corpuscle is considered as a kind of
singularity in the midst of an extended wave phenomenon. These pilot waves, as they are called,
are seen as being separate from the particle but closely associated with it such that the particle's
motion is controlled or piloted by them, much like a cork that is carried along by a current. The
amplitude of this wave group must be modulated in such a way that its value is non-zero only
over a finite region of space in the vicinity of the particle. Properties such as mass, charge, and
spin are viewed as being characteristic of the particle or singularity only. However, de Broglie
met with difficulty in trying to formulate a mathematical description of his model particularly
with regards to the structure of the singularity and its synergism with the pilot waves. Also, he
could not find a reasonable explanation as to why the wave mechanics of Erwin Schrödinger had
proven itself to be so successful by considering only continuous solutions to wave equations, the
so-called ψ waves, and why it could ignore singularities.
Erwin Schrödinger, critically opposed to the probabilistic interpretation, favored a description
which denied the existence of the wave-particle dualism. He believed that waves alone have a
physical significance, while the propagation of waves could occasionally give rise to corpuscular
appearances, but that these would be appearances only. At first, Schrödinger wanted to compare
the corpuscle to a small train of waves, but this interpretation could not be upheld, because a
wave train, in the manner he had defined it, would always have a tendency to expand rapidly and
continually into space and consequently could not properly represent particles of lasting
stability.
Albert Einstein tended to side with Schrödinger in criticizing probability theory. He raised
the following objection. Let a particle and its associated plane, monochromatic wave fall
normally on a screen pierced by a circular hole. The wave will be diffracted in passing through it
and will form a divergent spherical wave behind the screen. If a hemispherical photographic film
is placed behind the screen, the particle will reveal its presence at a particular point on this film
by making a photographic impression. But, in doing so, the probability of its passing through
any other point of the film becomes zero. Thus, it seems impossible to explain how a
photographic effect at a point P could prevent a simultaneous event at a point Q unless the
particle is actually localized in space.
However, according to the probabilistic interpretation, before the photographic impression is
made, the corpuscle is potentially present in all points of the region behind the screen with a
probability equal to the square of the amplitude of the ψ wave. The moment that the
photographic impression is produced at a particular point the probability of its presence at any
other point instantaneously vanishes. But, according to Einstein such an explanation would be
contradictory with all our ideas on space and time and with the restrictions that physical actions
are propagated through space at a finite velocity.
This turmoil over the probabilistic interpretation of the wave-particle dualism was one of the
major causes of the schism that occurred within the early twentieth century microphysics
community. There are many, however, who would tend to place the blame for this at the
foundation of microphysics, namely on the field theory approach, which has demonstrated an
inadequacy to properly integrate observable phenomena. As will be seen, the field theory
approach itself has been construed upon a dualism, the field-source dualism.
The field theory approach of modern microphysics may best be visualized as a skeleton in a
closet, the "remains" of the nineteenth century ether theory. At the time of Maxwell, it was
believed that the universe was filled with one or several inert ethers of infinite extent having
mechanical properties such as elasticity and compressibility. It was within this theoretical
framework that Maxwell conceived his equations of electromagnetism. It was postulated that
space contained a luminiferous ether, a continuous medium that acted as a mechanical carrier of
light and electromagnetic radiation; much the same way that a body of water acts as a carrier of
surface waves. The ether had also been conceived as a carrier of gravitation. According to this, if
a celestial mass were suddenly brought into existence, it would create a distortion in the ether
which would propagate outward in all directions. Upon reaching a neighboring celestial sphere
this ether distortion, or warp, would act upon this body forcing it towards the source sphere.
The gradual downfall of the mechanistic ether theories was brought about by the results of
the Michelson-Morley experiment, which were interpreted by many as an indication that the
velocity of light remained invariant with respect to any frame of reference. The theory of
relativity which emerged was incompatible with the concept of an ether with an absolute frame of
reference. Hence, the concept of an ether filled space became gradually abandoned and with it
went Maxwell's conceptual model of electromagnetic propagation. All that remained was a
truncated version of his original equations which, after Maxwell's death, became reformulated into
their present version in the 1880's by the mathematician/physicist Oliver Heaviside. What are
today called "Maxwell's equations" are but a mathematical skeleton of what formerly had been
Maxwell's theory. The original equations, which had intended to describe the electromagnetic
behavior of the ether, were now made to describe physical vector and tensor magnitudes existing
in an empty space without reference to any underlying medium. These interrelated magnitudes
were seen to compose a continuous, non-mechanical field that mathematically portrayed the
electric and magnetic state of each point in space. Thus, the field theory approach to
electromagnetism emerged as a "court-martialed" version of the ether theory, a mathematical
theory devoid of its conceptual model.
The field theory later became infected with the idea of particles existing as singularities. This
began with the idea introduced in the 1890's by Hendrik Lorentz of conceiving charged material
corpuscles or subatomic particles such as electrons and protons as being the sources of electric
fields. He envisioned these as being immersed in a luminiferous ether, yet distinct from that
ether. This idea seemed plausible, being closely linked to one's everyday experience of seeing
solid objects surrounded by gas or liquid media, yet distinct from those media. But, with Lorentz
this familiar concept became extrapolated to the microphysical level, where it introduced an
ether-particle dualism into microphysics. With the abandonment of the ether theory and the adoption
of the force field concept, this ether-particle dualism was transformed into a field-source dualism,
the source particle necessarily constituting a distinct charge singularity in the field; see figure 8.
Figure 8. An illustration of the source-field dualism.
Once it had become conventionalized, this field-source dualism became implanted into
modern microphysics where it has since led to much objection. One of the major critics of the
idea was Albert Einstein who noted the incompatibility of this concept with his theory of
relativity. In his article, "On the generalized theory of gravitation," he expressed the following
views as to the coexistence of fields and singularities:(26)
"The introduction of the field as an elementary concept gave rise to an inconsistency of the
theory as a whole. Maxwell's theory, although adequately describing the behavior of
electrically charged particles in their interaction with one another, does not explain the
behavior of electrical densities, i.e., it does not provide a theory of the particles themselves.
They must therefore be treated as mass points on the basis of the old theory. The
combination of the idea of a continuous field with that of material points discontinuous in
space appears inconsistent. A consistent field theory requires continuity of all elements of
the theory, not only in time but also in space, and in all points of space. Hence the material
particle has no place as a fundamental concept in a field theory. Thus, even apart from the
fact that gravitation is not included, Maxwell's electrodynamics cannot be considered a
complete theory."
Curiously enough, Floyd Allport expressed similar discomfort with the field theory approach
as it applied to psychology. His reference to the "inside-outside problem" dealt with this same
difficulty of representing singularities.(27)
"The inside-outside problem has been a stumbling block for physical field-theory as well as
for psychological. Maxwell dealt with it, in the same way Lewin did, by taking a small, but
still real, area within the field as a locus for determining the magnitude and direction of the
field-vectors. But what about the region within that small portion that was taken? This
region is not a part of the field itself; it represents only something that is acted upon by the
surrounding field forces. Does it have an inside field all its own? If it had, we should not
know what to do with it or how to integrate it with the field outside. Hence its status is quite
anomalous."
It is interesting to note that the event-structure theory which Allport proposed, which takes
an open systems approach, succeeded in circumventing this inside-outside problem and many
other related problems inherent in field theory. In a similar fashion, a reformulation of
microphysics along the lines of the open system, reaction-diffusion ether model proposed in the
present paper would resolve its current field-source dualism as well. Einstein, who objected to
the field-source dualism, in fact offered a solution very compatible with this metabolic ether
approach. He felt that it was incorrect to regard fields as being the externally generated
phenomena of material singularities. He believed to the contrary that matter and energy were
formed from fields themselves either as static or translationally dynamic field densities as the
case may be. He felt that fields in nature although continuous must always contain very small
regions in which the field values are extremely high. These he referred to as "bunched fields,"
which would correspond to the conventional notion of particles:(28)
"Since the theory of general relativity implies the representation of physical reality by a
continuous field, the concept of particles or material points cannot play a fundamental part,
nor can the concept of motion. The particle can only appear as a limited region in space in
which the field strength or the energy density are particularly high."
Also de Broglie quotes Einstein as saying:(29)
"A stone's throw is, from this point of view, a varying field in which states of maximum field
intensity are displaced through space with the velocity of the stone. The new physics will
not have to consider fields and matter; its only reality will be field."
But, in stating his generalized theory of gravitation, Einstein, like Lorentz, was forced to couple
his field equations with extraneous terms representing the field sources. For example, in his
equation of gravitation,
Rik = ½gikR - KTik ,
the first term on the right is expressed using field components, namely the metric tensor, gik, and
the Riemann-Christoffel tensor, R. However, the second term contains Tik, the energy-momentum
tensor, which is needed to represent the gravitational field sources, i.e. the
distribution of matter and energy that produces the curvature of space. He hoped that a unified
"field" theory based on a field of more complex nature would resolve this dualism or field-source
problem:(30)
"These differential equations completely replace the Newtonian theory of the motion of
celestial bodies provided the masses are represented as singularities of the field. In other
words, they contain the law of force as well as the law of motion while eliminating 'inertial
systems'.
The fact that the masses appear as singularities indicates that these masses themselves
cannot be explained by symmetrical gik fields, or 'gravitational fields'. Not even the fact that
only positive gravitating masses exist can be deduced from this theory. Evidently a complete
relativistic field theory must be based on a field of more complex nature, that is, a
generalization of the symmetrical tensor field."
The notion of regarding material particles as inhomogeneities of an underlying continuous
substance was not first proposed by Einstein. In the mid 1800's, Lord Kelvin proposed his
hydrodynamic vortex atom theory of matter, where a corpuscle was seen to consist of a
hydrodynamic vortex in the ether. This idea was suggested to him by Helmholtz's discovery of
the great stability of vortex motions such as smoke rings. The atom vortices were considered to
be non-dissipative, the ether being assumed frictionless. Hence Kelvin's theory of matter could
not be considered an open system theory, although his model, the smoke ring vortex, in fact, is an
example of a hydrodynamical open system.
Another attempt to describe matter as an inhomogeneity in an underlying continuum was
made by Abraham after the inception of the field theory approach. In an attempt to resolve
Lorentz's field-source dualism, he attempted a pure field theory description of matter in which he
assumed the electron to be a rigid structure whose mass was fundamentally an electromagnetic
manifestation. He showed that the entire mass of an electron in motion could be built up from
field magnitudes, although he could not account for its rest mass. Others objected to his theory
on the grounds that the electric charges composing the electron, being of the same sign, should
repel one another, causing the electron to explode. They argued that to account for the stability
of an electron, it would be necessary to postulate some extraneous force of non-electromagnetic
origin to prevent its charges from escaping.
The Need for Higher Dimensions
But, basically, the nineteenth century ether theories and the contemporary field theories have
failed to unify microphysics because they have all been closed system theories. They all contain
the underlying assumption that all observable physical phenomena are produced by agents such
as fields and corpuscles all residing within a four-dimensional, space-time universe. One's
immediate reaction to this is why not. It sounds like a reasonable assumption. Besides, how can
phenomena within the universe be produced by agents outside the universe? The universe is
infinite; so how can there be anything "outside" the universe? Perhaps we had better discuss this
question because it is central to the concept of an open system theory.
The open systems approach recognizes that all observable phenomena cannot be entirely
attributed to agents within the universe, but that there must be an outside environment which
plays an active role, for example, see figure 9. The line of infinite length pictured here is a
graphical representation of the fifth dimension, and the point, U, on this line represents our
observable, four dimensional, space-time universe.
Now it becomes easy to see that any points on our fifth dimension line not coincident with
point U would necessarily lie "outside of the universe", in the universe's higher dimensional
environment. To avoid confusion, suppose we call the entire fifth dimension line the "Cosmos",
with the understanding that the Cosmos encompasses all five dimensions. We distinguish this
from the physical "universe," whose extent is defined by only four dimensions (space-time) and
which appears as a point on our fifth dimension line. So, the universe is seen to be contained
within the Cosmos.

Figure 9. Our perceived physical universe depicted as a point
residing along a higher fifth dimension continuum.

Figure 10. Relationship of the observable universe U
to the higher dimensional Cosmos C.
This may also be represented with the aid of set theory; see the Venn diagram in figure 10.
Set U, a four-dimensional space, represents the universe, the set of all physically observable
phenomena. Set C, a five-dimensional space, represents the Cosmos. This includes all
physically observable phenomena of set U and innumerable other non-overlapping sets, U', U",...
Set U is considered a subspace of C, and set C is considered the hyperspace of U.
A fifth dimension is a necessary requirement for an open system microphysical theory, but
not a sufficient requirement. Indeed, several theoreticians such as Einstein and Kaluza have
employed a fifth dimension in their calculations. Einstein, a believer in rigorous causality, hoped
to devise a unified field theory in accordance with the principles of relativity that would
encompass the description of both electromagnetic and gravitational phenomena. He believed
that such a synthesis could not be brought about in four-dimensional space-time using
combinations of the known fields, but that a field of a more complex nature, of higher
dimensionality, would need to be postulated. As did Kaluza, Einstein found it necessary to
introduce field magnitudes that were thought to be possible only in a five-dimensional world. He
felt that the rationality of postulating a fifth dimension would necessarily be judged on
theoretical grounds, rather than experimental; i.e., on the resulting degree of theoretical coherence
and simplicity:(31)
"It must be conceded that a theory has an important advantage if its basic concepts and
fundamental hypotheses are 'close to experience', and greater confidence in such a theory is
certainly justified. There is less danger of going completely astray, particularly since it takes
so much less time and effort to disprove such theories by experience. Yet more and more, as
the depth of our knowledge increases, we must give up this advantage in our quest for logical
simplicity and uniformity in the foundations of physical theory."
Although Einstein always referred to "fields" in his nomenclature, by a stretch of the
imagination we see that he was proposing what might be described as five-dimensional
mathematical media, continuous in both space and time, whose existence could not be attributed
to material singularities, but on the contrary, whose interactions produced all observable physical
phenomena. While Einstein's attempt at unification did not offer an open systems approach,
some of its precepts seem strikingly similar to those which will be developed shortly.
The Primal Flow
The open system description of microphysics which we will now examine is fundamentally
based upon the dynamics of a five dimensional medium. Our physical universe may be depicted
as lying within this medium as was seen in figure 9. Let us freeze time for a moment, our fourth
dimension, and try to visualize how this hyperdimensional medium might appear to us 3D
creatures. First imagine "space," a void of infinite extent. Now, imagine space to be filled with an
infinite number of invisible media, each medium being continuous and present throughout all of
space. Here we fragment our 5-dimensional medium into an infinite number of 4-dimensional
media since we as physical beings are able to best conceive only one infinite medium at a time.
Now, let us take a "trip" with our mind and imagine that we perceive one of these media after the
other in succession. Suppose that each successive medium which we perceive has a higher
valence or ranking than the previous one. Proceeding in this manner through a succession of
conceptualizations, we have journeyed into the fifth dimension or into the "medium dimension".
What we have just accomplished can be more easily seen when we consider how 2D planar
creatures might visualize a cube. By mentally placing their planar world such that it intersects the
cube at right angles, they may examine thoroughly this planar section of the cube and discover
that it appears to them as a square. Now by taking a trip with their minds and imagining their
planar world to move in successive increments through the cube, they may examine its entirety
finding that the square intersection vanishes at the boundaries of the cube. This is similar to the
exercise which we have attempted above in our visualization of the fifth dimension.
Let us now leave our accustomed 4-dimensional frame of reference and view this succession
of media graphically. Figure 11 depicts the medium dimension as a partitioned dimension line
where each increment represents a particular 4-dimensional medium. Each increment or medium
has an associated reactive potential, or "energy level," and these are arranged in monotonically
decreasing value along the dimension line. Going from left to right, media of higher reaction
potential transmute successively into media of lower reaction potential. Taken together, these
indicate the medium flux, or "primal flow".
This process may be clearly illustrated by the use of a hydrodynamic analogy, see figure 12.
Here we have a series of tanks where water drains from one to another in succession. Each tank
represents a particular medium, and has a nozzle opening ki proportional to the medium's
transmutation constant. The tanks are shaped such that the rate of flow from each tank is
proportional to the amount of water it contains. At equilibrium, where the flow rates are all
equal, it will be seen that the tanks with the smallest nozzles will contain the greatest quantity of
water, or analogously, the greatest media concentration.
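The analogy may be stated quantitatively. If each tank drains at a rate proportional to its content,
ci, with a proportionality constant ki fixed by its nozzle, then at dynamic equilibrium every tank
passes the same flux, k1c1 = k2c2 = ... = kici = φ. Hence ci = φ/ki: the tank, or medium, with the
smallest transmutation constant holds the largest standing concentration, in agreement with the
statement above.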
The medium dimension may be assumed to be infinite in extent and its unidirectional primal
flow may be considered to be continuous and unending. If point U in figure 11 represents our
physical universe, it is seen that the primal flow enters and leaves the universe as it flows in the
fifth dimension. The open systems approach to microphysics considers that all physical
phenomena within the universe, such as radiant energy and matter, are maintained by the primal
flow. If the flow were to cease somewhere "upstream" on the medium dimension, this event
would eventually reach our universe. With no driving flow to maintain it, our universe would
dissolve and only a timeless void would remain.

Figure 11. Illustration of the primal flow along the medium dimension.

Figure 12. Hydrodynamic analogy of the medium transmutation
taking place along the medium dimension.
Matter and energy may be viewed as metabolic structures whose forms (inhomogeneities in
the hyperdimensional substance) are continuously regenerated by the medium flow. Radiant
energy is an example of a space-time dependent structure whereas matter demonstrates only
space dependence. These dissipative structures exist only in the presence of an energic flux. We
may say that the real essence of radiant energy, matter, and even the essence of motion and
evolution in our universe is basically derived from the primal flow. Physical phenomena are like
eddies in a river, inhomogeneities in this underlying dynamic reality, a reality which remains
invisible to us. The warehouse model is useful for picturing this metabolic concept. The pieces
of furniture being moved about may represent the hyperdimensional component flow or primal
flow, and the furniture groupings which are formed may represent the physically perceived
patternings such as radiant energy and matter.
These processes can also be understood with the help of the chemical reaction model. The
Brusselator reaction scheme represented by equations (4), it may be recalled, was viewed as
having a "reactant dimension". As is seen in figure 13, the reactants may be ranked in order of
their molecular energy level. Each chemical species or coordinate composing the reactant
dimension has three spatial dimensions and a time dimension, as in our cosmic medium model,
and the reactant dimension represents the fifth dimension. The more energic chemicals A and B
interact in a parallel manner to form chemicals X and Y and finally reduce to form chemicals D
and Ω. The dependent variables X and Y, which form the time and space ordered structures of
the system, can be taken together as constituting a set U, or "universe", of the reaction system.
In this example the reactant dimension has a finite length of three species.

Figure 13. Mapping out the Brusselator reactants
along a hypothetical reactant dimension.
The Belousov-Zhabotinskii reaction system, represented by equations (5), would have a
somewhat longer reactant dimension and would be somewhat richer in terms of parallel reactions.
Its universe might involve four or more principal time-dependent species; i.e. Ce3+, Ce4+,
HBrO2, and Br-. The number of parallel reactions may be viewed as a sixth dimension, but it is
not necessary to introduce this complication. However, it is apparent that for the reaction system
variables to be able to form regular spatial or temporal patterns, the reaction system must have at
least two parallel reaction paths along the reaction dimension that intersect and interact with one
another.
Building on the basis of the chemical reaction model as represented in figure 13, we may
hypothesize a medium interaction scheme as shown in figure 14.
The physical universe is depicted here as being composed from a set of three interacting
media: G, X, and Y, the dependent variables of the universe. The source media, A and B, and the
sink media D and Ω remain homogeneous in time and space. Also, as before, we might picture an
infinite ranking of media extending to the left and right on this dimension. The set of kinetic
equations presented in reaction system (6) describes this medium interaction.* Note the
similarity with reaction scheme (4).
Figure 14. Media reaction sequence proposed as a generator of a physical universe U
whose states are shown mapped along a hypothetical reaction dimension.
_________________
* This reaction scheme is undoubtedly oversimplified as a kinematic representation of the media
dynamics involved, but it will serve its purpose well as an organizing vehicle of the thoughts to be
presented here.
[Update comment: This reaction scheme constitutes the earliest presentation of what later I termed
"Model G". This version involved the same set of reaction species as those adopted later, but with
the exception that all reactions were depicted as forward reactions. It was not until 1978 that I
realized the importance of the reverse reaction X ← G which allowed Model G to generate an
autonomous, localized, dissipative soliton having particle-like properties.]
A → G (a)
G → X (b)
(6) 2X + Y → 3X (c)
B + X → Y + D (d)
X → Ω (e)
The overall irreversible reaction proceeds as: A, B → G, X, Y → D, Ω. The G, X, and Y
media are here cast in the roles normally reserved for the gravitational, electric and magnetic fields
of contemporary field theory. One important difference, however, is that the media proposed
here always have positive values or positive concentrations even during periodic oscillation,
unlike conventional field magnitudes which may take negative values as well as positive.*
Figure 15 shows the interaction of the G, X, and Y media in greater detail. The reaction
proceeds as follows: Medium A converts to G which converts to X and finally to Ω.
Simultaneously, medium B combines with X to form Y and D (reaction d). In turn, Y combines
with X to autocatalytically produce more X, (reaction c).
Figure 15. A medium reaction scheme proposed
as a generator of the physical universe.
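For readers who wish to see how reaction scheme (6) behaves numerically, the following minimal
sketch integrates its mass-action rate equations for a well-stirred, spatially homogeneous system.
The rate constants, source concentrations, and time step are purely illustrative choices, not values
given in this manuscript; with them the system relaxes to its homogeneous steady state, while
raising B sufficiently drives the X-Y pair into the oscillatory regime discussed in the text.

    # Minimal numerical sketch of reaction scheme (6): A -> G -> X, 2X + Y -> 3X,
    # B + X -> Y + D, X -> Omega.  Mass-action kinetics, well-stirred (homogeneous) system.
    ka = kb = kc = kd = ke = 1.0      # illustrative rate constants for steps (a)-(e)
    A, B = 1.0, 1.0                   # source media held constant (illustrative values)
    G, X, Y = 0.0, 0.5, 0.5           # arbitrary initial concentrations
    dt, steps = 0.001, 100000         # simple forward-Euler time stepping

    for _ in range(steps):
        dG = ka*A - kb*G                          # (a) feeds G, (b) drains it
        dX = kb*G + kc*X*X*Y - kd*B*X - ke*X      # (b) feeds X; (c) autocatalysis; (d), (e) drain it
        dY = kd*B*X - kc*X*X*Y                    # (d) feeds Y, (c) consumes it
        G, X, Y = G + dG*dt, X + dX*dt, Y + dY*dt

    print(G, X, Y)                                # relaxes toward the homogeneous steady state here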
The Created Universe
When the primal flow is below a critical threshold, the reaction system is in the steady state.
The media kinetics proceed in a linear manner and all media concentrations remain time invariant
and homogeneous throughout space. This must have been the status quo before the creation of
our universe, before the appearance of physical structure. But, suppose that some time in the
past the G medium developed an inhomogeneity and that in a local region of space its
concentration became low enough that the critical reaction threshold was exceeded. As a
consequence, the concentration of X would have decreased to the point where the autocatalytic
X-Y loop would have become nonlinear. Temporal fluctuations in the steady state
concentrations of these media would have become amplified rather than damped and a new stable
inhomogeneous state would have been reached in the concentrations of X and Y. A universe was
born.** Two types of structures were eventually formed, radiant energy and matter. Let us
analyze these structures and their interactions using the chemical kinetics analogy. We will begin
with the photon.
__________________
* [Update comment] In the original manuscript I had used the symbols E and B, instead of X and Y
and had identified these with electric and magnetic field intensities. Later I realized that both
variables depicted the electric field potential and labeled them instead as X and Y. At that time I
realized that the magnetic field was not itself a real entity but only a manifestation of a moving
electric field; i.e., of vortical movement of the X and Y media.
Diffraction experiments indicate that photons do not have significant lateral interaction with
their environment past a distance of 1λ. We may analyze this as follows. When a given spatial
location is crossed by a photon, a disturbance in the G, X, and Y media will be present for a time
1/ν. During this time these media inhomogeneities would propagate a lateral distance 1λ, after
which the photon would have passed by, and the surrounding media would then revert back to
the homogeneous steady state.
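As a consistency check on these figures: since c = λν, a disturbance that persists at a point for a
time 1/ν and propagates at the speed c spreads laterally a distance c(1/ν) = λ, which agrees with
the 1λ interaction range quoted above.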
It has been observed that the light from distant stars is bent when passing near the Sun.
Einstein explained this by saying that space was warped in the vicinity of massive bodies. Here
we offer a novel approach to this phenomenon. Suppose that, due to the large mass of the Sun,
in its vicinity there is a strong gradient in the gravitational medium. Imagine that the photon,
which has a spatial extent of 1λ, passes through this gradient. The frequency of its X, Y cycle
would be increased preferentially on the side toward the sun where the G medium concentration
is greater. Thus, the structure of the photon is subjected to a stress, its oscillating X, Y media
being subject to a frequency gradient.
It could be said that the photon behaves in a homeostatic manner. According to Le Chatelier's
Principle "if by any action (e.g., change in concentration) a shift in the equilibrium state is
produced, the nature of this shift is such that the initial action is reduced in magnitude". So, to
maintain the spatial integrity of its quantized medium structure, the direction of catalysis of the
photon is continuously altered and its path of propagation turns into the gradient (in the
direction of lower G concentration).
__________________
** [Update comment] In the original version of this manuscript, I had proposed that this emergent
inhomogeneous state was periodic in both space and time producing a limit cycle oscillation. I had
suggested the possibility that this had emerged as an outward expanding spherical wave that may have
nucleated at a particular point in space as in the fireball theory of creation, or that this disturbance
may have been randomly precipitated throughout space. But in later years seeing that subquantum
kinetics predicted a tired-light effect obviating spatial expansion, I came to reject the standard big
bang cosmology. Also I adopted the position that media substrate fluctuations instead led to the
formation of inhomogeneous steady-state dissipative structures, i.e., to the nucleation of material
particles.
In the original version of this manuscript I had conceived the photon as a space-time dependent
structure, in some ways resembling the chemical structure classified in region III of the phase diagram
presented in figure 5. I had imagined the concentrations of its two coupled species, X and Y (or E
and B), as mutually oscillating 180° out of phase in a limit cycle fashion, its frequency, ν, amplitude
a, and wavelength λ all remaining time invariant. However, I realized that such a representation had
several shortcomings, one being that the chemical model did not predict the corpuscular nature of the
photon, the fact that the photon travels in a linear manner. For example, I noted that the
propagating ring pattern of the Zhabotinskii reaction is observed to spread out as an expanding wave.
Since the media concentrations of the wave remain undiminished as the wave expands, the quantity
of temporal structuring would increase with time. On the other hand, a photon is observed to
propagate linearly, not concentrically. Its medium structure remains spatially localized as it travels,
a quantized packet of unit disturbance. I theorized the possibility of a G medium inhomogeneity
packet of given magnitude for some reason diffusing or propagating in a linear manner with a
velocity c uniquely determined by the reaction kinetics and diffusion coefficients. Then I imagined
that this propagating G medium inhomogeneity was somehow accompanied by a periodic limit cycle
disturbance in the X and Y media, the magnitude of this disturbance and consequently the frequency
of its oscillation being somehow dependent upon the magnitude of the G medium inhomogeneity. In
this manner, I imagined that higher frequency photons, or higher energy photons, would be
associated with greater gravitational media concentrations. All of this was rather vague speculation
and one that I abandoned in later versions of subquantum kinetics.
This phenomenon may also be viewed in a manner similar to
that suggested by Kopell and Howard for viewing the spatial band patterns of the Belousov-
Zhabotinskii reaction. By assuming that the gravitational medium gradient laterally decouples the
X, Y oscillation, then the turning of the photon may be viewed as a migration of the disturbance
pattern as it maintains phase coherence.
Now let us examine the time-independent space order patterns of the interacting G, X, and Y
media; i.e. subatomic particles. The transition from the time-dependent state, radiant energy, to
the time-independent state, matter, is observed in the phenomenon of pair production. This may
occur when a photon having an energy greater than 1.02 MeV passes in the neighborhood of a
massive nucleus. Two subatomic particles of opposite charge and equal mass are produced, the
electron and the positron. If these particles collide with one another, they will become
annihilated; the matter state then converting back to the radiant energy state.
Let us analyze pair production from the chemical kinetic viewpoint. Suppose that when the
photon is in the vicinity of a nucleus it encounters a strong G medium gradient. This induces the
formation of a time-independent, spatially ordered structure having alternate shells of X and Y
media concentrations similar to the vertical band patterns observed in the Belousov-Zhabotinskii
reaction except that the spatial patterns produced here have a spherical geometry.
Figure 16 depicts the layered structure of the electron and positron, the light areas
representing a predominant concentration of the X medium and the dark areas representing a
predominant concentration of the Y medium. Note that the pattern sequence of the one is
reversed in the other; hence they are complements of one another. Let us hypothesize that the
electron has a core of Y medium surrounded by a shell of X medium, and that its antiparticle has
the opposite configuration, with an X medium core and a Y medium shell.

Figure 16. Depiction of the electron and positron as localized dissipative space structures.
Each shell, let us suppose, maintains the medium inhomogeneities in the shells lying
immediately within it and without it. For example, if a shell has a high X medium concentration
(and therefore a low Y medium concentration), it will tend to catalyze more Y medium in the
shells above and below it as its reaction diffuses outward and inward. So, taken together, with
each shell maintaining its neighboring shells in a steady state manner, the whole pattern remains
time invariant.
At the instant that these particles are formed, their spatial patterns would propagate outward
at the speed of light. It is as if the original photon which had formed them were still traveling at
the frontiers of the space structure patterns. Since the photon is quantized, the amount of
medium disturbance it creates in traveling outward remains constant. Hence, each subsequent
shell that is formed has medium concentrations that are lower than the previous. Therefore, the
medium concentrations fall off as the inverse square of the distance from the center of the
structure. The G medium attenuates continuously with distance, whereas, the X and Y media
attenuate with a superimposed periodicity, the shell pattern.
[Update comment] Actually, my suggestion here was incorrect. With an assumed inverse square
decrease, the amount of "action" within a given shell (the shell's summed deviation from the
steady state) would have remained constant in shells situated at successively greater distances
from the particle's center, hence the total action for the particle as a whole would have tended
toward infinity as radius increased. So action in a given shell must necessarily decline rapidly to
avoid this infinity problem. Thus the idea of a photon traveling radially outward was misguided.
It is the particle's nuclear electric field pattern that propagates radially outward as a spherical
wave. In later papers on subquantum kinetics, I had predicted that the X-Y pattern at the core of
a subatomic particle should fall off much faster than inverse square, hence allowing the shell
pattern's integrated action, its sum total deviation from the steady state concentrations, to
converge to a finite number at large radial distances. This predicted rapid power law decline was
later confirmed in computer simulations performed on Model G.
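The divergence referred to here can be seen with a short estimate. If the deviation amplitude falls
off as 1/r², while the volume of a spherical shell of thickness Δr grows as 4πr²Δr, each shell
contributes roughly (1/r²)(4πr²Δr) = 4πΔr of integrated deviation, a fixed amount per shell
regardless of radius. Summed out to a radius R, the total grows in proportion to R and so
diverges; only a decline steeper than 1/r² allows the integrated action to converge, as noted above.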
Each set of X, Y shells composing the electron's structure has a wavelength or spacing of λ.
Thus, the electron space structure should be expected to diffract from a grating in the same
manner as a photon. Its wave characteristics, therefore, are not a result of an associated wave as
suggested by de Broglie, but are due to the periodicity of the medium densities composing its
structure.
In an inertial frame of reference in which the electron is at rest, its space structure would have
a spherical geometry and its shell spacing would be uniquely determined as λc. However, if the
electron were traveling at a constant velocity, v, with respect to an observer's frame of reference,
it would be expected to exhibit a shorter wavelength when approaching the observer and a longer
wavelength when leaving the observer, a sort of Doppler effect. Also, its space order structure
would appear to be compressed together in front and expanded in back relative to its direction of
travel.
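An illustrative way to quantify this distortion, by analogy with the ordinary Doppler effect and to
first order in v/c, would be to take the shell spacing as compressed to roughly λc(1 - v/c) on the
leading side and stretched to roughly λc(1 + v/c) on the trailing side; the manuscript itself does
not specify the exact form of this dependence.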
Let us now investigate the accelerated motion of a particle where its velocity changes with
time. This being a complex phenomenon, it will be necessary to examine it in an incremental
fashion for ease of comprehension, much like the method employed in the elementary treatment
of mathematical integration. So, although acceleration may indeed be continuous, we will imagine
that it occurs in jumps or quantum transitions.
Space ordered structures such as the electron are spatially uniquely determined independent
of time, and therefore, may be supposed to exist only in inertial frames of reference, i.e., ones
that are time invariant or not accelerated. By this criterion, an electron may travel through space
at any arbitrary velocity, but it may never change its direction or speed without destroying its
structure, for its structure is a time invariant phenomenon. To undergo accelerated motion the
electron must jump from one inertial frame to another, each time increasing its velocity, each time
creating a new space structure and allowing its former structure to dissolve.
When the former space order structure dissolves, it returns to the time-dependent state and
radiates out as a photon. This may be seen to occur in the following way. Suppose an electron
is at rest at location x0 in reference frame α0 and that at time t0 it jumps to reference frame α1
having a relative velocity, v, with respect to α0. In making this jump, it has not changed its
spatial location. However at time t1, it will have been displaced a distance Δx = x1 - x0 = v(t1 -
t0) from its position at t0. Also, at time t1 its α1 structure would have radiated out a distance of
d = c(t1 - t0) and its former α0 structure would have receded radially by an equal amount. Both
wave fronts at this time would be out of register in the direction of v by an amount Δx = v(t1 -
t0). Suppose that these two space structures must be out of register by a critical distance, λ/2
before the α0 structure converts to the time-dependent state and that this spacing is reached at
time t1. At this time the spatial dislocation would have reached a shell in the α0 structure at a
distance d from its center, whereupon this shell would convert into the radiant energy state
forming a photon whose direction of propagation would be perpendicular to the axis of
dislocation. If the acceleration is great, i.e., if the quantum jumps are great, the critical dislocation
will be reached sooner in the inner lying shells, and the radiation will necessarily be short wave.
With slower quantum jumps the more outlying shells will become unstable resulting in long wave
radiation. The phenomenon which we have just described, whereby charged particles undergoing
acceleration radiate photons, is known as bremsstrahlung radiation.
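The scaling implicit in this argument can be made explicit. The critical offset λ/2 is reached after
a time t1 - t0 = λ/2v, by which time the dislocation has spread to a radius d = c(t1 - t0) = cλ/2v.
A large velocity jump thus gives a small d, destabilizing the inner-lying shells and yielding
short-wave radiation, while a small jump gives a large d and long-wave radiation, as stated above.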
[Update comment] Clearly, the understanding of particle acceleration given in the above
paragraph needs further development. In later versions of the subquantum kinetics theory (and
the version that was published in 1985), I identify particle charge with either a positive or
negative potential biasing of the X, Y space structure pattern relative to the ambient
homogeneous steady state X and Y concentrations. So the above discussion of an accelerated
space structure producing bremsstrahlung radiation as a result of dissolution of its previous space
structure configuration should apply only to charged space structures, hence those having X, Y
space structure patterns biased away from the homogeneous steady state. To be realistic, this
approach must show that neutral particles, such as the neutron, whose space structure has no such
positive or negative potential bias, would produce no photon radiation upon acceleration.
Another phenomenon which may be discussed here is inertia, or the tendency for massive
bodies to resist acceleration. We have said that when a particle accelerates, it must recreate its
structure with each quantum jump. Thus, particles of greater mass, i.e., ones having greater G, X,
and Y media concentrations, must organize greater quantities of media when making quantum
jumps. Therefore, compared to less massive particles, such particles require a greater amount of
time to carry out the same action.
Let us now examine how a particle's acceleration is related to medium gradients in its
environment. Imagine that an electron, particle P, is subject to a negative G medium gradient due
to a nearby space structure, particle Q. Viewing this from Q's frame of reference, we would see P
approaching Q at a velocity, v. At a given instant t0, we observe that Q's G medium gradient
induces an X concentration gradient across particle P, causing P to have lower concentrations of
both X and Y media, and consequently a lower X-Y cycle flux, on the side toward Q where the G
medium is less concentrated. Consequently P's periodic space structure would have a shorter
wavelength on the side toward Q and a longer wavelength away from Q. But, this is exactly the
same as the "relativistic" rod contraction distortion we would expect to find when viewing P
traveling at a velocity v with respect to Q's frame. Let us take other observations of P at times
t1, t2, etc., at closer proximity to Q and consequently in steeper G medium gradients. We would
find that at each such time particle P would adopt a velocity toward Q such that the resulting rod
contraction distortion of P's space structure would match the distortion arising from Q's gravity
gradient, which compresses P's space structure on the side facing Q.
Thus, particle P's tendency to adapt to the changing G medium gradient may explain its
accelerated motion.
On the other hand, if P were in a circular orbit around Q, its accelerated motion, being
perpendicular to its direction of travel, would only serve to change P's direction of travel, and
hence, the direction of its gravitational spatial distortion with respect to Q. P's relative velocity
with respect to Q would not change since the G concentration would remain constant along P's
orbital circumference.
Where X and Y medium gradients are involved as in electrostatic attraction and repulsion,
there are two possibilities for accelerated motion. Choosing two complementary particles, such
as the electron and positron, we would observe an attractive acceleration. On the other hand, two
identical particles, such as two electrons, would undergo a repulsive acceleration. These two
aspects of electrostatic motion can be attributed to the structural sequencing of the X and Y
media in the particles under consideration. We would find that, on the average, locations in the
vicinity of an electron have a higher average Y medium concentration and lower average X
medium concentration. This is because the Y medium is present in a more concentrated state at
the electron's core, whereas the X medium in the surrounding shell has a much lower
concentration. This situation is just the reverse for the positron. So, the electron has a positive
net Y medium radial gradient in its space structure and the positron has a positive net X medium
radial gradient in its space structure.
Suppose an electron were subject to an X medium gradient, for example, a gradient originating
from a positron. Thus, there would be an elevated X medium concentration on the side of the
electron nearest the positron. The electron's space structure has a "need" for higher X
concentrations since it has an overconsumption of X in its core which causes a net low X
concentration there. So its appetite for X would be more satisfied by moving closer to the
positron and into its region of higher X concentration. This shift will be accompanied by a
transition in inertial frames with the result that the electron's space order structure is
foreshortened on the side toward the positron and elongated on the side away. The movement of
the electron may be visualized as the migration of a sand dune on a windy beach. The particles of
sand composing the dune on the windward side are eroded away while on the leeward side they
are accumulated. Thus, the dune migrates by a process of metabolization.
The same reasoning can be employed when considering the repulsion of two electrons. In
this case we may view one electron as being perturbed by the other's X medium gradient. That
is, the first electron will be subjected to a lower X concentration on the side facing the other
electron. Since this lower concentration reduces its ability to satisfy its overconsumption of X, it
will want to move away from its partner electron into a more favorable environment. A similar
but complementary argument could be made for the Y medium components of their space
structures.
The elastic collision of two electrons may be viewed as a situation in which the two particles
undergo spatial metabolization as their space order structures mutually adjust to each other's
presence. Let us view this collision from an inertial frame located at the center of mass of the
system. The two electron space structures will be assumed to approach each other at time t0
with a relative velocity, v. Both space structures are distorted such that they are compressed in
front and elongated behind with respect to their direction of travel. As they approach closer,
they will enter steeper X and Y medium gradients. Adapting to these environmental
disturbances, each electron will shift inertial frames, reducing its velocity with respect to the
other particle. Accompanying this deceleration will be the phenomenon of inertia and
bremsstrahlung radiation. At their closest approach both electrons will be in the observer's frame
of reference. The space structure of each electron will now be spherical with unique wavelength,
λ0. At later observations the electrons will be seen accelerating away from one another, this
acceleration diminishing in magnitude as the electrons become more separated. Their space order
structures again will appear distorted with respect to their direction of travel. When the electrons
have reached the distance of separation which they had at time t0, they will be observed to be
moving apart with a relative velocity less than v. This is due to the fact that a portion of their
media concentrations had become lost as bremsstrahlung radiation, and consequently, their
transition to new inertial frames was "nonconservative".
By visualizing the inelastic collision in the above example, we may come to understand why
matter can give the impression of being solid even though it is composed of diaphanous
substances. From this, we may see more clearly what Einstein meant when he spoke of a stone's
throw as "a varying field in which states of maximum field intensity are displaced through space
with the velocity of the stone."
There are many other subatomic particles of which we have not yet spoken, most of them
more massive than the electron or positron. Particles such as the proton, the neutron, and the
meson, are examples of other stable states existing as time independent space structures. Being
more massive, they involve media disturbances of greater concentration. For example, proton-
antiproton pairs may be created by the pair production process in the same manner as electron-
positron pairs; however, the incident photon must be 1840 times more energetic. The fact that
we observe only certain types of stable particles in nature means that only certain wavelengths
are allowed to exist in the time independent state. Upon investigation, these quantized media
states should serve as valuable clues to the nature of the media kinetics which have created our
universe.
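A brief numerical check, using modern rest-mass values: the electron and positron each have a
rest energy of about 0.511 MeV, so the pair-production threshold is 2 × 0.511 MeV ≈ 1.02 MeV,
the figure quoted earlier. Taking the proton to be roughly 1840 times more massive, a proton-
antiproton pair would require a photon of at least about 1840 × 1.02 MeV ≈ 1.9 GeV.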
[Update comment] In this paper I had proposed that a neutron may be a dissipative structure
whose core polarity is in oscillation, alternating between a high X concentration and a high Y
concentration as in the propagating ring pattern seen in the Belousov-Zhabotinskii reaction. I
had proposed that this would result in electrical neutrality since the neutron's X and Y media
gradients would be the same on the average. I later abandoned this oscillatory particle model of
the neutron when I realized that, as in the Brusselator, these space structures would undergo a
secondary bifurcation in which a charged state would appear in which their X and Y space
structure patterns were biased relative to the homogeneous steady-state (X positively and Y
negatively biased, or X negatively and Y positively biased). A particle in the neutral state, such
as a neutron, would simply be the particle before having undergone this secondary charge
bifurcation.
It is interesting to note that a particle's media concentration, or "mass," is independent of its
space order structure sequencing, or "charge." Hence, we observe particles of different masses
having charges of equal magnitude.
Conclusion
The open systems approach which has been sketched out here appears to fulfill the basic
requirements of the "unified field theory" which Einstein had envisioned. From an open systems
model of interacting media we were able to predict the existence of matter and radiant energy
states and the mechanism of their interchangeability. By analyzing the structure of the matter
state as a dissipative space structure, we were able to offer explanations for the existence of
charge, mass, and wavelength. By studying the dynamics of how space structures adapt to
medium gradients, we were able to predict gravitational attraction, electrostatic attraction and
repulsion, and understand their connection with the concepts of inertia, relativistic rod
contraction, and radiating charges. By this approach, the field-source problem and the wave-
particle dualism of field theory are eliminated and a framework is established in which relativity
theory may be made compatible with quantum mechanics. Also the open systems approach
offers an opportunity for physics to return to the classical determinism it once knew.
Even more significant, this approach revives microphysics from its inanimate, closed system
past, and brings it under the framework of general system theory as a "life science". Formerly,
efforts to unify physics and biology under a common theory had proven futile, like trying to mix
oil and water. One dilemma which was particularly puzzling was that in observing physical
systems one would conclude that entropy increases, whereas in biological systems entropy
decreases. However, the advent of open systems microphysics undermines the tenet that positive
entropy is the law of the universe, and this age-old dilemma becomes resolved.
For example, when gas molecules displaced to one end of a volume are left to expand and fill
the whole volume, classical thermodynamics tells us that they go toward a state of disorder, i.e.
entropy increases. Yet, when a plot of land is cleared in a jungle and left untended, it becomes
overgrown with vegetation. With the open systems analogy, we may view these two situations
as being essentially the same. The gas molecules, like the jungle plants, are negentropic, open
systems which behave in a homeostatic manner. In both cases, however, it appears that an
ordered placement has tended toward a disordered placement. We then realize that perhaps
positive entropy is merely a manifestation of the behavior of open systems, negentropic systems
seeking mutual equilibrium. Perhaps positive and negative entropy are just two sides of the same
coin.
Closed system concepts, such as the atom of Democritus and the subatomic building block
particles of modern theories, will have to be dispensed with. The new physics will be describable
on the basis of the warehouse concept and open system principles. Physical reality will become
regarded in a new light. It will become realized that the cosmos is dynamic; that existence is
dynamic. The primary principle or law of nature which operates is the evolution of ordered
systems, where dynamic events repeated in the same manner with great frequency give rise to the
appearance of structure. Structure will be viewed holistically rather than indeterministically.
Structure will be understood only within the context of its dynamic, sustaining environment. The
formation of the universe will become regarded not as a past event, but as an ongoing process.
References
1) Allport, F. H. Theories of Perception and the Concept of Structure (New York: Wiley, 1955).
2) LaViolette, P. A. "The predator-prey relationship and its appearance in stock market trend
fluctuations." General Systems 19 (1974):181-194
3) Glansdorff, P., and Prigogine, I. Thermodynamic Theory of Structure, Stability and
Fluctuations (New York, 1971), p. 224.
4) Glansdorff, P., p. 225.
5) Glansdorff, P., p. 230.
6) Glansdorff, P., p. 233.
7) Glansdorff, P., p. 236.
8) Glansdorff, P., p. 236.
9) Glansdorff, P., p. 241.
10) Glansdorff, P., p. 249.
11) Glansdorff, P., p. 258.
12) Glansdorff, P., p. 260.
13) Prigogine, I. "Thermodynamics of Evolution," Physics Today, Nov. 1972, p. 26.
14) Prigogine, I., p. 26.
15) Field, R. J. "Oscillations in chemical systems. II." Journal of the American Chemical
Society, 94 (Dec. 13, 1972), p. 8657.
16) Field, R. J., p. 8649.
17) Glansdorff, P., p. 262.
18) Winfree, A. "Scroll-shaped waves of chemical activity in three dimensions," Science, 181
(Sept. 1973), p. 937.
19) Winfree, A. "Spiral waves of chemical activity," Science, 175 (Feb., 1972), p. 634.
20) Field, R. J., and Noyes, R. M. "Explanation of spatial band propagation in the Belousov
reaction," Nature, 237 (June 16, 1972), p. 391.
21) Kopell, N., and Howard, L.N. "Horizontal bands in the Belousov reaction," Science, 180,
(June, 1973), p. 1171.
22) Glansdorff, P., p. 263.
23) Thoenes, D. "'Spatial oscillations' in the Zhabotinskii reaction," Nature Physical Science,
243, (May 14, 1973), p. 18.
24) De Broglie, L., New Perspectives in Physics (New York, 1962), p. 108.
25) De Broglie, L., The Revolution in Physics, (New York, 1953), p. 216.
26) Einstein, A., "On the generalized theory of gravitation," Scientific American, 182, (April,
1950), p. 14.
27) Allport, F., 1955, p. 159.
28) Einstein, A., p. 15.
29) De Broglie, L., 1962, p. 144.
30) Einstein, A., p. 16.
31) Einstein, A., p. 15.