I have a vivid memory from my first or second year of college when discussing mathematics with an older student. I don’t remember the specific topic, but at one point he pointed out to me that mathematical truths are only true relative to particular axioms and definitions that we have assumed, and that they are not true independently, in and of themselves. Moreover, the axioms are merely assumed to be true, and do not rest on any deeper foundation. This may not have been the first time I considered this, but in this particular instance the insight was especially profound for me. None of our mathematical theorems is true in any context-independent way. And yet, mathematics is the paradigm of certain knowledge. On the one hand, once we clearly specify a particular set of axioms and definitions, there is no ambiguity or doubt or uncertainty about what statements are and are not true relative to that context. (For example, no postmodern critique or skeptic can ever show that there is a largest prime in the natural numbers, or that the Pythagorean theorem is not true in Euclidean geometry.) On the other hand, in mathematics we are free to choose any axioms and definitions we like, and true statements will change accordingly. (Finite fields can have a largest prime, and the Pythagorean theorem is false in non-Euclidean geometry.) So there is no absolute, context-independent truth.

For me personally, this insight had implications far beyond mathematics. For example, in situations where I would find my own ideas in conflict with the ideas of someone else, instead of following the impulse to prove myself right and them wrong, or figure out who is “really” right, I would instead seek to understand in what sense each is true in its own context. How do our assumptions or definitions differ? Of course, there is always the possibility that we do share common assumptions and definitions but one of us is simply being inconsistent. But the effect of the insight was to give others the benefit of the doubt, to look at apparently conflicting positions as alternatives that can be true in their own context instead of as opposed to each other in a dogmatic battle for truth. Many years later, I wrote a playful parable about this insight here.

I think it is fair to say that this insight from mathematics has had for me a moral dimension, insofar as it has helped to support personally an attitude of openness and interest in superficially conflicting ideas and opposing viewpoints. This kind of openness is essentially a form of love. The golden rule would have us consider the perspectives of others not as opposed to our own but as another possibility to be understood on its own terms, on equal footing with our own.

The insight has also helped clarify my thinking about certain general issues of morality, such as the problem of moral relativity. On the one hand, the insight implies that moral principles are not absolute, context-independent truths. They are either assumptions, or based upon assumptions. And such moral relativity implies that, ultimately, there is no absolute moral foundation. This can raise the concern that everyone can then have their own personal morality and assert that there is no basis for saying their morality is any less valid than anyone else’s. But this is no more a concern than it is a concern that mathematics allows each person to choose their own definitions and axioms and develop their own mathematical theorems. They are free, as a matter of principle, to do so. In practice, however, if a mathematician wants to be a member of a community, they are obliged to use conventional definitions and focus their research on areas of mathematics that are considered by the community to be relevant. For example, I’m free to make up my own idiosyncratic definitions for common mathematical concepts like “associative” and “commutative”, use non-standard notation instead of “+” to represent addition, or adopt different axioms for well-established mathematical objects like groups, rings, and fields. But I can’t do all of that and expect to have my work considered relevant by others. To be part of a community means to share common conventions, assumptions, terminology, notation, and so on. The same, I would submit, is true for morality. In order to live with others harmoniously, we need to share at least some basic moral principles. They need not be absolute to serve this function. Individuals whose morality deviates in significant ways from the society in which they live will have problems living within that society. An analogy that I find helpful to illustrate this is the US-Mexican border. On the one hand, its existence and location is not an objective truth. 
It is a relative truth, based upon an agreement between the governments of the US and Mexico. But its status as a relative truth does not make it subject to arbitrary whims of each individual. Quite to the contrary, if individuals ignore the established conventions (i.e., laws and regulations) relating to the border, they will suffer very real consequences. So, the point here is that the relativity of truth does not imply that “anything goes” or cause us to degenerate into anarchy. Mathematics does just fine with the relativity of truth. In fact, it sets a fine example of how to look at relative truth, including our morals in society: we should strive to make our assumptions, definitions, conventions, etc. as clear as possible so as to avoid confusion and conflict. And we should develop ways to arrive at consensus regarding standards that are adopted by each community or society, so that members of that community can work harmoniously together.

(The above is an excerpt from personal correspondence with Moral Math pioneer Sarah Voss, August 2018, written in response to her solicitation of my perspective on the topic. -TJM)

The seed of time and space are implicit in the first distinction. This then provides the framework for the possibility of different states in time and space, by means of subsequent distinctions. And entropy is indeed a formal or mathematical notion within that context.

Even in traditional classical statistical mechanics, entropy is defined mathematically as a comparison of the number of (unobservable) microstates of a system that give rise to the same (observable) macrostate. Note that this definition is based on the assumption that the actual state of a system is characterized by (unobservable) distinctions (associated with microstates) that have been ignored or conflated to yield the same (observable) macrostate. The key underlying distinction here is between observable and unobservable states of a system: if we could observe all microstates, there would be no meaning associated with entropy. Entropy arises when we describe a system by means of observable macrostates, and the entropy of a macrostate is a measure of how many unobservable microstates are conflated in the same macrostate.

Now, I would propose that this distinction between observable and unobservable states is also implicit in the first distinction: The first distinction distinguishes a present object, which is observable, from its absence, which is unobservable. However, to get entropy requires, as you propose, some additional structure, so that there is the possibility of counting numbers of unobservable states corresponding to a given observable state. The larger the number, the larger the entropy. And, indeed, in the beginning, there is just one observable and one unobservable state, so the universe begins in a low entropy state.

The possibility of increase in entropy arises only through the process of making additional distinctions that allow for a universe to have a larger variety of unobservable states. In addition, there also needs to be a subsequent process of ignoring or conflating those distinctions, so that the distinctions between these states can not be observed and the states are conflated into a single observable macrostate.

Entropy thus arises by a two-fold process of imagining distinctions and then conflating them. But one has not completely conflated them, since they remain part of the theoretical description of microstates. They just become empirically unobservable. This is essentially a kind of symmetry: the description of the system can be transformed between any of the microstates without changing the observable macrostate. The observable macrostate is thus an invariant of the symmetry transformation between microstates.

Now, to define the notion of entropy, we first need to define some more fundamental notions:

1. A set of multiple objects (e.g., a set of coins, or a set of logical statements), called a system.

2. A set of possible states for each object (e.g., heads/tails for each coin, or true/false for each statement).

3. From (1) and (2) above, we can define a set of possible states of the system (e.g., for a three-coin system the possible states are HHH, HHT, HTH, THH, THT, TTH, HTT, TTT, where H=heads and T=tails).

4. A set of possible generic states of the system, which is meant to describe a general property of the system (e.g., a generic state of the three-coin system is the total number of heads, which can be 0, 1, 2, or 3).

5. A function from the possible states of the system to the set of possible generic states of the system. Note that this function ignores or erases distinctions between possible states of the system to produce generic states (e.g., the state HHH maps to 3, the states HHT, HTH, THH all map to 2, the states THT, TTH, HTT all map to 1, and the state TTT maps to 0).

Now, with all that in place, given a particular generic state, we can count the number of possible states that correspond to it, and define entropy as a quantity proportional to this number. Thus, the generic state 3 corresponds to just one state HHH, while the generic state 2 corresponds to three states HHT, HTH, THH. When this number is large, it means that the generic state discards more distinctions (higher entropy), and when this number is smaller, it means that the generic state retains more distinctions (lower entropy).
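The counting procedure just described can be made concrete in a short Python sketch. The code and names here are illustrative, not from the original text; also note that the text takes entropy as simply proportional to the microstate count, while the conventional Boltzmann form uses the logarithm of that count, so the sketch computes both.

```python
from itertools import product
from math import log

# Enumerate all microstates of a three-coin system.
microstates = list(product("HT", repeat=3))  # 8 states: HHH, HHT, ...

# The generic-state (macrostate) function: total number of heads.
def macrostate(state):
    return state.count("H")

# Count how many microstates map to each macrostate (the multiplicity W).
multiplicity = {}
for s in microstates:
    m = macrostate(s)
    multiplicity[m] = multiplicity.get(m, 0) + 1

# Boltzmann-style entropy of each macrostate (in units where k = 1).
entropy = {m: log(w) for m, w in multiplicity.items()}

print(multiplicity)  # {3: 1, 2: 3, 1: 3, 0: 1}
```

The multiplicities 1, 3, 3, 1 match the counting in the text: the generic states 0 and 3 each conflate a single microstate (lowest entropy), while the generic states 1 and 2 each conflate three (higher entropy).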

All of the above can be translated or expressed directly in the language of distinction, because it involves just sets and elements and numbers. I’ve not done so here, though, because it would make the concepts less transparent to readers unfamiliar with that notation.

Following is an example expressed in terms of the Laws of Transformation. http://www.integralscience.org/lot.html

Let’s say we have distinguished three spaces, and each space can be marked with O or left blank, indicating that it has one of two states.

Then the possible states of the three-space system can be expressed as

{O,O,O}

{ ,O,O}

{O, ,O}

{O,O, }

{O, , }

{ ,O, }

{ , ,O}

{ , , }

To use the earlier terminology, these are microstates of the system. We then group these into generic states, as follows:

{O,O,O}

{ ,O,O}
{O, ,O}
{O,O, }

{O, , }
{ ,O, }
{ , ,O}

{ , , }

To use the earlier terminology, these are the macrostates of the system. We assign to each macrostate a number corresponding to the number of microstates in the group:

{O,O,O} -> O

{ ,O,O}
{O, ,O} -> OOO
{O,O, }

{O, , }
{ ,O, } -> OOO
{ , ,O}

{ , , } -> O

The entropy of a given macrostate (group of microstates) is proportional to the assigned number. The macrostates assigned O have the lowest entropy, while the macrostates assigned OOO have the highest entropy.

This example shows explicitly how the creation of distinctions results in a set of distinguished microstates of a system, and then through a process of ignoring some of the distinctions between the microstates, those distinctions become unobservable, and the system is seen only as having a smaller set of observable macrostates. Each macrostate corresponds to a number of microstates (and its assigned number is an invariant of a symmetry transformation between its microstates). The more microstates a given macrostate has, the more distinctions were ignored to conflate those microstates into a single macrostate, corresponding to a higher entropy state.

As mentioned earlier, the seed of time arises with the first distinction. However, this description of entropy suggests that the *arrow* of time (the increase in entropy that is associated with the asymmetry between past and future) is associated with *ignoring* distinctions that have been made. Specifically, when distinctions between microstates have been ignored in a non-uniform way, this favors the appearance of some macrostates over others, assuming that all microstates are equally probable.

But perhaps there is a greater similarity between mathematics and empirical science than this common view of their differences suggests. Indeed, the eminent mathematician Kurt Gödel viewed mathematics as analogous to empirical science in many ways. Gödel explicitly wrote of the mathematics-physics analogy in some of his more philosophical writings. A basic feature of his analogy is that, just as physical objects are accessible by physical senses, mathematical objects are accessible by mathematical intuition. According to Gödel, we explore and discover the world of mathematics much as we explore and discover the world of physics. Granted, for most people mathematical objects may not seem as clear and distinct as physical objects. But, just as someone who has very poor eyesight might not see physical objects very well, people who have not exercised their mathematical intuition will not see mathematical objects very well.

Following are some of my further elaborations upon this analogy. Indeed, as we will see, there is more than a mere analogy here: both math and physics may be seen as instances of more general principles of scientific investigation. We will begin, though, by looking at the similarities between math and physics as conventionally understood. This will then lead to some new insights as the discussion unfolds.

A science, whether mathematical or empirical, typically regards its objects of study as valid or justified constructs if they can be accessed by other researchers in a systematic way. Phenomena that can not be reproduced or are otherwise inaccessible to others, whether through the physical senses or through mathematical intuition, are not amenable to scientific investigation. Science brings order to its collective endeavor to understand experience by defining systematic ways to access such scientific objects. For example, in empirical science universal conventions are adopted for defining fundamental units of measurement, and calibration procedures are established for ensuring that measurements in different times and places are standardized. Measurement procedures are also specified, as well as conditions under which the measurements are performed. All this is done to make explicit the assumptions relative to which the measurement is made, not only to allow others to replicate the measurement but also to allow any influences of the measurement procedure to be clearly distinguished from the properties of the object being measured. Analogously, the properties of mathematical objects of study are “measured” relative to a specification of a set of definitions of basic terms, certain fundamental axioms, and accepted rules of inference. These establish a context relative to which the properties of the objects can be clearly defined and investigated in an objective manner. This context is analogous to the clear definition of measurement that allows the mathematician to access the object under study in a repeatable and reliable way. 
It is interesting to note as well that, just as a standard of measurement can not measure itself but provides the reference for the measurement of everything else, the fundamental notions and definitions in a mathematical context can not define or justify themselves but provide the reference for understanding everything else in the mathematical system.

In empirical science, the standards of measurement provide a framework for creating a coherent and reliable set of measurements in different times and places. Without such standards, the measurements performed following different procedures, using different units, and so on, could not be coherently related to each other. And only coherently related measurements can be correlated to each other and used as a basis for discerning patterns of order and abstracting general relationships. In short, such standards are what allow invariants to be identified amidst the various observations. Similarly, in mathematics, the specification of a system of definitions, axioms, and rules of inference is necessary to allow different properties to be coherently related to each other as a basis for building up more general lemmas and theorems about mathematical objects under study. In both cases, it is the establishment of a standardized context for empirical investigation that gives unambiguous meaning across time and space and investigator to the objects under study and their properties, thereby allowing more general, abstract, and subtle levels of order to be discovered.

It is generally thought that the empirical data of science is a fundamentally different kind of knowledge than the conceptual theories of science. In fact, however, empirical data is, like the theories, purely conceptual in nature. Empirical measurement results are by definition quantitative (i.e., mathematical) objects. Measurements are mathematical in nature by design and by necessity, for otherwise there would be no way to rigorously relate the measurements to mathematical theories. In other words, both the theories and the data in empirical science are of the same nature: mathematical. And, obviously, the general theorems as well as the specific properties of mathematical objects under study are both mathematical in nature as well. So in this respect, too, there is an analogy. In both cases, the data is not of a different kind from the theory, but merely consists of concepts of a narrower range of generality.

In empirical science, theory and measurement are related to each other as follows. First, as we have been discussing, particular properties of objects under investigation can be measured (within a well-defined measurement context) and represented mathematically. Second, the abstract theory of high degree of generality is particularized in accordance with the specific features of the measurement context to obtain a special instance of the theory. For example, Newton’s law of gravitation may be adapted to a small region on the surface of the earth to obtain a specific acceleration due to gravity of 9.8 m/s², which can then be mathematically compared to particular quantitative measurements of position and time of various objects in free fall, where the measurements are made using standardized procedures. Analogously, in mathematics, particular properties of mathematical objects under investigation can be “measured” relative to a standardized context defined by specifying a particular mathematical system (e.g., properties of a specific 3,4,5 right triangle can be investigated in the context of Euclidean geometry). These properties can then be compared to special cases, or instances, of general mathematical theorems (e.g., the Pythagorean theorem) to see if there is agreement between theory and experiment. If there is not agreement, a counter-example has been found that invalidates the theorem, and the error in the proof of the theorem must be found, and the theorem generalized or otherwise corrected. Similarly, if empirical data contradicts a scientific theory, the theory needs to be extended or adapted to accommodate the data. Of course, the other possibility in either case is that the data may be at fault. In any case, the inconsistency prompts the investigator to identify and clarify the source of the inconsistency and make appropriate adjustments to restore coherence between the theory and experimental data.
Yet another possibility is to alter the context (e.g., shift to non-Euclidean geometry).
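This parallel between the two kinds of comparison can be sketched in a few lines of Python. The numbers on the empirical side are hypothetical stand-ins for measurements, not real data:

```python
import math

# Empirical side: particularize Newton's gravity near Earth's surface
# (g ≈ 9.8 m/s^2) and compare with hypothetical free-fall measurements.
g = 9.8                                  # m/s^2, the particularized theory
def predicted_drop(t):                   # distance fallen from rest after t s
    return 0.5 * g * t**2

measurements = {1.0: 4.9, 2.0: 19.6}     # hypothetical: time (s) -> drop (m)
for t, measured in measurements.items():
    assert math.isclose(predicted_drop(t), measured, rel_tol=0.01)

# Mathematical side: "measure" a specific 3,4,5 triangle and compare with
# the corresponding instance of the Pythagorean theorem.
a, b, c = 3, 4, 5
assert a**2 + b**2 == c**2               # theory and "measurement" agree
```

In both halves, a general law is specialized to a particular context and checked against particular cases; a failed assertion would play the role of a disagreement between theory and data, prompting revision.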

As is well known, Gödel proved that, for a mathematical system of sufficient sophistication, there are truths of that system that can not be proved from the axioms of the system alone. In other words, even supposing all the basic definitions and axioms and rules of inference of a system are made totally rigorous and completely explicit, there are nevertheless some mathematical objects without definite properties relative to that system. The analog of this in empirical science would perhaps be that, even supposing measurement devices and procedures are fully defined and explicated and followed, there are nevertheless some physical objects without definite properties relative to that experimental arrangement. This has a striking similarity to the situation with quantum measurement. That, however, opens up a new topic deserving its own exploration at another time.

Another aspect of the analogy relates to the dependence of the properties of the object under investigation upon the tools used to measure or investigate it. In quantum theory, the properties of the object under investigation are only well-defined relative to the specification of a particular measurement procedure. (This is also the case in relativity theory, where properties such as length and mass only have definite values relative to a chosen reference frame of observation.) Similarly, properties of mathematical objects are not well-defined without reference to a system of definitions, axioms, and rules of inference. If such rules are changed, the properties (and to some degree perhaps even the object itself) may change as well. One manner in which both types of scientific investigation develop is to attempt to identify properties that are invariant under a certain class of reference systems/frames. Another goal of scientific investigation is to find the reference systems/frames from which the objects are revealed in the simplest, clearest, and most elegant manner.

Finally, one may also see an analogy in how the distinction is drawn between, on the one hand, the reference system/frame defining the observational perspective or methods of investigation, and, on the other hand, the object under investigation. This distinction between the measurement system and the measured system, between the observational frame and the observed objects, is to a large extent flexible in its placement. As illustrated by Schrödinger in his famous thought experiment with the cat, the system under investigation can be defined to include a detection apparatus and even an animal. In the case of quantum theory, the result is that the apparatus and animal evolve with the atom into a quantum superposition of states until the box is opened and the whole system observed. There is, however, no clear termination to this, resulting in an infinite regress. Similarly, in mathematics, one can treat a formal mathematical system itself as an object of meta-mathematical investigation, as Gödel did in his proof of his incompleteness theorem. One may then find a truth that is not provable by the system. This truth could then be adopted as a new axiom, creating an extended system of axioms. This new formal system can then itself be an object of meta-mathematical investigation, resulting in an infinite regress. Thus, neither measurement in physical science nor measurement in mathematical science succeeds in fully capturing the object of investigation. Any object, after all, is itself a construct of science, and is only a meaningful construct within the broader context of scientific investigation, which is a framework for viewing that, by design, reveals invariant patterns of order. Such an order (or “cosmos”) we call the world. We must remember, however, that the appearance of such a world arises in dependence upon the scientific framework, and does not have any independent existence. It is thus natural that we can never fully grasp any such object.

This relationship can be translated into more affective terminology as follows. First, we note that ponderability is equivalent to objectification: something is ponderable to the extent that it has been objectified as existing independently of, or outside, consciousness. Now such an objective world, Wolff explains, arises as the result of a blanking out of consciousness:

The apparently inert and lifeless matter comes to be viewed as merely a partially obscured Consciousness. Thus, if we regard a portion of an originally homogeneous Consciousness as partly blanked-out or neutralized by its own other, the result is some degree of relative unconsciousness. This relative unconsciousness is the objective world, or, in other words, the basis of the whole universe as experienced through the senses. …It may now be said that the universe is produced by a process which we may call a partial blinding, and that the reverse process, i.e., that of Awakening, destroys the universe to just the extent that the Awakening has proceeded. [Wolff, Pathways Through to Space, Ch. 61]

So, Awakening to Substantiality is inversely proportional to the extent to which this blinding or blanking out has taken place. Furthermore, this blanking out is a resistance or negation of the original Substantial Reality. As Wolff states in his Aphorism 36,

The Universe as experienced is the created negation that ever resists.

We can thus say that Awakening to Substantiality is inversely proportional to resistance. In other words, more resistance to the immediacy of Consciousness corresponds to more experience of the samsaric universe of delusion, i.e., suffering.

This formulation is strikingly similar to a principle formulated by Shinzen Young in Working with Physical Discomfort During Meditation:

SUFFERING = PAIN x RESISTANCE

This formulation, however, includes the coefficient of proportionality explicitly, namely, the intensity of the pain. Generalizing this to the intensity of any experience, we could translate back to modify Wolff’s aphorism to read Substantiality = 1 / (Intensity × Ponderability). In other words, objectifying something intense blanks out more consciousness than objectifying something not so intense, and there is more substance in objectifying something of low intensity than something of great intensity.

When the mind is quiet and clear, this imaginative overlay might be temporarily suspended. It is then possible to directly realize that there is fundamentally no train, no space, no time, and no sound “behind” the phenomenon. There is just the immediacy of “whooooo! whooooo!” that is in awareness and not separate from awareness. It’s not some unity that is the result of some complicated nonlocal quantum interconnection of all the various separate things, but an original direct simplicity prior to any mental fabrication.

Of course, we can’t always be in such a state of suspension of mental activity (nor would we want to be). But when these imagined constructs (such as time) do arise, we can still look at them and try to see their true nature. Alternatively, we can intentionally evoke these concepts in order to investigate and liberate them. Do space, time, trains, and sound objectively exist “out there”, or are they creative acts of imagination? This can be investigated by simply looking directly at the thoughts as phenomena and letting them self-liberate. If we’re not quite up for that yet, it can also be investigated by analyzing the concepts and the “things” they refer to, in order to see if there is anything objective about them. The intent of this analytical style of meditation is not to arrive at some conceptual conclusion but to see that there is no basis for considering them to be objectively real. It is a way to cultivate a non-conceptual insight into emptiness. This kind of analytical meditation is practiced in Buddhism, where it is considered complementary to calm abiding meditation. The two can actually support each other. (In the original Greek, the word “analysis” actually means to “free up,” or liberate.)

Nagarjuna, the founder of the Madhyamika school of Mahayana Buddhism, is famous for his dialectical deconstruction of time, space, causality, and other fundamental concepts that form the foundation of delusion. Nagarjuna provided a classic analysis of time in his Mulamadhyamikakarika. In these philosophical aphorisms, he examines time (among other things) and finds that it cannot be conceived of as an entity existing independently. He begins with the conventional division of time into past, present, and future. He then argues that none of these can be said to inherently exist: The present and the future either depend on the past or they do not. If they do, then they must in some sense already be implicitly contained in the past, in which case their distinction from the past does not make sense. If they do not, then there can be no relation or connection to the past, and it makes no sense to talk of them as linked phases of time. Instead, time must itself be regarded as a set of imaginary relations, and nothing in itself. With repeated practice in this sort of analysis, there can be deeper non-conceptual insights into the emptiness of time, and our experience of time can become more “transparent” to the emptiness out of which it arises.

Consider a child’s visual experience of a wooden block, i.e., a cube. As the child turns it around and looks at it from different perspectives, the visual appearance changes. The shape of the image changes depending on which side is facing the child. Never does the child see the entire block at once (one side is always hidden, facing away). Yet, the child learns to correlate all these different 2D visual images, abstracting from them a 3D object that persists. This object, however, is not itself experienced. It is conceptually constructed and posited as existing “behind” the variety of 2D visual appearances in experience. At the heart of this connection between the multitude of appearances and the invariant reality behind them is the principle of symmetry.

Put simply, symmetry is an invariance amidst change. More precisely, consider something X (e.g., a set of two points in the plane) and a transformation R that changes X to X’ (e.g., a rotation of the plane). Now suppose we have abstracted a feature D that can be defined for both X and X’ (e.g., the Euclidean distance between points). Then we say that D is a symmetry of R if D is invariant with respect to the transformation R (e.g., distance is a symmetry of rotation if distance is unchanged when the plane is rotated). So, although rotations of the plane change the positions of points, the distances between those points are not changed. We can also express this in terms of a rotation of the coordinate system: although the coordinates of points change when the frame of reference (or perspective) is rotated, the distance between two points remains the same. Thus, the distance is an “objective feature” that persists amidst changes in perspective, while the coordinates are “mere appearances” that depend on the perspective and are not objective.
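This definition can be checked directly in a few lines of Python (a sketch with illustrative names and sample points): the coordinates change under the rotation, but the distance survives it.

```python
import math

def rotate(point, theta):
    """Rotate a point in the plane about the origin by angle theta."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def distance(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 2.0), (4.0, 6.0)            # X: a set of two points
theta = 0.7                              # R: an arbitrary rotation
p2, q2 = rotate(p, theta), rotate(q, theta)

# The coordinates change ("mere appearances") ...
assert (p2, q2) != (p, q)
# ... but the distance D is invariant under R ("objective feature").
assert math.isclose(distance(p, q), distance(p2, q2))
```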

In physics, Noether’s theorem establishes a correspondence between symmetries in the dynamical laws of a system and conserved quantities (i.e., invariant features of the system). For example, if the laws are rotationally symmetric, then angular momentum is conserved. Simply put, Noether’s theorem relates the objective dynamical features D of a system to the transformations R of its dynamical laws that leave the laws unchanged, i.e., that represent mere changes in perspective and not real changes.

In mathematics, Felix Klein’s Erlangen program characterizes different geometries by their corresponding symmetry groups. Euclidean geometry, for example, corresponds to the group of rotations, reflections, and translations because the geometric features that characterize Euclidean geometry (e.g., distances and angles) are invariant under those transformations. Different geometries can then be related to each other by relating their corresponding symmetry groups. For example, projective geometry has a larger group of symmetries, and it has fewer invariant features (e.g., cross-ratio, incidence, tangency, collinearity, but not distances or angles). The symmetry transformations correspond to changes in appearance due to change in perspective rather than change in something objectively real. This is why the larger symmetry group corresponds to fewer objective features. As physics moves towards higher symmetry, objectivity dissolves, until in the limit of perfect symmetry it vanishes altogether and everything is seen as a hierarchy of symmetry groups corresponding to successively deeper levels of perspective.
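To make the contrast concrete, here is a small Python sketch (illustrative, with an arbitrarily chosen Möbius map standing in for a projective transformation of the line): the cross-ratio of four points survives the transformation even though distances do not.

```python
from fractions import Fraction as F  # exact arithmetic, no rounding error

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four points on the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x):
    # A sample projective (Möbius) map of the line: x -> (2x + 1)/(x + 3).
    return (2 * x + 1) / (x + 3)

pts = [F(0), F(1), F(2), F(5)]
imgs = [projective(x) for x in pts]

# Distances are NOT preserved (they are not projective invariants) ...
assert imgs[1] - imgs[0] != pts[1] - pts[0]
# ... but the cross-ratio is invariant under the projective map.
assert cross_ratio(*imgs) == cross_ratio(*pts)
```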

Consider a set X and the group G of all possible transformations of X to itself (i.e., the group of automorphisms of X). One feature of X that is symmetric under all the transformations of G is the size, or cardinality, of X. Most often, though, the more interesting features of X are not symmetric under G, but are symmetric under some subgroup of G. For example, if X is the real number line then the Euclidean distance between two points is invariant under the Euclidean group E(1) of translations and reflections, but is not invariant under scalar multiplication. E(1) is a subgroup of G that corresponds to the merely apparent transformations of perspective, the transformations that do not change anything real. The factor group G/E(1) represents the transformations that are regarded as objectively real, factoring out all the merely apparent transformations of the symmetry group E(1). These real transformations effectively define what it means for objects to undergo objective changes, i.e., to change from one object to a different object. The identity element of G/E(1) corresponds to the transformations of perspective that do not change the identity of objects.
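A minimal sketch of this example on the real line (Python, with illustrative names of my own): translations and reflections, the elements of E(1), leave the distance between two points unchanged, while a scaling, which lies outside E(1), does not:

```python
# Transformations of the real line: translations and reflections belong
# to the Euclidean group E(1); scalings do not.
def translate(x, a):  return x + a
def reflect(x):       return -x
def scale(x, s):      return s * x

def dist(x, y):       return abs(x - y)   # the invariant feature

x, y = 2.0, 7.0
d = dist(x, y)                            # 5.0

# E(1) transformations leave the distance invariant ("mere perspective"):
assert dist(translate(x, 3.0), translate(y, 3.0)) == d
assert dist(reflect(x), reflect(y)) == d

# A scaling changes the distance: a change outside the symmetry group,
# and hence one that counts as objectively real in this scheme.
print(dist(scale(x, 2.0), scale(y, 2.0)))   # 10.0, not 5.0
```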

Let’s reconsider now the example of the child and the block. Although the child does not know abstract mathematics, the different perspectives on the block are understood as transformations of a symmetry group E(3) with corresponding invariants of distances and angles. Because no transformations are experienced other than those in the group E(3), the child posits a real (i.e., invariant) object existing in a 3D world with a constant identity amidst the changes in perspective. This is how symmetry and invariance are implicitly built into the very roots of our experience of an objective world.

If we carefully examine the process of measurement, however, we see that time and space actually emerge empirically. To illustrate, let’s first consider just time. When multiple measurements of position are performed, we assign the measurements a linear ordering to form an ordered sequence of measurements. We then correlate these sequential measurements to construct something called time that is a useful parameterization of position (and perhaps other constructed quantities). To do all this, we need to presuppose some ability to define a linear order of measurements, and to define the position measurements in some systematic way so that they can be coherently correlated with each other. You might say we need to superimpose some primitive structures of proto-space and proto-time in order to even start studying the world in a systematic way by relating measurements to each other. And when we do this, various kinds of time and space might emerge, such as Newton’s space and time and Einstein’s space and time.

To elaborate a bit on these ideas of proto-time and proto-space: if we start from a purely empirical basis, then you might say we have a set of measurements, which by nature are discrete. Proto-time would then be the linear order of this set, giving it the structure of a linear sequence of measurements. The continuous time parameter would then emerge as a kind of interpolated continuum, a layer constructed when we correlate the discrete sequence of measurements.

Proto-space is a kind of presupposed unit of measurement, together with a method for the systematic measurement of position using that unit. It’s the very least we need to measure something like space or distance or length. Once this standard is defined and consistently used in all the measurements, it provides a coherent way to make correlations among the measurements at different times in the sequence. Without any such standard for position measurements, there is no basis for making sensible correlations between different measurements in the sequence. Then, just as continuous time emerges as a derivative layer from the discrete sequence of measurements, continuous space emerges as a derivative layer expressing the coherence between position measurements.

Thus, time and space emerge not objectively, but are constructed through the very process of measurement itself. This provides a new conceptual foundation for the formulation of physics that does not presuppose objective time and space.

Not only is collapse of the wave function totally unverifiable and nonphysical, but it is also in blatant violation of the Schrödinger equation! Any other scientific hypothesis that both violates known laws of physics and is not verifiable would normally be rejected immediately as pseudo-science. Why, then, has the notion of collapse stuck? Perhaps because one consequence of rejecting collapse would seem to be that it leads us inevitably to the many worlds interpretation. Strange as the many worlds interpretation may be, however, it does have the virtue of being consistent with the laws of physics, at least as we know them so far.

The many worlds interpretation is often rejected as outrageous because it seems to imply that all the separate “worlds” have some actual existence, just like ours. But it’s more like none of the “worlds” have actual existence, including ours. To make an analogy with the theory of relativity, it’s not as if there are many actual velocities of the earth in space, each existing as its own separate actualized “world.” Rather, it’s that the earth has no actual objectively existing velocity at all. Velocity only has meaning relative to a reference frame, and reality does not have any privileged reference frame. We happen to observe things in the reference frame of the Earth, where that velocity is zero. If we were on the Moon, things would be different. Is there really some mystery here? How is this so different from quantum theory? The original “relative state” formulation of quantum theory seems to be in line with this view, and calling it a “many worlds” theory is just as misleading as calling relativity theory a “many worlds” theory. It’s just “many reference frames” and one world. One might complain that the “one world” is a strange one, but that’s no less true in relativity theory, where nothing has any objective mass, length, time, etc. The only objective realities are the four-dimensional invariants. These are almost as weird as coherent superpositions.

It is good to remember that physical theories in general are abstractions, describing a reality that is beyond our direct experience. We experience our immediate sensations of sight, sound, etc., and never directly experience the abstractions of “atoms” or “fields” which are only indirectly inferred from experience. (The same is actually true of a “chair” or “rock” as well.) These may be useful abstractions, but we never actually experience them directly, and can never know if they really exist the way we think. In fact, we don’t really know that they exist at all. We could be a brain in a vat or having a lucid dream right now. Science tries to balance the belief in some objective reality with the fact that we can never know the thing in itself. As Heisenberg wrote,

We have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.

It is actually more radical than Heisenberg suggests. Consider again the double-slit experiment. A single photon that “measures” which slit the particle went through does not actually collapse the wave function to be localized in just one region of space. It merely entangles itself with the system. Provided no decoherence has taken place, so that the coherence of the original system is not washed out in the many degrees of freedom of the measurement system, there is no sense in which an irreversible measurement interaction has taken place. So one is still free to decide what will ultimately be measured. Because there has not been any interaction with a particular well-defined measurement apparatus (by which I mean a device that involves decoherence), the attributes of the system are likewise still undefined.

The above situation with regard to a quantum system is analogous to not having defined any particular well-defined reference frame in relativity. If I do not specify a reference frame for an observation of a monolith floating in space, then it has no definite, well-defined value for various properties such as velocity, mass, and length. Once the reference frame is specified, however, one can meaningfully talk about definite values for these quantities. Similarly, once one specifies a particular measurement apparatus (one that involves decoherence), one can say there is a well-defined meaning to talking about certain properties. The coherence is lost, and there is no practical possibility of erasing that measurement choice after the interaction with the measurement apparatus and choosing instead to measure a complementary observable. And all observers will agree on what is measured.

In connection with this, Pauli makes an interesting statement:

Just as in the theory of relativity a group of mathematical transformations connects all possible coordinate systems, so in quantum mechanics a group of mathematical transformations connects the possible experimental arrangements.

And Bohr writes:

In neither case [of quantum theory or relativity theory] does the appropriate widening of our conceptual framework imply any appeal to the observing subject, which would hinder unambiguous communication of experience. In relativistic argumentation, such objectivity is secured by due regard to the dependence of the phenomena on the reference frame of the observer, while in complementary description all subjectivity is avoided by proper attention to the circumstances required for the well-defined use of elementary physical concepts.

Admittedly, the analogy with relativity only goes so far. In the case of relativity, the choice of reference frame is sufficient to provide a unique and definite value for physical attributes. In quantum systems, on the other hand, although the interaction with a particular decohering measurement apparatus gives a particular observable well-defined meaning, it still does not result in a definite value (i.e., the wave function is not collapsed). The analogy with relativity, it seems, lies in the similarity between the choice of reference frame and the choice of a particular decohering measurement apparatus. These choices are sufficient to give well-defined meaning to certain physical quantities. The difference seems to be that in quantum theory, even though the quantities may have well-defined meaning, they still have not been actualized. For example, once the atom has interacted with the Geiger counter and poison bottle, it makes sense to say that Schrödinger’s cat is either alive or dead (there is no longer any coherence that would allow one to perform a measurement of an observable complementary to the alive/dead observable).

The actualization of a particular value could be described in terms of the many worlds interpretation as the choice of which world “you” get identified with. In relativity, though, one can actually imagine something analogous, but we don’t regard it as a mystery for some reason: The description of the world according to relativity does not specify which moment in spacetime we should be experiencing as “here and now”. So, what determines which point in Minkowski space is “actualized” in our experience as here and now? Why should we experience this here and now rather than some other? This question seems quite similar to the question of why we experience ourselves in one of the many worlds as opposed to some other. What “collapses” us into a particular here and now? Clearly, there is no such collapse, just as there is no collapse in quantum theory. The theory is an abstraction from the here and now. If we get confused and think that we really live in the abstraction, then we become perplexed at how the specific here and now is mysteriously “collapsed” from all the possibilities in the general, abstract world we’ve dreamed up.

There is also an interesting similarity between the role of decoherence, which effectively cuts us off from ever detecting any of the worlds that have decohered from ours, and space-like separation in relativity. There are space-like separated regions of spacetime that cannot have any interaction or communication with us. So, what justification is there for saying that they exist at all? They can never be observed or verified to exist. Is this really any different than the other branches of the universal wave function that we can no longer detect because of decoherence?

Although heaven and earth have, at first sight, very different natures, they are also interconnected. The radiance of the Sun shines upon the Earth from above and gives life to everything here below. The Sun evaporates waters from the oceans and drives the winds that carry that moisture to the mountains, where it falls, creating powerful rivers; the Sun feeds all vegetation with the energy it needs to grow. When animals eat these fruits and vegetables, they partake of the Sun’s power, releasing its energy in every beat of their hearts. All life is deeply and profoundly connected with the Sun. This undeniable link between heaven and earth is a symbol of the bridge between eternity and time.

The connection between heaven and earth is manifested in the synchrony of heavenly and earthly cycles. Although the stars appear essentially eternal, a perfect image of timelessness, the Sun, Moon, and planets move through regular cycles in what Plato called a “moving image of eternity.” And these cycles of the Sun are synchronized with the daily rhythms of life. In the cycle of day and night we experience the duality of light and dark, life and death, warm and cold. Our bodies pass through cycles of activity and rest, while our minds pass through cycles of consciousness and unconsciousness. Our inner lives, as well as our outer lives, are powerfully attuned to this cycle. But while the passing days on earth result in aging and death, the cycles of the Sun repeat perfectly with mathematical precision. Although the heavens move, there is no imperfection, decay or death in their movement.

Because the cycles on earth are reflections of those in the heavens, we can come to know the heavens through knowledge of ourselves, and we can come to know ourselves through knowledge of the heavens. This revelation, expressed by the ancient insight “as above, so below,” was certainly one of the most profound ever experienced by the human species, and provided the root metaphor for many ancient cosmologies. Ancient astronomy was seen as a revelation of the profound connection between the rhythms of heaven and earth, and of the harmony of the entire cosmos. The mathematical understanding of the observed astronomical cycles was thus a sacred science.

Of course, the daily cycle of the Sun is but one of the many astronomical cycles that can be observed. Another obvious but longer cycle involving the Sun is the annual cycle of the seasons. Each year, the length of the day gradually increases to a maximum at the height of summer and decreases to a minimum in the depths of winter. Like the daily cycle of day and night, all life is organized around this annual cycle. The trees drop their leaves in the fall and blossom in the spring. Winters are dark and cold, while summers are sunny and warm. Spring is the time of planting seeds, while autumn is the time of harvest.

The Moon has its monthly cycle of four phases, which naturally divide the month into four weeks. Like the Sun, the Moon also influences the patterns of life on Earth. The entire oceans of Earth rise and fall in the ebb and flow of the tides under the direct influence of the Moon. These tides take sea creatures onto land, and take land creatures out to sea, providing an impetus for life to transition between land and water. Even as land creatures, our physiology still remains influenced by the lunar cycle.

Because these cycles of the Sun and Moon have clear connections with the cycles of life on earth, it was natural to assume that the cycles of the other planets were similarly connected with life in some mysterious and subtle way. The basic premise of ancient cosmology, that heaven and earth are interconnected, implied that all heavenly cycles have some kind of imperfect reflection in the patterns of life. Thus, the study of the planets and their relationships with each other was viewed as a key to understanding ourselves and life in general.

Each celestial sphere has its own temporal period corresponding to the duration of its cyclic movement in the heavens. These rhythms correspond to frequencies having harmonies with each other. The combined movement of all the celestial objects is thus a grand symphony of cosmic proportions. It is said that Pythagoras was so spiritually developed that he could “hear” this music, presumably because the heavenly spheres are simply the outer reflections of our own inner divinity. Thus, insofar as we are conscious of this inner divinity, we are conscious of these qualitative aspects of the outer divinity as well.

The coherent and precise understanding of the cycles of the Sun, Moon, and planets requires a mathematical treatment. It is here, though, that mysterious problems emerge, for it was found that the various cycles do not harmonize with each other. The month is not equal to an integral number of days, nor is the year equal to an integral number of days or months. It was not possible to construct a single calendar that perfectly combines all these cycles into one coherent framework. For example, if the month is defined to be exactly 28 days long, then after several months the new moon will no longer fall at the start of the month. Similarly, if a year is defined as exactly 365 days, then the calendar will gradually drift out of sync with the seasons of nature. Driven by a faith in the comprehensibility of the cosmos, the ancients struggled with these patterns, looking deeper into the relationships between the cycles. This led to the discovery of even more subtle patterns, such as the precession of the equinoxes: over thousands of years, the sphere of the stars shifts ever so slightly. It is as if the cosmos has different asynchronous clocks measuring cycles with different periods of time. But their relationship to each other remained obscure.
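The drifts described here are easy to quantify. This sketch (Python) assumes modern mean values for the synodic month (about 29.53059 days) and the tropical year (about 365.24219 days), values which, of course, the ancients did not have in this form:

```python
# Modern mean values, assumed here for illustration.
SYNODIC_MONTH = 29.53059   # days, new moon to new moon
TROPICAL_YEAR = 365.24219  # days, season cycle

# A fixed 28-day calendar month drifts from the Moon's phases:
drift_per_month = SYNODIC_MONTH - 28          # ~1.53 days per month
months_to_cycle = SYNODIC_MONTH / drift_per_month
print(drift_per_month)    # after ~19 calendar months the accumulated
print(months_to_cycle)    # drift amounts to a whole lunar cycle

# A fixed 365-day calendar year drifts from the seasons:
drift_per_year = TROPICAL_YEAR - 365          # ~0.242 days per year
years_per_day_of_drift = 1 / drift_per_year
print(years_per_day_of_drift)   # ~4.1 years for each day of seasonal drift
```

These few lines already explain why leap-day schemes exist: without them, the 365-day calendar loses roughly a day every four years against the seasons.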

The temporal cycles of the heavens were also evidently spatial cycles: the Sun is seen to follow a circular path around the Earth, as do the Moon and the sphere of the stars. The geometric circle is the perfect spatial image of temporal recurrence: just as a cycle in time exhibits change yet repeatedly returns to an identical time again, so movement around a circle undergoes change yet repeatedly returns to an identical point. The different temporal cycles thus naturally suggested different circles in space, with the Sun, Moon, and stars pictured as concentric spheres. However, while the movement of the Sun and Moon was uniform, the movement of some planets was not: sometimes they would go in one direction, then at other times they would reverse and go backward for a while, only to reverse again and continue forward. This retrograde motion posed a significant challenge to the ancient astronomers. What could explain this non-uniform motion?

For centuries, the retrograde motion was explained with complicated epicycles. Copernicus simplified the model by placing the Sun at the center of the solar system. The reversals were then explained as illusions of perspective, and the daily motion of the Sun around the Earth as only an appearance and not real. Although this retained the classical circular motion, it was a significant step away from the ancient worldview. Not only was the Earth no longer fixed at the center of the cosmos, but the reality of the cosmos shifted radically. No longer did the apparent motions of the Sun and Moon correspond to their real motion. What motivated and justified such a sacrifice? Although the Sun-centered and Earth-centered systems both explained the appearances, the Sun-centered system introduced by Copernicus was so much simpler and more elegant than the complicated system of epicycles. It is remarkable that this intellectual elegance was sufficiently powerful to the human mind to usurp the obviousness of sensory appearances as well as the centuries-old dogma of the Aristotelian worldview.

Copernicus set the stage for another radical departure from the classical worldview: the dropping of uniform circular motion. As empirical observations of the planets became more precise, even the Copernican model was unable to fit the data without awkward, *ad hoc* modifications. After extensive effort to make a circular path fit the data, Kepler concluded that the planets must, in reality, follow elliptical orbits with the Sun at one focus. This bold step usurped the circle from its centuries-old place as the fundamental shape of heavenly motion. As with Copernicus, this revolution took place because the elliptical orbit provided a much simpler and more elegant match with the appearances. Even though the circle itself is simpler than the ellipse, it did not have any simple correspondence with empirical observations of the planets. The ellipse, on the other hand, provided a perfect fit. Thus, the aesthetic quest for intellectual coherence drove astronomy to a deeper and more subtle understanding of the patterns of the heavens, revealing truths that were not obvious in the appearances.

Not long after Kepler, Newton achieved a profound unification of the terrestrial and the heavenly in his discovery of the universal laws of motion and gravitation. These same laws governed both the motion of terrestrial objects and that of objects in the heavens, providing an unprecedented unification of heaven and earth, guided again by the quest for mathematical coherence and unity. The ancient cosmology, which had been based upon the specifics of our particular solar system, was now seen as just one of many possible solutions to Newton’s general laws of motion. With Newton, a giant leap in abstraction was taken, grounding thought in universal mathematical laws rather than specific geometric models. The dichotomy of time and timelessness still exists, but is no longer associated with the obsolete distinction between heaven and earth. Instead, timelessness is a property of the mathematical laws that govern the entire cosmos, both heaven and earth alike, while time is experienced as a property of our specific cosmos, which is one solution to those universal laws. In short, the distinction between time and eternity shifted from a spatial distinction between earth and heaven to a distinction between levels of manifestation that are universally omnipresent. At every point in space there is contact with timelessness insofar as the general laws are universal. And at every point in space there is contact with time insofar as this universe is a particular instance of those general laws. In this sense, modern science has accomplished the ancient religious quest to unite the realms of eternity and time.

The Tree of Eternity has its roots in heaven above and its branches reach down to earth. …The whole universe comes from him [Brahman] and his life burns through the whole universe.

-The Upanishads. Mascaro, Juan, tr. (New York: Penguin, 1965). p. 65.

The Pythagoreans used a more abstract symbol, the tetraktys, which is an arrangement of ten dots in triangular form:

*
**
***
****

This symbolizes the development from the single unity (*) into duality (**), then trinity (***), and four-fold multiplicity (****). Although the tetraktys symbol does not have the organic qualities of the tree of life, this mathematical symbol brings out more explicitly the features of harmony and order. For example, the numerical ratio 1/2 between the first and second levels corresponds to the musical octave. The next two levels give the ratio 2/3, which is the musical interval of the fifth, and the last two levels give the ratio 3/4, which is the musical interval of the fourth. The levels of cosmic creation thus correspond to musical harmonies. This is the seminal insight at the basis of the “music of the spheres” connecting the structure of the cosmos with music through mathematics.
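These ratios follow directly from the row counts of the tetraktys, as a small sketch (Python) makes explicit:

```python
from fractions import Fraction

# The four rows of the tetraktys contain 1, 2, 3, and 4 dots.
rows = [1, 2, 3, 4]

# Ratios between successive rows give the basic musical consonances:
ratios = [Fraction(a, b) for a, b in zip(rows, rows[1:])]
print(ratios)   # [Fraction(1, 2), Fraction(2, 3), Fraction(3, 4)]
                # octave (1:2), perfect fifth (2:3), perfect fourth (3:4)
```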

In *Timaeus*, Plato describes the cosmos as being built from mathematical archetypes. With only one dot, there is not much structure, but with two dots there is a line, and with three dots a triangle. With four dots a solid object can be specified. Thus, the basic physical elements are viewed as constructed from non-physical mathematical patterns. From the simple One, structure emerges first as subtle archetypal forms, but then reaches the point where, mysteriously, it becomes physical.

**ONE**

Common to most of the most influential worldviews of humanity is the insight that reality is, in its deepest essence, unified. To illustrate:

The universe, therefore, is nothing but Brahman. It is superimposed upon Him. It has no separate existence apart from its ground.

-Shankara. Shankara’s Crest-Jewel of Discrimination (Viveka-Chudamani). Isherwood, Christopher, tr. (Hollywood: Vedanta Press). p. 70.

In essence things are not two but one. …All duality is falsely imagined.

-The Lankavatara Sutra

There is in reality neither truth nor error, neither yes nor no, nor any distinction whatsoever, since all—including contraries—is One.

-Chuang Tzu (A Treasury of Traditional Wisdom, p.979)

This One is a pure nondual unity, transcending even the division between One and Many:

The One … is there before every oneness amid multiplicity, before every part and whole, before the definite and indefinite, before the limited and the unlimited. It is there defining all things that have being, defining being itself … . It is there beyond the one itself, defining this one.

-Pseudo Dionysius (Paul Rorem, Pseudo Dionysius, Paulist Press, 1987, p.129)

Like the symbol of the tree, it is a One that is not exclusively a single trunk or a multiplicity of branches, but somehow both.

**TWO**

The root of all things is difference.

-Ibn Arabi (William C. Chittick, The Sufi Path of Knowledge, SUNY, 1989, p.67)

The first departure from this original unity is a single distinction, giving rise to a basic duality expressed variously as one/many, heaven/earth, infinite/finite, subject/object, transcendence/immanence, ultimate/relative, emptiness/form, eternity/time, being/becoming. This distinction, however, is not ultimately real. The different branches of the tree are only apparently separate. But if we forget this, then we fall into delusion and suffering:

“In the beginning God created heaven and earth,” that is, the first fall of all is from the One into two, from unity into number, from what is perfect, undivided and indistinct into imperfection, division and distinction, and from the whole into parts.

-Eckhart, Meister. Meister Eckhart: The Essential Sermons, Commentaries, Treatises, and Defense. Colledge, Edmund, tr. (Ramsey, N.J.: Paulist Press, 1981). p. 100.

And, as the Bhagavad Gita reminds us:

There are two spirits in this universe,

The perishable and the unperishable.

The perishable is all things in creation.

The unperishable is that which moves not.

But the highest spirit is another:

It is called the Spirit Supreme.

He is the God of Eternity

Who pervading all sustains all.

-The Bhagavad Gita. Mascaro, Juan, tr. (New York: Penguin, 1962). p. 107.

**THREE**

When another distinction is made, duality splits into the three-fold structure of the trinity. This is expressed variously as Father/Son/Holy Spirit, Body/Mind/Spirit, Sat/Chit/Ananda, Dharmakaya/Sambhogakaya/Nirmanakaya, Gross/Subtle/Causal. This more refined structure provides a more explicit expression of the implicit aspects of the original One. And the process continues indefinitely to increasing multiplicity.

Not only is this pattern of cosmogenesis reflected in the traditional metaphysical systems of the world, but it also manifests in modern physics. In Big Bang cosmology, symmetry breaking leads to the manifestation of distinctions between the four fundamental forces of nature. Prior to 10^-43 seconds, all the physical forces of nature were unified in perfect symmetry. After 10^-43 seconds, the force of gravity emerged as a distinct interaction. Then, after 10^-35 seconds, another symmetry broke and the strong nuclear force was distinguished. And at 10^-10 seconds, the weak nuclear interaction was distinguished. Although much more sophisticated than the Pythagorean tetraktys, the essential pattern of multiplicity unfolding from unity is the same.
