Physics without Formulae

Essay at PhilPapers


A Strange and Charmed Life


An Introduction to Post-Apocalyptic Christology


The Lattice Milieu

Essay at PhilPapers


The Universal Lattice

Essay at PhilPapers


A Case for Lattice Schemes in Fundamental Physics

Essay in PDF format – A Case for Lattice Schemes

A Case for Pursuing Lattice Schemes in Fundamental Physics

Research has shown that many of us hold intuitive beliefs about the behaviour of physical systems which turn out to be contrary to reality.(1) For example, subjects are shown a depiction of a billiard ball being drawn in an arc across a table. Asked what happens when the ball is released, some have the ball continuing along the arc, while others, correctly, have it proceeding in a straight line tangent to the arc.

From as early as Zeno in the 5th century BCE, we have known intuitively that for any division of one, 1/n, there will always be a smaller fraction, 1/(n+1). This general idea is rigorously embodied in the infinitely divisible real number line, the size of whose continuum is the subject of the Continuum Hypothesis.(2) Yet since Leucippus, a contemporary of Zeno, we have just as intuitively held that any division of matter should eventually arrive at fundamental particles which cannot be divided any further. Mathematics has been routinely and effectively used to model physical systems, yet unreasonably so, for since Zeno and Leucippus our most primitive assumptions about mathematics have been in fundamental disagreement with our most basic assumptions about matter.

General Relativity (GR) is a so-called ‘classical’ theory, for it assumes a direct correspondence between mathematics and the physical world. GR holds that both space and time, like the real number line, are infinitely divisible, and just as division of real numbers breaks down at zero, GR too breaks down when the dimensions of space and time fall to zero at the ‘singularity’ that GR predicts to have been the starting point of the universe. In an attempt to incorporate GR within Quantum theory, many researchers are considering the possibility that time and space are not continuous, but rather arise in discrete ‘quanta’.(3)

We have long known the scales at which GR and Quantum theory should theoretically merge. Max Planck simply combined the fundamental constants of gravitation, light and quantum action into natural units, among them the Planck length and the Planck time. The Planck length, ~10⁻³⁵ metres, is the distance light travels in a vacuum in one interval of Planck time, ~10⁻⁴³ seconds, so that in 10⁴³ Planck intervals (one second), light travels ~10⁻³⁵ × 10⁴³ metres, or ~10⁸ metres. In the theory of Loop Quantum Gravity(4), space itself is thought to consist of ‘atomic’ spheres of space, each having a diameter of one Planck length. The GR model is now being thought of as having been a useful approximation to what is a fundamentally quantized reality.
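
These figures are easy to check to order of magnitude. A minimal sketch in Python, using approximate values for the Planck constants (the exact decimals are rounded and purely illustrative):

```python
# A quick order-of-magnitude check of the Planck-scale figures quoted above,
# using approximate values (the exact decimals are illustrative only).

planck_length = 1.6e-35    # metres, approx.
planck_time = 5.4e-44      # seconds, approx. (~1e-43 at order of magnitude)

intervals_per_second = 1 / planck_time            # ~1.9e43 Planck intervals
distance_per_second = planck_length * intervals_per_second

print(f"Planck intervals per second: {intervals_per_second:.1e}")
print(f"Distance light covers in one second: {distance_per_second:.1e} m")
# -> ~3.0e8 m, the familiar speed of light
```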

Our ancient intuition of reality’s discrete quantization is thus winning favour over our equally ancient intuition of a mathematical continuum. Yet we have known since its inception that the Quantum model is also incomplete, for it predicts only the probabilities, rather than the actuality, of matter’s behaviour, and it is unable to decouple the ‘observer’ from objective external reality.

At the time Quantum theory was introduced, physicists still held great hope of discovering what the philosopher Immanuel Kant called “the thing in itself” – of discovering what physical reality actually is, rather than merely learning how to effectively model its behaviour. This hope has, however, been consistently dashed by experiments which have unequivocally demonstrated non-locality.(5) Entangled particles, separated from each other in space and time, influence each other faster than a signal travelling at the speed of light could be passed between them. Quantum theory, in its most commonly followed guise, implies that objects at the farthest reaches of space can (somehow) exchange information with each other instantaneously, in apparent violation of Special Relativity.

The prospect of returning to realism emerged in the late 1960s with the publication of Calculating Space(6) by the computing pioneer Konrad Zuse. Zuse proposed that reality is composed of machines called ‘cellular automata’. He envisaged all of material reality as a cubic lattice of these cellular automata, each one connected to its neighbours on all sides.

Displays such as those seen at the opening of the Beijing Olympics provide a useful illustration of his idea. Participants are arrayed across an arena, each holding a selection of coloured cards, one of which they raise above their heads at any given time. Each person is responsible for just one element of the two-dimensional composite picture that emerges above them. For the purpose of the display, each person assumes the role of a cellular automaton. The music playing in the arena provides a universal ‘clock’ that precisely synchronizes the ongoing changes in the display.

If, for example, a blue ‘dot’ needs to move across the picture from left to right, a simple rule would be for each automaton to get its cue for the next pixel from its current neighbour to the left. Then with each beat of the music, the dot would proceed smoothly across the display. One can imagine providing each automaton with a simple set of rules to follow on each beat, also taking into account its neighbours to the front, back and right, such that intricate and unique patterns emerge.
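
A minimal sketch of this left-neighbour rule in Python; the display width, symbols and number of beats are arbitrary illustrative choices:

```python
# A sketch of the stadium-display rule: a one-dimensional row of automata in
# which, on each beat, every cell takes its cue from its left neighbour.
# A single 'blue dot' (B) then moves smoothly across a sea of white (.).

WIDTH = 20
row = ["."] * WIDTH
row[0] = "B"                      # the dot starts at the left edge
print("".join(row))

for beat in range(WIDTH - 1):
    # Shifting the whole row right is equivalent to every automaton copying
    # its left neighbour; the leftmost cell reverts to the background colour.
    row = ["."] + row[:-1]
    print("".join(row))
```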

Zuse suggested that if such a system were extrapolated out into three dimensions, cellular automata could generate complex and unique realities, such as the one we now inhabit, rather than merely representations of them. We must ask, however, what these ‘cellular automata’ are themselves made of, where they are, and where their rules of engagement came from, just as we might ask what atoms are made of, or where the laws of Nature come from.

Ed Fredkin, long-time champion of ‘computational physics’, argues that the ‘automata’ are constructed out of an abstract substance he calls ‘pure information’.(7) Stephen Wolfram avoids the issue of a substrate altogether, for he does not see cellular automata and their interactions as an actuality somehow lurking behind our perceived reality. Rather, in his “A New Kind of Science”, he employs the theory of cellular automata as an analytical tool for modelling reality, just as mathematics is employed for more conventional modelling of physical systems.(8) Max Tegmark, however, like Fredkin, wants to know what Kant claimed we can never know – what we are ultimately made of. In his “Ultimate Ensemble”, he argues that physical reality is not merely modelled by mathematics, but that physical structures and mathematical structures are one and the same thing – reality is made of mathematics, just as Fredkin’s world is made of information.(9) This is an attractive idea, for we can easily see what mathematics is (an abstract system of relationships), and just as easily see that mathematics in itself has no material substance. Thus in Tegmark’s scheme we have something (physics) constructed out of nothing (mathematics).

Tegmark argues further that the universe is composed entirely of mathematical structures which are computationally decidable. The concept of computability arose out of Alan Turing’s work on a scheme for algorithmically generating mathematical relationships and then deciding whether those relationships were valid.(10) His imaginary ‘machines’ could compute each candidate function for as long as it took to decide its validity. Functions which are both computable and decidable are vital to a quantized model of reality because, like quanta themselves, these functions are finite. The entire computation of such a function, as well as the Turing machine that computes it, can be represented by a finite string of binary digits, and ultimately by a single integer – a Turing machine is fundamentally an abstraction. Turing discovered that a particular class of his machines was ‘universal’ – a Universal Turing machine can simulate any other Turing machine, including itself.(11) Such machines have since become a practical reality – today’s general-purpose computers.
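
To make the abstraction concrete, here is a minimal Turing machine interpreter in Python. The rule table is an arbitrary toy machine (a unary incrementer), chosen for brevity; it is not any machine cited in the text:

```python
# A minimal Turing machine interpreter. A machine is just a rule table
# mapping (state, symbol read) to (symbol to write, head move, next state).

from collections import defaultdict

def run(rules, tape, state="A", max_steps=100):
    """Run a machine given as {(state, read): (write, move, next_state)}."""
    cells = defaultdict(lambda: "0", enumerate(tape))  # blank cells read '0'
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

increment = {
    ("A", "1"): ("1", "R", "A"),     # scan right over the existing 1s
    ("A", "0"): ("1", "R", "HALT"),  # append one more 1, then halt
}
print(run(increment, "111"))  # -> 1111
```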

Nick Bostrom has argued that the substrate of our reality – for example, the ‘cellular automata’ of Zuse’s lattice scheme – may in fact be a much larger computer that lies outside our perceived reality.(12) The drawback of this idea, a darling of science fiction, is that it merely shifts the substrate of existence back one step: we are left wondering what the “big” computer itself is made of. The notion does, however, provide a useful framework for thinking about computer simulation, as does its practical application in the virtual ‘realities’ that now pervade the Internet.(13)

Putting aside the simulation of the entire universe, consider just one of Zuse’s cellular automata. If the automaton is a Turing machine, then it is a ‘computer’ capable of simulating all the properties – vacuum energy, gravitational potential, and so on – of a single atom of space, a sphere with a diameter of 10⁻³⁵ metres. The machine is not ‘contained’ inside this sphere, nor does it occupy any other volume of space, because space itself does not come into existence until the automaton simulates it.

What then is the automaton? The automaton is itself a virtual machine that is being simulated by another automaton. And what then is this second automaton? It too is being simulated by an automaton – none other than the original automaton. This self-referential loop (a “strange loop”) is superbly illustrated in M.C. Escher’s famous lithograph “Drawing Hands”. In the physical world, of course, such a scheme would represent perpetual motion and be thermodynamically outlawed. These machines, however, are not part of the physical world; they belong to the abstract world of mathematics, which is removed from physical law – indeed, they are initiating the very existence of physical law itself. Each machine is processing a string of binary digits in a “desultory manner” (as Turing originally described it), and in so doing is simulating the other machine, which is an (identical) string of binary digits. Because the strings are finite in length, the process of stepping through each computation represents a cycle which returns to its starting point in a finite period. This, then, is the automaton’s internal ‘clock’, the fundamental quantum of time, or ~10⁻⁴³ seconds in absolute terms.
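
The closing step of this argument can be sketched in a few lines: a finite string of bits has finitely many configurations, so a deterministic, reversible update must return the string to its starting configuration after a fixed number of steps. The particular update rule below (rotate left, then flip one bit, which is reversible) is an arbitrary illustration, not a rule proposed in the text:

```python
# A sketch of the 'clock' argument: repeatedly applying a reversible update
# to a finite bit string must return it to its starting configuration,
# yielding a fixed cycle length - the string's intrinsic 'tick'.

def step(bits):
    rotated = bits[1:] + bits[:1]                 # rotate left by one
    flipped = "1" if rotated[-1] == "0" else "0"  # flip the last bit
    return rotated[:-1] + flipped

start = "010011"
state, ticks = step(start), 1
while state != start:
    state = step(state)
    ticks += 1
print(f"Returned to the starting configuration after {ticks} ticks")
```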

From pure mathematics, then, we have generated both a fundamental quantum of space and a fundamental quantum of time. If we return to our stadium in Beijing, we can see that the clock signal (the beat of the music) is delivered to each participant at the speed of sound, practically simultaneously. It is not practical, however, to deliver a simultaneous master clock signal throughout the universe, due to the limiting speed of light. So instead, each element of space (automaton) references its own internal clock, running at a frequency of 10⁴³ Hertz. The much coarser ‘atomic’ clocks that are routinely used in navigation and communication are based upon physical phenomena, and are subject to significant frequency ‘drift’. The internal clock of the ‘space’ automaton, however, arises from a non-physical computation, and is immune to drift. Thus all space automata across the breadth of the universe remain precisely and indefinitely synchronised with each other.

We can see from this model why the speed of light should be a limiting speed. Let us suppose that a photon of light is likewise a simulated phenomenon, and that its simulation is enacted through a modified computational state in one of these ‘space’ automata, like the blue dot moving across a sea of white in the Olympic stadium. We presume that the automaton has an input/output interface that can communicate the system state “photon” over to its neighbour, and then change its own state back to “vacuum”, within each clock cycle. If we were to line up 10⁴³ of these automata, surface to surface, in a straight line, we can see how a photon “state” could be passed along this 10⁸-metre-long “bucket brigade” of automata over the course of one second. In this model, the photon is not a wave/particle “object” that makes its way through empty space. Instead, the photon is a computational state that gets passed along a ‘solid’ pathway of simulated space atoms. A photon, or any other simulated phenomenon, cannot propagate from one space atom to the next in less than one fundamental clock cycle at a time. However, one can consider computational states that take more than one clock cycle to be translated across space, and hence propagate at speeds below the speed of light.
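
A toy sketch of this bucket brigade, with arbitrary cell and tick counts standing in for the ~10⁴³ cells per ~10⁸ metres of the text; the ‘massive’ state, handed on only every third tick, illustrates sub-luminal propagation:

```python
# A 'photon' state is handed one cell to the right on every clock tick (the
# limiting speed), while a 'massive' state is handed on only every third
# tick, and so translates below the speed of light.

TICKS = 9
photon, massive = 0, 0   # lattice cell currently hosting each state

for tick in range(1, TICKS + 1):
    photon += 1              # one cell per tick: speed c
    if tick % 3 == 0:
        massive += 1         # one cell per three ticks: c/3
    print(f"tick {tick}: photon at cell {photon}, massive state at cell {massive}")
```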

A composite object, such as a proton coupled to an electron (a hydrogen atom), might be enacted through the altered computational states of an agglomerated network of space atoms.(14) This agglomeration of states could likewise propagate (as a whole) through a fixed lattice of space atoms, but at a speed fundamentally limited by the diameter and internal clock frequency of the space atoms that are hosting it. Let us suppose that this hydrogen atom ‘state’ is translating through the lattice of space atoms at some (necessarily sub-luminal) speed. If the energy state of the electron sub-system changes and a photon state is exported, we can see that the photon state will intrinsically propagate away from the hydrogen atom state, along the frame of the lattice, at precisely the speed of light, regardless of any existing velocity of the hydrogen atom state it was sourced from. That existing velocity may, however, alter the registered energy (colour) of the exported photon state – a Doppler shift.

The lattice in such proposals returns us to the Newtonian perspective of an absolute frame. Relativistic effects then emerge from the interactions between the various computational states of the automata that comprise the fixed space-time lattice. Experimentally, a state such as that representing a photon will routinely be diverted from a straight path, following the exchange of information with gravitational states (‘gravitons’) that it encounters during its translation through the lattice. Inertia is explained simply as the endless and desultory processing, in the absence of any intervening input, of an object’s computational states as they are transferred between the individual automata of the lattice. With astonishing prescience, Newton tried (albeit unsuccessfully) to develop a theory of gravity avoiding non-locality, in which “tiny invisible jiggling particles fill all of seemingly empty space”.(15)

Where then does the lattice come from? In the 1940s John von Neumann proposed a ‘universal constructor’, a type of cellular automaton that can replicate itself.(16) Suppose the code of the space automaton is modified so that in each computational cycle it produces a new automaton. In this scheme there is an exponential expansion in the number of extant automata once the replication code is enacted: some 2^(10⁴³) such automata would be produced in the first second of the universe’s existence. If these atoms of space are close-packed, like stacked oranges at a fruit market, then the universe we currently observe (with a radius of ~45 billion light years) would contain a mere ~10¹⁸⁵ such atoms – our visible neighbourhood would be a very small speck indeed of the totality. Because each and every new atom of simulated space replicates itself in each clock cycle, the nascent universe inflates uniformly in all directions from every point within it.

The initial creation of space ‘atoms’ would be a turbulent process, so that space itself would behave like a gas and have a ‘temperature’ – the emerging space atoms would behave like ping-pong balls bouncing around in a lottery number generator. The surrealist Salvador Dalí perhaps anticipated such an atomic lattice of spheres in his famous painting “Galatea of the Spheres”. Through the seeding of code that acts to halt this replication, regions then form where the ‘temperature’ of space drops to an absolute minimum, an equilibrium that will later encompass superclusters of galaxies. In these regions, additional space is no longer being produced, so the quanta of space bind to become the smooth, flat and rigid foam that we encounter in our local region. The regions between the galactic superclusters may, however, continue to produce new space automata, acting to push the superclusters apart.
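
The ~10¹⁸⁵ figure is easy to check to order of magnitude. A quick sketch, using approximate values and ignoring sphere-packing efficiency (~0.74), which is negligible at this precision:

```python
import math

# A rough check of the ~1e185 figure: the volume of the observable universe
# (radius ~45 billion light years) divided by the volume of one sphere of
# Planck diameter.

LIGHT_YEAR = 9.46e15                 # metres
R_UNIVERSE = 45e9 * LIGHT_YEAR       # ~4.3e26 metres
PLANCK_LENGTH = 1.6e-35              # metres (the sphere's diameter)

v_universe = (4 / 3) * math.pi * R_UNIVERSE ** 3
v_space_atom = (4 / 3) * math.pi * (PLANCK_LENGTH / 2) ** 3

print(f"Planck-diameter spheres required: {v_universe / v_space_atom:.1e}")
# -> ~1.5e185
```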

If the automata that encode the lattice, and the realities that emerge from it, are merely strings of binary digits, where did the initial arrangement of the digits come from? The code responsible for the laws of physics, and for the evolution of the universe as we now experience it, is manifestly not trivial code. However, Jürgen Schmidhuber(17), following on from work on algorithmic compressibility by Andrey Kolmogorov and Gregory Chaitin, has shown that the code to generate all possible automata is simpler than the code which generates one specific automaton, such as the type that is simulating our local milieu. This ‘optimally compact’ code produces all possible universes (including those, like our own, that have the property of actually “working”). Raw binary states (strings of binary digits independent of any substrate hardware) could randomly assemble into this seminal configuration, from which all other possible configurations then emerge. There is a finite probability that this initial combination will obtain, for time itself does not come into being until the basic clock of a self-simulating string pair first starts up.
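
The core observation can be made concrete in a few lines of Python: the generator below emits every finite binary string – and hence every candidate program – yet its own description is far shorter than almost any single string it will eventually produce. Generating everything is cheaper than specifying one particular thing:

```python
from itertools import count, product

# Enumerate every finite binary string, in order of length:
# '', '0', '1', '00', '01', '10', '11', '000', ...

def all_binary_strings():
    """Yield every finite binary string, shortest first, forever."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

strings = all_binary_strings()
print([next(strings) for _ in range(11)])
```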

Each automaton does not ‘occupy’ the space lattice; each merely defines one cell within the lattice. The strings of binary digits that comprise the automata do not have any dimension in space. Likewise, the starting point of this universe, and any universe which has extent in space, is a singularity which has no extent in space. Thus the automata that define our universe, and any other universe, all ‘exist’ at one and the same ‘place’, the singularity.

We usually think of effects being translated across the lattice of space at speeds up to the limiting speed of light, as we have seen experimentally. However, if these automata can interface with each other, then they can presumably do so directly ‘across’ the point of the singularity. Any element in the universal space lattice could therefore instantaneously communicate with any other element. We thus have a mechanism for effects to be non-local in the context of space (simulated length), yet local (to within one clock cycle of simulated time) in the context of the singularity.

This prospective space-time lattice, and its implications, remains highly speculative. The challenge before us is to develop a method of interfacing directly with the code of the automata, so that we can ‘read’ the code, interpret it, and potentially (carefully) ‘write’ back modified code. The obvious candidate programme for developing such an interface is our research into quantum computing – the ultimate “superposition” of quantum states, as we have just seen, is that of all automata at the universe’s singularity. Evidently no other civilization in our universe has yet written back code that causes the universe to evaporate – the code we are currently running on probably prevents such an event. Any candidate universe whose code was not well protected would have long since halted, and thus been discounted from the pool of viable universes, for it is certain that any such exposure in the code would be exploited.

If we were to learn how to access the singularity, then the prospect emerges of visiting not just the solar system or the galaxy, but any corner of this universe, or of any other universe, without ever getting up from our living room, for all these realities share that singularity in common. We should view emerging correspondences between mathematics and physics, such as the ‘E8 Lie group’ correspondence recently proposed by Garrett Lisi, as guiding us toward the underlying operational code.(18) It is inevitable that the mathematics to which we have access is a subset of the computable functions that gave rise to our universe. It is possible that this mathematics has itself been produced by computing automata. It is of course also possible that civilizations more advanced than ours have already learnt how to access the data at the singularity, and have long since been monitoring our progress towards the same.

For Carl, 1934-1996

1) McCloskey, M. (1983), "Intuitive Physics", Scientific American, April, pp. 114–123.
2) Formally, the hypothesis states that there is no set whose cardinality lies strictly between that of the integers and that of the real numbers.
3) Bojowald, M. (2008), "Follow the Bouncing Universe", Scientific American, October, pp. 28–33.
4) Ibid.
5) Albert, D. Z. and Galchen, R. (2009), "A Quantum Threat to Special Relativity", Scientific American, March, pp. 26–33.
6) Zuse, K. (1969), "Calculating Space", MIT Technical Translation AZT-70-164-GEMIT, MIT (Project MAC), Cambridge, Mass., February 1970.
7) Wright, R. (1988), "Did the Universe Just Happen?", The Atlantic Monthly, April.
8) Wolfram, S. (2002), "A New Kind of Science", Wolfram Media, Inc.
9) Tegmark, M. (2007), "The Mathematical Universe", arXiv:0704.0646.
10) Turing, A. M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proc. London Math. Soc. (2) 42, pp. 230–265.
11) These can be extremely simple; for example, the 2-state, 3-colour Turing machine recently proven universal.
12) Bostrom, N. (2003), "Are You Living in a Computer Simulation?", Philosophical Quarterly, Vol. 53, No. 211, pp. 243–255.
13) e.g. the virtual worlds now hosted across the Internet.
14) The computational equivalent of the wave equation.
15) Albert, D. Z. and Galchen, R. (2009), op. cit.
16) von Neumann, J. (1966), "Theory of Self-Reproducing Automata", edited and completed by A. W. Burks, University of Illinois Press.
17) Schmidhuber, J. (2007), "All Computable Universes", Spektrum der Wissenschaft, March special issue "Is the Universe a Computer?", pp. 75–79.
18) Lisi, A. G. (2008), "An Exceptionally Simple Theory of Everything", arXiv:0711.0770.
