# Determinism and known unknowns

*Determinism* is based on the contextually incomplete assertion that what is done at one time will have definite consequences in the future. That is not the same thing as saying that, if we do something now, we can predict what the consequences of that action will be. This inability to calculate the consequences of our actions has nothing to do with any imagined randomness that could upset otherwise predetermined paths. The information we have about the future may be limited, or may be such that any attempt to acquire more information will destabilize our predictions.

There is an analogy here with the solution of equations in mathematics: we can prove that a quintic equation has exactly five solutions (roots, counted with multiplicity), but there is no general formula in radicals that can give them to us.
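A concrete sketch of this analogy (my illustration, not from the text): the five roots of a quintic provably exist and can be approximated numerically, even though Abel and Ruffini showed that no formula in radicals produces them in general. The particular polynomial x⁵ − x − 1 and the Durand–Kerner iteration are illustrative choices.

```python
from math import prod

def p(x):
    """An example quintic, x**5 - x - 1: its five roots exist, but the
    Abel-Ruffini theorem rules out any general formula in radicals."""
    return x**5 - x - 1

# Durand-Kerner iteration: refine guesses for all five roots simultaneously.
roots = [(0.4 + 0.9j) ** k for k in range(5)]   # standard distinct starting points
for _ in range(200):
    roots = [r - p(r) / prod(r - s for s in roots if s != r) for r in roots]
# 'roots' now approximates the five complex roots; the single real one is ~1.1673
```

We know in advance that the answers exist; we simply have no closed-form route to them, only iterative approximation.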

We should therefore distinguish between two sorts of determinism. A type 1 deterministic system is one such that, given initial conditions at time *t* = 0, we can fully predict the state of the system at any subsequent time *t >* 0. On the other hand, a type 2 deterministic system is one such that we know that a unique state exists at any subsequent time but we have no means of predicting it.

Joseph-Louis Lagrange [1736–1813] refined Newtonian mechanics. His equations of motion are equivalent to those of Newton but based on a different temporal architecture. Newton’s equations of motion are essentially predicated on process time: a system under observation (SUO) is set in motion at an initial time under the conditions prevailing at *that* time; these are usually the initial positions and velocities of all the particles constituting that SUO. The future state of the system is then determined solely by the laws of mechanics.
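The process-time picture can be sketched as an initial-value computation (an assumed illustration; the unit-mass harmonic oscillator and step size are my choices): fix the positions and velocities at *t* = 0, then let the law of motion alone generate every later state.

```python
from math import cos, sin

def evolve(x, v, t, dt=1e-4):
    """March Newton's second law forward from the initial data (x, v) at t = 0.
    The force law here is F = -x, a unit-mass harmonic oscillator."""
    for _ in range(round(t / dt)):
        v += -x * dt    # update velocity from the force (semi-implicit Euler)
        x += v * dt     # then position from the updated velocity
    return x, v

# Type 1 determinism in action: the state at t = 2 follows from the initial
# data alone, and we can actually compute it (exactly: x = cos t, v = -sin t).
x, v = evolve(1.0, 0.0, 2.0)
```

Nothing about the future enters the calculation; only the initial conditions and the dynamical law are needed.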

Lagrange went further and used a manifold image of time. He developed the *principle of virtual work,* which was eventually refined as an action principle based on the Calculus of Variations and Hamilton’s principle. These are discussed in Chapter 14. Suffice it at this stage to say that such principles are teleological in flavour, since the application of variational principles requires us to decide what the final configuration of the SUO should be, *before* we work out how the SUO could get there.
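A minimal numerical sketch of this teleological flavour (the discretization and relaxation scheme are my assumptions, not the book's): we must fix the *final* configuration x(T) before solving for the path, then adjust the interior of the path until the discrete action is minimized. For a free unit-mass particle the minimizing path is the straight line.

```python
N, dt = 20, 0.05            # 20 segments over total time T = 1
x = [0.0] * (N + 1)
x[N] = 1.0                  # the *final* configuration is chosen before solving

def action(path):
    # discrete free-particle action: (1/2) * velocity**2 * dt, summed per segment
    return sum(0.5 * ((path[i+1] - path[i]) / dt) ** 2 * dt for i in range(N))

# relax the interior points down the action gradient; the endpoints stay fixed
for _ in range(2000):
    for i in range(1, N):
        # dS/dx_i = (2*x[i] - x[i-1] - x[i+1]) / dt
        x[i] -= 0.02 * (2 * x[i] - x[i-1] - x[i+1]) / dt

# the minimizing path is the straight line x(t) = t, with action 1/2
```

The contrast with the previous sketch is the point: there the future fell out of the initial data; here the destination is an input to the problem.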

In QM, this approach takes on a bizarre flavour. Now we cannot decide what the final configuration should be: there may be many alternatives, and we can calculate only the relative probability of ending up in any one of them. Moreover, unlike in CM, where the trajectory from initial to final configuration is well defined, in QM we cannot exclude the possibility of the SUO taking any of the countless trajectories from initial to final states. The Feynman path integral gives us the rules for taking *all* of their contributions into account [Feynman & Hibbs, 1965]. This is the final nail in the coffin of classical determinism.