Emergence and computability


Abstract

This paper discusses incomputability and the incompleteness of mathematics as a possible source of apparent emergence in complex systems. The suggestion is made that the analysis of complex systems, itself a specific instance of a complex process, may be subject to inaccessible ‘emergence’. We discuss models of computation which may transcend the limits of traditional Turing systems, and suggest that inquiry into complex systems in the light of the potential limitations imposed by incomputability and incompleteness may be worthwhile.


Introduction

We suggest that what we intuitively define as (strongly) emergent systems may include processes which are not computable in a classical sense. We ask how incomputable processes would appear to an observer and, via a thought experiment, show that they would display features normally defined as ‘emergent’.

If this conjecture is correct, then two important corollaries follow: first, some emergent phenomena can neither be studied nor modelled via classical computer simulations and, second, there may be classes of emergent phenomena which cannot be detected via standard physical measurements unless the process of measurement exhibits super-Turing properties in its own right. Borrowing from recent literature in computer science, we then show that tools which enable us to break the classical computational barrier are already available, and suggest some directions for a novel approach to the problem.

Emergence

Implicit in most approaches to the study of emergence are three concepts:

  1. Multiple levels of representation: there are classes of natural phenomena which, when observed at different levels of resolution, display behaviors which appear fundamentally different (Shalizi, 2001; Crutchfield, 1994a, 1994b; Rabinowitz, 2005; Laughlin, 2005; Laughlin & Pines, 2000; Goldstein, 2002);

  2. Novelty: for most complex systems, while we expect the properties of higher levels to causally arise from lower levels of representation, how this happens appears somehow inexplicable (Bickhard, 2000; Bedau, 1997; Darley, 1994; Rosen, 1985; Heylighen, 1991; Anderson, 1972);

  3. Inherent causality: while we expect causality to arise solely from lower levels, for most complex systems the higher levels also appear to possess inherent and independent causal power (Bickhard, 2000; Campbell, 1974; see also Pattee, 1997; Goldstein, 2002; Rabinowitz, 2005; and Laughlin, 2005 for a discussion of the role of causation in complex systems).

The dilemma which has kept scientists and philosophers busy for decades is whether this novelty and inherent causality are real physical phenomena or merely lie in the eyes of the observer; said differently, whether reductionism is the only tool we need to understand Nature.

The limits of mathematics

The most efficient language we possess to study Nature is Mathematics. This is used not only to describe processes but also, by using mathematical transformation rules, to deduce, extrapolate and manipulate novel processes. It is thus crucial to be sure that the mathematical machinery we use is consistent and correct. It is also important that it is as exhaustive as possible, since the more mathematical rules (theorems) we discover, the more options are available to us to interpret and manipulate Nature’s workings. These needs motivated mathematicians at the end of the 19th century, who dreamt of devising a set of axioms and transformation rules from which all other mathematical truths could be deduced as theorems. In Hilbert’s dream, this would be achieved simply by mechanical manipulation of symbols devoid of external meaning (Chaitin, 1993, 1997: 1-5). Basically, Hilbert was seeking a consistent and complete formal system which would guarantee that all theorems of Mathematics could be proved. The dream was famously shattered by the work of Gödel (1931), who proved that no formal system in which we are able to do integer arithmetic can be both complete and consistent. For the sake of our discussion, Gödel’s incompleteness theorems can be summarized as follows (see Gensler (1984) for a simplified explanation of the theorem and its proof). In a formal logical system:

  • Given a set of axioms, and;

  • A set of transformation rules of sufficient complexity1;

  • There exist statements which are either true but not provable, or false and provable. In the first case the system is incomplete, in the second it is inconsistent.

Here we focus on systems which are incomplete, that is, systems which can contain statements which are true, but not provable. Saying that a true statement, T, is not provable in system S means that, by following the transformation rules of S, we cannot derive T from the axioms of S. Importantly, Gödel’s theorem applies to any mathematical system which incorporates basic number theory2. A related result was subsequently demonstrated and generalized by Turing (1936): the famous halting problem. Turing showed that there exist processes and numbers which are not computable, where ‘computable’ means calculable via a mechanical procedure (an algorithm) from a given input. Here, it is important to notice the relation between a formal system as described above and Turing machines. They both start from some initial conditions (axioms and input data), they both carry out a finite number of predetermined ‘mechanical’ operations (mathematical/logical rules and algorithmic instructions), they both produce results (theorems and outputs), and they both lead to inherently undecidable statements (unprovable statements which are true, and incomputable numbers). This is reflected by a formal equivalence between computation and formal logic (as described in Penrose, 1994: 64-66). In the rest of the discussion we will use the words unprovable and incomputable interchangeably.
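To make the connection tangible, Turing’s diagonal argument can be stated in a few lines of code. The sketch below is our illustration, not part of Turing’s paper; the function `halts` is hypothetical by construction, since the contradiction it produces is precisely what shows that no such total decider can exist.

```python
# A minimal sketch of Turing's diagonal argument. The 'halts' decider is
# hypothetical: the contradiction constructed below shows that no such
# total, always-correct function can be implemented.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the program halts on the input."""
    raise NotImplementedError("No algorithm can decide this in general.")

def diagonal(program_source: str) -> None:
    """Does the opposite of whatever 'halts' predicts about it."""
    if halts(program_source, program_source):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Let d be the source code of 'diagonal' itself. If halts(d, d) returns
# True, then diagonal(d) loops forever; if it returns False, diagonal(d)
# halts. Either answer is wrong, so 'halts' cannot exist.
```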

The science of complex systems

There is quite a body of work which discusses the philosophical basis and nature of complex systems science. Seeking a deeper understanding of the science of complex systems, alternatives to the traditional scientific-reductionist approach have been proposed and explored (Mitchell, 2004; McKenzie & James, 2004). Several papers have gone so far as to address the complexity of complex systems science. In these papers, the processes of modelling and studying complex systems are examined by either explicitly or implicitly treating these processes as complex systems in their own right (Medd, 2001; Price, 2004; Cooksey, 2001). In many ways, this analysis of complex systems science parallels the (meta-)mathematical exploration of the foundations of mathematics and, as in meta-mathematical work, we must keep clear the distinction between the system being studied and the means of study. Clearly, advances along this line of inquiry have the potential to put complex systems approaches on a more robust footing, broaden the applicability of techniques, and conceivably make the analysis of such systems more straightforward.

There is a self-referential discourse in our attempts to understand how to “do” the science of complex systems which is maddeningly appealing. While using the structure and language of complex systems science (or something logically equivalent) is probably inescapable, it gives rise to a self-referential element which seems suspiciously analogous to the approach metamathematics takes with mathematics. This sort of approach opens the possibility that some Gödelization of the science of complex systems is lurking in the shadows even as we attempt to understand and classify these systems. Far from signalling a flaw in our reasoning, this may implicitly be one of the hallmarks of a complex system, and an indicator that we must be ready and willing to extend our systems.

Cilliers (2002), perhaps, comes closest to addressing the fundamental issue in his paper “Why We Cannot Know Complex Things Completely”. He ties the process of using the science of complex systems to the fact that the construction of the meanings associated with the endeavour is itself a complex system. He then suggests that the systems we deal with operate within boundaries and limits and that, since a system “can only make representation in terms of its own resources [...] it is difficult to see how any intervention in the dynamics of the system can take place.” He goes on to discuss the notion of a limit to knowledge as a means of avoiding what seems an inescapable determinism in the ‘knowledge’ in the system, which must be constructed from within. This was precisely the goal of the mathematical constructivists in the late nineteenth century, and to them it seemed that a true statement must inescapably be derivable from the axioms. If we take the position that the systems which we consider (either complex systems or, indeed, the science of complex systems) possess at least the properties of simple number theory (as nearly every mathematical model will), then we have proof that there will be statements in the system which are true, but can never be reached while staying within the bounds of formal manipulation. It may seem a very tenuous connection to make between Gödel’s theorem and philosophical statements about the nature of the science of complex systems, but recall that Gödel’s ingenious proof rested on just such a bridge between the language of mathematical logic and numbers. The symmetry between the study of the basic structure of mathematics in the language of mathematics and the study of complex systems science in the language of complex systems is striking.

When we say emergent, could we actually mean incomputable?

Here we carry out a thought experiment. Suppose we have a mathematical/physical system which is consistent: we can assume that some physical relationships are robust enough that we may include them in our mathematical structure as axioms (that is, we take them to be true3) and that these rules do not undermine the consistency of the system. Now suppose we imagine some physical process which we (magically) know to be incomputable within this system. Our purpose here is not to actually present such a process (or even to assert that such processes must exist), but rather to tease out some of its consequences.

We form an extended mathematical system which adds the physical law as an axiom to the original, fundamental mathematical/physical system. There is an important point here: Gödel says that our system cannot be both complete and consistent. If our law is inconsistent with the underlying system, then we cannot necessarily make assertions about what must be present (apart from the obvious inconsistencies). For the sake of the thought experiment, we will suppose that we have chosen our physical law carefully and that it is consistent with the rest of the system.

We take this extended system to represent our ‘physics’, that is to say, our scientific apparatus; as Gödel’s theorems indicate, the system may exhibit physical laws which are true but not provable, that is, true but not deducible from the basic ‘physics’ we employed. We cannot necessarily say that a given system will exhibit statements (laws) which are directly related to the new axiom or axioms, or even that it will exhibit physical manifestations of these statements, but Gödel provides an avenue by which such properties may appear.

How would this system as a whole (including its true but not provable physical laws) look to us?

  1. We would recognise different levels of representation, one including the very basic axioms and others containing increasingly complex statements resulting from the application of the transformation rules;

  2. We would not be able to understand how some derived physical laws originate from the initial ‘physics’ (because they are not provable), still less predict their existence. These physical laws would look novel to us;

  3. Since they are physical laws, these statements would carry apparent causal power; they would look causal to us, and since we cannot see how they originate from the basic ‘physics’, their causality would appear inherent and autonomous. In fact, this causality results from the basic ‘physics’ (which is indeed enough to determine all higher levels’ features), but in ways we cannot unravel.

Basically, these physical laws would look ‘emergent’ to us, since they satisfy the characterizations commonly used in defining emergence. They would appear to transcend reduction because we are unable to comprehend their formal link to the basic axiomatic physical laws. However, this (like their causal power) is merely apparent. Their properties are inherent in the basic ‘physics’ we started from, but in ways which are not deducible/computable in our formal system.

The traditional way to address emergent processes is to study and describe the different levels separately and, most of the time, independently, by looking for the laws which best describe the dynamics of each level in isolation. In this way, quantum mechanics describes sub-particle physics, chemistry describes molecular processes, Newtonian mechanics describes macroscopic physics, and so on through biology, ecology, sociology and geology, up to relativity theory and cosmology. We ‘know’4 that these systems are nested in a Russian doll fashion, and we can describe each doll separately, but not their nesting. Along these lines, Shalizi (2001) and Rabinowitz (2005) propose information-theoretic definitions of emergent levels of representation. These are, in our opinion and to our knowledge, the most developed approaches to this problem. Shalizi and Shalizi (2004) in particular give a numerical recipe for finding the most efficient level at which to study an emergent system, based on measures of system predictability and complexity. The most important limitation of these approaches is that they cannot discriminate between causality5 and correlation. This would make little difference if we merely wanted to observe and describe a phenomenon, say in the fashion of the natural historians of the 19th century. If we wish to manipulate or even engineer for emergence, however, then we need to understand causal relations better in order to exert control over them. The obvious question is whether we can describe how the emergent levels arise.
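To make the idea of choosing a level of description concrete, the toy sketch below (our construction, not Shalizi and Shalizi’s actual algorithm) estimates how hard the next observation is to predict at two coarse-grainings of the same noisy process; informally, the ‘most efficient level’ is one where this conditional entropy is low relative to the complexity of the description.

```python
# Toy stand-in for a level-selection recipe (not Shalizi & Shalizi's actual
# algorithm): estimate the entropy of the next symbol given the previous k
# symbols at two descriptions of the same process, and prefer the level at
# which prediction is easier.
import random
from collections import Counter
from math import log2

def next_symbol_entropy(seq, k=3):
    """Estimate H(next symbol | previous k symbols) from block frequencies."""
    blocks = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k))
    ext = Counter(tuple(seq[i:i + k + 1]) for i in range(len(seq) - k))
    n = sum(ext.values())
    h = 0.0
    for block, count in ext.items():
        p_joint = count / n                    # P(block of k+1 symbols)
        p_cond = count / blocks[block[:-1]]    # P(next symbol | k-block)
        h -= p_joint * log2(p_cond)
    return h

random.seed(0)
# Micro level: a square wave of half-period 5, corrupted by 10% bit flips.
micro = [((t // 5) % 2) ^ (random.random() < 0.1) for t in range(5000)]
# Macro level: majority vote over successive blocks of five micro steps.
macro = [int(sum(micro[i:i + 5]) >= 3) for i in range(0, len(micro), 5)]

print(next_symbol_entropy(micro))  # higher: noise obscures the regularity
print(next_symbol_entropy(macro))  # much lower: the coarse level is more lawful
```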

Does incomputability exist in Nature?

Since Galileo claimed that the “language of Nature’s book is mathematics”6, it has been assumed that natural processes (physical laws) are computable7. More recently, an increasing body of literature has begun to question this assumption (Kauffman, 2000; Penrose, 1994; Calude, et al., 1995; Cooper & Odifreddi, 2003; Moore, 1990, 2000; Rosen, 2001). Here it is useful to discriminate among different kinds of incomputability. Fundamental limits to our ability to understand and model Nature arise from a number of sources which are well known to both the scientific and non-scientific community, among which we include sensitivity to initial conditions (which leads to chaos), the inherent randomness of quantum processes, and measurement limitations due to Heisenberg’s principle. Closely related to these is the incomputability discussed by Kauffman in Investigations, namely our inability to pre-state the initial conditions of certain problems8. As Penrose points out, there is a fundamental difference between these kinds of incomputability and that derived from Gödel’s theorem9 (see also Moore, 1990). In the formal system scenario described above, there are no dynamics (not even a concept of time!), no missing information, no undetermined initial conditions, no inaccuracy in the description of the transformation rules. Does this sort of incomputability exist in Nature? Penrose, Calude et al. and Kellett suggest it does, but the issue is surely still open to debate10. Unfortunately, this question is often disregarded as irrelevant in applied science (Cooper & Odifreddi, 2003); we follow Aaronson (2005) in believing that it deserves more attention, since the potential for scientific breakthroughs could be enormous. In the following, we discuss some potential consequences of the conjecture we proposed above, namely that there may be emergence which arises from incomputability inherent in the system we are modelling.

Some corollaries

It is interesting to discuss some consequences which would arise if our conjecture is correct:

  1. Reduction is Nature’s only currency, but it is unable to fully explain Nature to us. There are physical laws which are indeed merely the consequences of basic axioms, but these basic axioms are not sufficient for us to understand the laws themselves11;

  2. There may be (emergent) behavior which cannot be studied via classical computer simulation, since it is not accessible to classical computational tools; this contradicts a large portion of the literature on emergence;

  3. Standard scientific experimental procedures may not be able to detect emergent processes.

The first two statements are straightforward. The third one requires some clarification. The scientific method requires that experiments be reproducible. This implies that an experiment needs to follow a quite detailed and rigorous procedure in order to be replicated by different observers under inevitably slightly different experimental settings. Basically, an experiment is reduced to an algorithm (Stannett, 2003); consequently, scientific experimentation suffers the very same limitations as formal logic and computing systems, and is thus, by itself, unable to detect truly emergent processes unless it has access to super-Turing input. It seems that the very strength of the scientific method, that is, its unique ability to produce objective, reproducible and rigorous statements by following precise measurement and logical procedures, backfires on its very purpose by denying access to some members of the class of processes which we instinctively define as emergent. An important question which arises in this regard is: under what conditions is our own involvement in an experiment sufficient to raise its computational power to a level which deals with this problem? How much does it take to make our experiments super-Turing or super-Gödelian (Wiedermann & Leeuwen, 2002)?

Breaking the computational barrier

There are models of computation which are not necessarily equivalent to Turing machines. The basic notion of how “powerful” a machine (or model of computation) might be is based on the size of the set of languages which can be accepted by the machine. Thus some systems may be beyond the representational ability of a particular model of computation, but not beyond that of another. These alternatives may make models of many systems more accessible, but they still cannot resolve the fundamental uncertainty raised by Gödel’s Theorem: they still contain the basic number theory which gives rise to Gödel’s result.
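A toy example (ours, for illustration only) of what ‘accepting a larger class of languages’ means: the language of strings of the form aⁿbⁿ lies provably beyond any finite automaton, yet a machine with unbounded memory, here approximated by an ordinary counter, accepts it easily.

```python
# Toy illustration of computational power as language acceptance: no finite
# automaton accepts { a^n b^n }, but unbounded memory (a counter standing in
# for a Turing machine's tape) makes it trivial.
def accepts_anbn(s: str) -> bool:
    """Accepts exactly the strings a^n b^n, for n >= 0."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:            # an 'a' after a 'b': reject
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:         # more b's than a's so far: reject
                return False
        else:
            return False          # alphabet is {a, b} only
    return count == 0

print(accepts_anbn("aaabbb"), accepts_anbn("aabbb"))  # True False
```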

Graça and Costa (2003) explore the nature of general purpose analog computers (GPACs), the continuous analogue of the Turing machine. They propose a continuous-time GPAC which, while sacrificing some of the generality of Shannon’s original machine in order to exclude undesirable configurations, maintains its significant properties. The notion of an analog computer has a great deal of appeal, since so much of what we model is inherently continuous in nature. The basic conceptual components of a GPAC map quite readily into the usual toolbox of an analytic modeler. MacLennan (2004) takes the approach to its logical extent and derives a mathematical representation of a model of continuous computation on a state-space which is continuous in all its ordinates (including time). MacLennan’s treatment yields a model of computation quite different from traditional Turing machines and substantially different from the GPAC of Graça and Costa.
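To give a flavor of what GPAC components do (a crude digital sketch under our own simplifying assumptions, not Graça and Costa’s construction): the machine’s basic boxes are constants, adders, multipliers and integrators, and wiring two integrators in a feedback loop generates the sine function, a classic GPAC-generable function.

```python
# Sketch of GPAC-style analog computation: two integrators wired in feedback
# realize y' = z and z' = -y, whose solution is (sin t, cos t). The explicit
# Euler step is a crude digital stand-in for the continuous-time machine.
def gpac_sine(t_end: float, dt: float = 1e-5) -> float:
    y, z = 0.0, 1.0                    # initial conditions: sin(0), cos(0)
    for _ in range(int(t_end / dt)):
        y, z = y + dt * z, z - dt * y  # each update is one 'integrator' box
    return y

print(gpac_sine(1.5707963))            # ~1.0, i.e. sin(pi/2)
```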

Fuzzy Turing machines (Wiedermann, 2000, 2004), for example, provide super-Turing computational power: machines can be constructed which accept a larger class of languages than a traditional Turing machine is capable of accepting. These less traditional approaches to computation may make accessible some of those emergent systems which are inaccessible to ordinary algorithmic computation. However, we are still left considering the possibility that there are complex systems which arise from Gödelian truths, and can only be studied by stepping outside the system.

Turing never claimed that his definition of computation encompasses all systems in which computation may occur. He imagined an abstract machine which, under restricted conditions, can access superior computational power (in the form of an ‘Oracle’) when faced with specific parts of a computation it cannot perform. Following Penrose’s argument (Penrose, 1994), the very fact that Nature displays super-computational power (as he maintains), while highlighting the limits of formal logic and classic computability, also shows in principle that processes which surpass those limits may be available, though it should be noted that these arguments may have more appeal by analogy than by robustness (Feferman, 1996). The obvious questions are what these processes might look like and whether we can employ them productively12.

In Turing’s (1936) seminal work, the computer he discussed was an abstract concept, not an actual physical machine. Similarly, several authors have contemplated ideal abstract machines (hyper-machines) which could in principle break classic computational barriers (Ord, 2002; Aaronson, 2005). As of today, none of these machines has been built, nor does it seem likely that any will be built anytime soon. More down-to-earth approaches, however, look more promising. In a series of papers (Verbaan, et al., 2004; Leeuwen & Wiedermann, 2003, 2001a, 2001b, 2000), van Leeuwen, Wiedermann and Verbaan show formally that agents interacting with their environment have computational capabilities comparable to Turing machines with ‘advice’, a milder form of Oracle. There are a number of reasons why interacting agents can cross the classic computational barrier: they run indefinitely (as long as the agent is alive), they continuously receive input from a (potentially infinite) environment and from other agents (unlike a classic machine, for which the input is determined and fixed at the beginning of the calculation), they can use the local environment to store and retrieve data, and they can adapt to the environment. None of these features in isolation provides super-Turing computability but, taken together, they confer computational power superior to that of a classical machine. In particular, the agents’ adaptivity to their environment means that the ‘algorithm’ within the agents (their program) can be updated constantly and, in Leeuwen and Wiedermann’s paper of 2003, it is shown how super-Turing computability can arise from the very evolution of the agents. Also, the traditional distinction between data, memory and algorithm does not apply in an interactive machine, with the result that the computational outcomes are more dynamic and less easily predicted (Milner, 1993). Finally, a number of conjectures have been proposed in recent decades over the possible super-computational power of the human brain (Kellett, 2005; Penrose, 1989), and Gödel himself conjectured about this later in his life (Tieszen, 2006). Could a human interacting with a classic computer provide some sort of Oracle behavior? Could these systems, possessing super-Turing computability, be used to model, if not understand, incomputable emergence? Could this be the way forward to understanding emergence more generally? Intriguingly, could systems like these already sit on our desks?
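A minimal sketch (our illustration, far simpler than the formal models in the papers cited above) may help fix the idea of ‘advice’: a classical program plus one externally supplied bit per input length can decide a unary language that no unaided Turing machine decides, because all the uncomputable content lives in the advice the environment supplies.

```python
# Sketch of computation with advice. The advice table is hypothetical and by
# design uncomputable: think of advice[n] as encoding "the n-th Turing
# machine halts on empty input". Given the table the program is trivial;
# the super-Turing power comes entirely from outside the algorithm.
from typing import Dict

def decide_with_advice(unary_input: str, advice: Dict[int, bool]) -> bool:
    """Decide membership of 1^n in an (otherwise undecidable) unary
    language, using one advice bit per input length n."""
    if set(unary_input) - {"1"}:
        raise ValueError("input must be a string of 1s")
    return advice[len(unary_input)]

# Purely illustrative values: a genuine advice table could not be generated
# by any algorithm, only received from an environment or 'Oracle'.
sample_advice = {0: False, 1: True, 2: False, 3: True}
print(decide_with_advice("111", sample_advice))   # True, per the sample table
```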

Today, human-computer interactions are standard in a large number of applications. These are usually seen as enhancing human capabilities by providing the fast computational resources available to electronic machines. Should we see the interaction in the opposite direction, as humans enhancing the computational capabilities of electronic machines? Leeuwen and Wiedermann (2000) speculate that personal computers, connected via the web to thousands of machines worldwide, receiving inputs via various sensors and on-line instructions from users, are already beyond classic computers. Sensors now routinely monitor many aspects of the environment and are even installed on animals in the wilderness (Simonite, 2005). Can we envisage a network computing system in which agents (computers) interact with the environment via analog sensors, receive data from living beings, and take instructions from humans to deal with unexpected situations?

Further considerations

The purpose of this discussion is not to propose a new definition of emergence, nor a taxonomy of complex systems. Despite the fact that the subject we address is fairly theoretical, our aim is pragmatic. We are not interested in defining what emergence ‘is’. Rather, we suggest a new direction of research to address a class of processes which may normally be labelled as emergent and which so far have evaded formal analysis. This is not the first time the concepts of emergence and incomputability have been jointly discussed (Cooper & Odifreddi, 2003; Penrose, 1994; Kauffman, 2000; Darley, 1994; Goldstein, 2002), but to our knowledge a clear relation, and a possible direct approach, have not been proposed. It is reasonable to ask why we should show any optimism, or even a pragmatic interest, in tackling a problem which is, by definition, logically and computationally intractable. Our first reason lies in the apparent ease with which incomputability arises. As discussed above, interaction with an unpredictable environment and adaptability seem to be enough to evolve super-Turing computability in simple agents with classic computational capabilities, and this process seems to be further enhanced by the agents’ interaction and information exchange (Leeuwen & Wiedermann, 2003). Second, this seems to support the conjecture that the human brain has super-Turing capabilities. Third, viewed within the perspective of Gödel’s theorem, incomputability and computability seem to come together, in an inseparable fashion13: designing a set of axioms and transformation rules of sufficient complexity carries incomputability as a natural consequence. In other words, it seems impossible to conceive of computation without incomputability. Finally, it has also been noticed (Bickhard, 2000; Laughlin, 2005; Atay & Jost, 2004) that emergent processes are robust. Despite the fact that they depend on properties of lower levels, emergent processes are robust to small variations and errors at such levels14. This has led to the suggestion that ultimate causal power does not belong to causal laws, but to the organisation of matter (Laughlin, 2005) and processes (Bickhard, 2000). This robustness seems at odds with the ‘other’ kinds of incomputability, which may be responsible for chaotic and unstable processes: namely, incomplete descriptions of the system and sensitivity to initial conditions. If emergence is such a robust process, could it itself be harnessed as a means of furthering our computation?

So, what does all of this mean for the study of emergence and complex systems in general? We happily pursue models of all sorts of systems, relatively comfortable in the knowledge that we are approximating a system. As long as we can control the size of the error in our approximations, we remain relatively content. This is the practical side of the analysis of complex systems. As participants in a very large complex system, we hope to be able to predict, or at least understand, our interactions with other component systems and as much of the aggregate system as we can. There is very real survival value in being able to foresee the state of the system. However, in the way that the abstract consideration of non-Euclidean geometry opened the door to a number of different approaches in physics and improved our models of the way the universe may work, so the abstract, impractical side of complex systems science needs to address some basic problems to smooth the path of the practical models. It seems likely that there are systems which we are unable to model well either because the properties they evince are mathematically inaccessible, because modelling the system is algorithmically impossible, or because we are unable to apprehend the true state of the system even though its dynamics are understood. Systems which fall into the first category will remain difficult to model until our mathematics is capable of dealing with them: in this context, the ball is firmly in the metamathematician’s court. The second category is a limitation based on our model of computation, and there are alternative models which may be helpful. Practically, it suggests that we should pursue alternatives to traditional digital computers and the traditional model of computation as an adjunct in our attempts to model and understand systems. The third category is almost certainly the largest. Strictly speaking, such systems are no more ‘incomputable’ or ‘inaccessible’ to us than a needle in a haystack is impossible to find. Practically, we are still looking for a strong enough magnet.

We conclude with a note about randomness, which further justifies the need for deep enquiry into these problems. Chaitin (1993) shows that incomputability also carries complete randomness. As before, this sort of randomness is not linked to incomplete information and, like formal incomputability, seems to have a more fundamental nature. Since we currently read Nature via a computable language (and experiment via computable means), we are left to wonder how much of what we assume to be intrinsic randomness actually arises from the limitations of the language we use. Could extending these meta-mathematical enquiries to the physical world have the potential to change the way we perceive several Natural processes?
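For the interested reader, the incomputable, algorithmically random real at the heart of Chaitin’s result (his halting probability; see Chaitin, 1997) has a compact standard definition, reproduced here for reference:

```latex
% Chaitin's halting probability for a prefix-free universal machine U:
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}, \qquad 0 < \Omega_U < 1.
% Knowing the first n bits of Omega_U would settle the halting problem for
% every program of length at most n, so its binary expansion is incomputable
% and its bits are algorithmically random.
```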

Notes

  1. In Gödel’s original work, basic number theory (arithmetic) was used and the results can be extended to more complex axioms and rule sets.

  2. For any arbitrary string t which is unprovable in S, we can extend the system S to some system in which t is provable, but this new system will have its own set of unprovable strings … and thus, it would seem, there is no escape!

  3. Clearly, the ‘truth’ values of a mathematical axiom and of experimentally defined physical laws are very different. Here we take the pragmatic view that this choice is the best available in our scientific enquiry and that it is indeed the way (physical) science is carried out. See also footnote 12, below.

  4. Crutchfield (1994a) gives a beautiful description of how agents discover structures and laws in their environment at different levels of complexity and different levels of representation.

  5. In the information-theoretic language used by Shalizi (2001), the word ‘causal’ is used frequently, but in the automata-theoretic sense of Shalizi and Shalizi (2004). Here we use it as Pattee (1997a) does, in a sense that would allow an observer to intervene in the causal process and consequently exert control over its future behavior.

  6. “Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures without which it is humanly impossible to understand a single word of it; without these one is wandering in a dark labyrinth” (Galileo, 1623).

  7. It is often remarked that all known physical laws are computable. This statement carries an underlying tautology, since our current understanding and use of physics relies on and implies computability.

  8. Kauffman (2000) refers explicitly to the impossibility of defining ‘a priori’ the state space of the biosphere and consequently our inability to compute its evolution. This is closely related to the fundamental incompressibility of the initial conditions of chaotic processes (p. 117), which results in apparent randomness when finite precision is imposed upon them (see Crutchfield & Feldman, 2003, for a discussion of the effect on observations induced by sub-optimal modelling).

  9. A very simple approximation of Penrose’s argument might be “a chaotic system can be coded on a computer, so it must be computable”. Despite the fact that the result of the computation will inevitably be imprecise, the statistics of the result will still represent a ‘typical’ possible outcome.

  10. Interestingly, this is closely related to the similarly open debate on why Mathematics is so efficient at describing Nature and the philosophical dilemma of whether it is a ‘natural’ language we discover or an ‘artificial’ language we develop.

  11. Notice the difference between this claim and the two sides of the standard debate on reduction: a) reduction can explain all workings of Nature and one day we will confirm this; and b) reduction cannot explain all workings of Nature and another concept is needed.

  12. It is interesting to notice that Gödel believed in a strong analogy between mathematics and natural science. Mathematics should be studied in much the way scientists study Nature, and the choice of the fundamental mathematical axioms should be based not only on their intuitive appeal but also on the benefit they provide to the development of a theory (Chaitin, 2000: 89-94). Somewhat similarly, Chaitin (1997) supports ‘Experimental Mathematics’ (pp. 22-26, 29, 30), according to which mathematicians should approach mathematics the same way physicists approach physics, via experimentation and statistical inference.

  13. As incomputable real numbers seem to arise naturally from computable ones via Cantor diagonalization arguments (see Chaitin, 1997: 9-11).

  14. Small atomic imperfections do not change the macroscopic rigidity of a metal bar; the actions of a single New Yorker only very rarely noticeably affect New York’s everyday life; our cells are completely replaced every few days without changing our personality, appearance or metabolism.

Acknowledgments

We would like to thank our reviewers, whose comments contributed greatly to this work and directed us to many useful sources, particularly Torkel Franzen’s book, which we hope has steered us away from the worst of the logical mires or, as Franzen puts it (with respect to eating hotdogs, anyway), “... [from] making a disgusting spectacle of ourselves” (Franzen, 2005). This research was carried out as a part of the CSIRO Emergence Interaction Task, http://www.per.marine.csiro.au/staff/Fabio.Boschetti/CSS emergence.htm

References

Aaronson, S. (2005). “NP-complete problems and physical reality,” http://arxiv.org/abs/quant-ph/0502072.

Andersen, P.B., Emmeche, C., Finnemann, N.O. and Christiansen, P.V. (eds.) (2000). Downward Causation: Minds, Bodies and Matter, ISBN 9788772888149.

Atay, F. and Jost, J. (2004). “On the emergence of complex systems on the basis of the coordination of complex behaviors of their elements,” Working Paper 04-02-005, Santa Fe Institute, Santa Fe, http://www.santafe.edu/research/publications/wpabstract/200402005.

Bedau, M. (1997). “Weak emergence,” in J. Tomberlin (ed.), Mind, Causation and World, ISBN 9780631207931, pp. 375-399.

Bickhard, M.H. (2000). “Emergence,” in P.B. Andersen, C. Emmeche, N.O. Finnemann and P.V. Christiansen (eds.), Downward Causation: Minds, Bodies and Matter, ISBN 9788772888149, pp. 322-348.

Boden, M. (1994). Dimensions of Creativity, ISBN 9780262023689.

Borwein, J. and Bailey, D. (2004). Mathematics by Experiment: Plausible Reasoning in the 21st Century, ISBN 9781568812113. Also Experimental Mathematics Website, http://www.experimentalmath.info.

Calude, C., Campbell, D.I., Svozil, K. and Stefanescu, D. (1995). “Strong determinism vs. computability,” in W. Depauli-Schimanovich, E. Koehler and F. Stadler (eds.), The Foundational Debate: Complexity and Constructivity in Mathematics and Physics, ISBN 9780792337379, pp. 115-131.

Campbell, D.T. (1974). “Downward causation in hierarchically organized biological systems,” in F.J. Ayala and T. Dobzhansky (eds.), Studies in the Philosophy of Biology, ISBN 9780520026490, pp. 179-186.

Chaitin, G. (1993). “Randomness in arithmetic and the decline and fall of reductionism in pure mathematics,” Bulletin of the European Association for Theoretical Computer Science, ISSN 0252-9742, 50: 314-328.

Chaitin, G. (1997). The Limits of Mathematics: A Course on Information Theory and Limits of Formal Reasoning, ISBN 9789813083592.

Chaitin, G. (2000). “A century of controversy over the foundations of mathematics,” in C. Calude and G. Paun (eds.), Finite Versus Infinite: Contributions to an Eternal Dilemma, ISBN 9781852332518, pp. 75-100.

Cilliers, P. (2002). “Why we cannot know complex things completely,” Emergence, ISSN 1521-3250, 4(1/2): 77-84.

Cooksey, R.W. (2001). “What is complexity science? A contextually grounded tapestry of systemic dynamism, paradigm diversity, theoretical eclecticism and organizational learning,” Emergence, ISSN 1521-3250, 3(1): 77-103.

Cooper, S.B. and Odifreddi, P. (2003). “Incomputability in nature,” in S.B. Cooper and S. Goncharov (eds.), Computability and Models: Perspectives East and West, ISBN 9780306474002, pp. 137-160.

Crutchfield, J.P. (1994a). “The calculi of emergence: Computation, dynamics and induction,” Physica D, ISSN 0167-2789, 75: 11-54.

Crutchfield, J.P. (1994b). “Is anything ever new? Considering emergence,” in G. Cowan, D. Pines and D. Melzner (eds.), Complexity: Metaphors, Models and Reality, ISBN 9780738202327, pp. 479-497.

Crutchfield, J.P. and Feldman, D. (2003). “Regularities unseen, randomness observed: Levels of entropy convergence,” Chaos, ISSN 1054-1500, 13(1): 25-54.

Darley, V. (1994). “Emergent phenomena and complexity,” in R. Brooks and P. Maes (eds.), Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems, ISBN 9780262521901, pp. 411-416.

Emmeche, C., Koppe, S. and Stjernfelt, F. (2000). “Levels, emergence and three versions of downward causation,” in P.B. Andersen, C. Emmeche, N.O. Finnemann and P.V. Christiansen (eds.), Downward Causation: Minds, Bodies and Matter, ISBN 9788772888149, pp. 322-348.

Feferman, S. (1996). “Penrose’s Gödelian argument,” PSYCHE: An Interdisciplinary Journal of Research on Consciousness, ISSN 1039-723X, 2(7), http://psyche.cs.monash.edu.au/v2/psyche-2-07-feferman.html.

Franzen, T. (2005). Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse, ISBN 9781568812380.

Galilei, G. (1623). Il saggiatore (The Assayer), Accademia dei Lincei, Rome.

Gensler, H.J. (1984). Gödel’s Theorem Simplified, ISBN 9780819138699.

Gödel, K. (1931). On Formally Undecidable Propositions of Principia Mathematica and Related Systems, ISBN 9780486669809 (1992).

Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning, ISBN 9780201157673.

Goldstein, J. (2002). “The singular nature of emergent levels: Suggestions for a theory of emergence,” Nonlinear Dynamics, Psychology and Life Sciences, ISSN 1090-0578, 6(4): 293-309.

Graça, D.S. and Costa, J.F. (2003). “Analog computers and recursive functions over the reals,” Journal of Complexity, ISSN 0885-064X, 19: 644-664.

Heylighen, F. (1991). “Modeling emergence,” World Futures: Journal of General Evolution, ISSN 0260-4027, 31: 89-104.

Holland, J. (1998). Emergence: From Chaos to Order, ISBN 9780738201429 (1999).

JPL-NASA (2006). Mars Rover Website, http://marsrovers.jpl.nasa.gov/home/index.html.

Kauffman, S. (2000). Investigations, ISBN 9780195121056.

Kellett, O. (2005). A Multifaceted Attack on the Busy Beaver Problem, Masters Thesis, Rensselaer Polytechnic Institute, Troy, New York, http://www.cs.rpi.edu/~kelleo/busybeaver/downloads/OwenThesis.pdf.

Laughlin, R. (2005). A Different Universe: Reinventing Physics from the Bottom Down, ISBN 9780465038282.

Laughlin, R. and Pines, D. (2000). “The theory of everything,” Proceedings of the National Academy of Sciences, ISSN 0027-8424, 97(1): 28-31.

Leeuwen, J. van and Wiedermann, J. (2000). “The Turing machine paradigm in contemporary computing,” in B. Engquist and W. Schmid (eds.), Mathematics Unlimited: 2001 and Beyond, ISBN 9783540670995, pp. 1139-1156.

Leeuwen, J. van and Wiedermann, J. (2001a). “A computational model of interaction in embedded systems,” Technical Report UU-CS-2001-02, Institute of Information and Computing Sciences, Utrecht University.

Leeuwen, J. van and Wiedermann, J. (2001b). “Beyond the Turing limit: Evolving interactive systems,” in L. Pacholski and P. Ruzicka (eds.), SOFSEM 2001: Theory and Practice of Informatics, ISBN 9783540429128, pp. 90-109.

Leeuwen, J. van and Wiedermann, J. (2003). “The emergent computational potential of evolving artificial living systems,” AI Communications, ISSN 0921-7126, 15: 205-215.

MacLennan, B. (2004). “Natural computation and non-Turing models of computation,” Theoretical Computer Science, ISSN 0304-3975, 317: 115-145.

Maddy, P. (1997). Naturalism in Mathematics, ISBN 9780198235736 (2002).

McKenzie, C. and James, K. (2004). “Aesthetics as an aid to understanding complex systems and decision judgment in operating complex systems,” Emergence: Complexity & Organization, ISSN 1521-3250, 6(1-2): 32-39.

Medd, W. (2001). “What is complexity science? Toward an ‘ecology of ignorance’,” Emergence, ISSN 1521-3250, 3(1): 43-60.

Milner, R. (1993). “Elements of interaction: Turing Award lecture,” Communications of the ACM, ISSN 0001-0782, 36(1): 78-90.

Mitchell, S. (2004). “Why integrative pluralism?” Emergence: Complexity & Organization, ISSN 1521-3250, 6(1-2): 81-91.

Moore, C. (1990). “Unpredictability and undecidability in dynamical systems,” Physical Review Letters, ISSN 0031-9007, 64: 2354-2357.

Ord, T. (2002). “Hypercomputation: Computing more than the Turing machine,” http://arxiv.org/pdf/math.LO/0209332.

Pattee, H.H. (1995). “Evolving self-reference: Matter, symbols and semantic closure,” Communications in Cognition-Artificial Intelligence, ISSN 0773-4182, 12: 9-27.

Pattee, H.H. (1997a). “Causation, control and the evolution of complexity,” in P.B. Andersen, C. Emmeche, N.O. Finnemann and P.V. Christiansen (eds.), Downward Causation: Minds, Bodies and Matter, ISBN 9788772888149, pp. 322-348.

Pattee, H.H. (1997b). “The physics of symbols and the evolution of semiotic control,” Proceedings of the Workshop on Control Mechanisms for Complex Systems: Issues of Measurement and Semiotic Analysis, Las Cruces, New Mexico, Dec. 8-12, 1996.

Penrose, R. (1989). The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics, ISBN 9780192861986 (2002).

Penrose, R. (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness, ISBN 9780195106466 (1996).

Price, I. (2004). “Complexity, complicatedness and complexity: A new science behind organizational intervention?” Emergence: Complexity & Organization, ISSN 1521-3250, 6(1-2): 40-48.

Rabinowitz, N. (2005). Emergence: An Algorithmic Formulation, Honours Thesis, University of Western Australia, Perth.

Rosen, R. (1985). “Organisms as causal systems which are not mechanisms: An essay into the nature of complexity,” in R. Rosen (ed.), Theoretical Biology and Complexity: Three Essays on the Natural Philosophy of Complex Systems, ISBN 9780125972802, pp. 165-203.

Rosen, R. (2000). Essays on Life Itself, ISBN 9780231105118.

Rosen, R. (2001). Life Itself: A Comprehensive Inquiry into the Nature, Origin and Fabrication of Life, ISBN 9780231075657 (2005).

Shalizi, C. (2001). Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata, PhD Thesis, http://www.cscs.umich.edu/~crshalizi/thesis/.

Shalizi, C.R. and Shalizi, K.L. (2004). “Blind construction of optimal nonlinear recursive predictors for discrete sequences,” in M. Chickering and J. Halpern (eds.), Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference, ISBN 9780974903903, pp. 504-511.

Simonite, T. (2005). “Seals net data from cold seas,” Nature, ISSN 0028-0836, 438: 402-403.

Stannett, M. (2003). “Computation and hypercomputation,” Minds and Machines, ISSN 0924-6495, 13(1): 115-153.

Tieszen, R. (2006). “After Gödel: Mechanism, reason and realism in the philosophy of mathematics,” Philosophia Mathematica, ISSN 0031-8019, 14(2): 229-254.

Turing, A.M. (1936). “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, ISSN 0024-6115, s2-42: 230-265.

Verbaan, P.R.A., van Leeuwen, J. and Wiedermann, J. (2004). “Lineages of automata: A model for evolving interactive systems,” in J. Karhumäki, H. Maurer, G. Paun and G. Rozenberg (eds.), Theory is Forever, ISBN 9783540223931, pp. 268-281.

Wiedermann, J. (2000). “Fuzzy computations are more powerful than crisp ones,” Technical Report V-828, Institute of Computer Science, Academy of Sciences of the Czech Republic.

Wiedermann, J. (2004). “Characterizing super-Turing computing power and efficiency of classical fuzzy Turing machines,” Theoretical Computer Science, ISSN 0304-3975, 317: 61-69.

Wiedermann, J. and van Leeuwen, J. (2002). “The emergent computational potential of evolving artificial living systems,” AI Communications, ISSN 0921-7126, 15(4): 205-215.

