The Death of the Expert?

Kurt A. Richardson1 & Andrew Tait2

Introduction: Traditional Expertise

The concept of expertise and the associated experts themselves play a central role in modern organizations. Whenever a problem arises, an almost automatic response triggers us to seek out the relevant expert who, given their superior credentials, will solve the problem at hand. The entire consultancy industry relies heavily on this unquestioned institutional reflex. Billions of dollars are paid to those we regard as ‘experts’. But what exactly do we mean by the term? The Oxford English Dictionary offers the following:

  1. One who is expert or has gained skill from experience; and,
  2. One whose special knowledge or skill causes him to be regarded as an authority.

Terms such as ‘experience’, ‘knowledge’, and ‘authority’ are used so often these days that their meanings are rarely, if ever, questioned. We assume that ‘experience’ is a legitimate way to develop understanding; we assume that ‘knowledge’ is something tangible that can be shifted from context to context without loss; we assume that ‘authority’ resides with those in the ‘know’. Beneath these assumptions lie rather simplistic notions of how the world operates, and of how we may learn of these operations.

The Aims of this Article

In this article we argue that our contemporary notions of ‘experience’, ‘knowledge’, ‘authority’, and therefore ‘expertise’ and ‘expert’, are outdated and inappropriate for the complex, globalized (i.e., connected) times in which we find ourselves. They are artifacts of the reductionist view of reality that regards the universe as the ultimate well-oiled, exquisitely complicated machine.

The aim of this article is to review these basic notions from a complexity science perspective. Our focus will be on the concept of knowledge and the inescapable limits that are placed upon us as a direct result of the universe’s inherent complexity. We will argue that a commodity-based view of knowledge is inadequate and often wholly inappropriate given the requirements of today’s organizations. From this revision of the concept of knowledge an extended notion of the ‘expert’ will be developed.

The Commoditization of Knowledge

Contemporary philosophers regard as ‘Modern’ the perspective that there is an absolute reality that might be absolutely understood through method (such as the ‘scientific method’). ‘Modernists’, those who espouse the ‘Modern’ view of knowledge, believe that science, or for that matter any sense making, is simply a matter of map making (e.g., Wilber, 2000). We look at the world as it appears to our perceptions (which are presumed to be largely unbiased), map it, and within that map knowledge can be found. It’s a seductive promise—map the world and understanding shall be yours.

As this understanding is taken to be absolute and perfect, control is possible. If we know how something works then we can predict how it will evolve. Therefore, we can change it (because we can predict how our actions will affect its behavior) so that it behaves in a predictable way that meets our needs and desires. With knowledge comes the power to control, and with control comes reward. This is the promise that a Modernist view of knowledge offers.

In a world dominated by (linear) mathematics, quantification is of prime importance. If something can be counted, then one can measure the success of an action designed to change the amount of that something. As the songwriter Roger Waters sings, “It all makes perfect sense expressed in pounds, shillings and pence.” And, because the effect of my actions is apparently predictable, I can design specific actions to affect specific quantities in a specific way. But this is not the only benefit of ‘reductionist’ knowledge. Scientific understandings are generalizations. This means that such knowledge is applicable in a wide range of circumstances or contexts. So if I have experience of a wide range of situations then I have the knowledge to deal with many future situations I have yet to experience. This is essentially how we go about our lives (e.g., Kelly, 1955). Scientific knowledge is transferable from one context to another. It does not matter that the new context might be slightly different from previous contexts we have experience of—our knowledge need only be slightly adjusted to match the new context. Let’s look at a specific example.

Most readers will be familiar with Newton’s famous second law:

The acceleration of an object is directly proportional to the resultant force acting on it and inversely proportional to its mass. The direction of the acceleration is in the direction of the resultant force.

Or, in mathematical terms: F (the vector force acting on the object) = m (the mass of the object) × a (the acceleration of the object, i.e., the rate of change of velocity with time). This equation (up until Einstein’s discovery of Relativity, anyway) was assumed to be true in every context involving an object with a force acting upon it. For example, let us consider a 20 kg rock being pushed along by a child who is applying a force of 10 N. Neglecting friction, it is a simple matter to calculate the acceleration of the rock resulting from the child’s 10 N effort. Using F = m × a we find that a = 0.5 m/s², which means that after 10 seconds the rock would be moving at 5 meters per second, or about 11 miles per hour. Clever stuff!

Now let’s consider a coin falling from a six-story window. What would be the force due to gravity acting on the coin? This would appear to be a very different situation, or context, from the previous example. But according to Newton’s second law it is much the same as the first. All we need to know is the mass of the coin, 2 grams say, and the acceleration due to gravity, which on Earth has been found to be about 9.8 m/s². With this data in hand it is a trivial matter to calculate the force exerted on the coin by the Earth: 0.002 kg × 9.8 m/s² ≈ 0.02 newtons. So we not only have numeric accuracy, we have knowledge transferability as well.
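To make this transferability concrete, here is a minimal sketch (ours, not part of the original argument; the function names are merely illustrative) that applies the same one-line law to both of the contexts above:

# A minimal sketch of the two worked examples above. The same law, F = m * a,
# is applied in two apparently very different contexts; only the numbers change.

def acceleration(force_n, mass_kg):
    """Newton's second law rearranged: a = F / m."""
    return force_n / mass_kg

def force(mass_kg, acceleration_ms2):
    """Newton's second law as stated: F = m * a."""
    return mass_kg * acceleration_ms2

# Context 1: a child pushes a 20 kg rock with a force of 10 N.
a_rock = acceleration(10.0, 20.0)   # 0.5 m/s^2
v_after_10s = a_rock * 10.0         # 5 m/s (about 11 mph)

# Context 2: a 2 g coin falls under gravity (g = 9.8 m/s^2 on Earth).
f_coin = force(0.002, 9.8)          # about 0.02 N

print(f"rock: a = {a_rock} m/s^2, v after 10 s = {v_after_10s} m/s")
print(f"coin: F = {f_coin:.4f} N")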

Despite the obvious differences between these two contexts the same ‘law of motion’ applied, and it was a trivial matter to adjust the law to the features of each context. Newton’s second law is plainly useful for a wide range of contexts. This is the case for all scientific knowledge. All the laws of science are generalizations extracted from observations made of a wide range of different (but apparently similar at a deeper level) contexts. They are statistical truths—averages—that are taken to be absolute for many different occasions. They are true not only in space, i.e., where events happen, but also in time, i.e., when they happen. So Newton’s laws of motion will be as valid tomorrow as they are today and as they were yesterday, whether the event takes place in your hometown, on the other side of the world, or even on the other side of the universe (we tend to assume).

If (scientific) knowledge was not transferable then life would be very much more complicated and difficult than it is. What would happen if I went to the doctor’s and found that my physiology was so unique and different from others’ that the doctor had to start from scratch in developing knowledge of how I functioned? What if we all functioned completely differently? Every time the doctor met a new patient he would be unable to use any of his prior knowledge based on other patients to help attend to his new patient’s ailments. He would have to develop a unique knowledge base for every patient. Of course, we are all unique in some ways (which is why drugs tend to have a range of effectiveness in treating any particular individual’s ailments), but generally we all function in more or less the same way; blood is pumped around our body by our heart, we develop a sore throat and fever when we catch the cold virus, we all suffer brain damage when our brains are starved of oxygen for a sufficiently long duration, etc. There is a vast body of knowledge concerning the human body which is essentially true for everybody. Medical knowledge is indeed transferable across a wide range of contexts—contexts that are different, but somehow similar.

What about organizations? What about management? Though some might argue as to whether or not management is a science, Management Science is still taught at business schools, and the knowledge acquired is treated in much the same way as scientific knowledge, i.e., as absolute and transferable. What value would there be in calling in a management consultant if all of his or her knowledge was incongruent with your needs? There would be no value at all. The fact that the knowledge a consultant has accumulated throughout his/her career does have (apparently) some value to your company means that similarities do exist between different business contexts. We won’t dwell upon management knowledge any longer as it is our contention that much of it is in fact not ‘scientific’ and therefore not easily transferable. Nonetheless, many organizational boards still rely on outside management consultants, or business experts, in the belief that their knowledge is relevant to them, i.e., is transferable.

How would a mechanic fix your car? How could you drive a variety of different vehicles? How would an electrician repair your TV? How could you make food? How could you find your way to the pub? How could you make sense of the words on this page? None of this would be possible unless knowledge were transferable between contexts. Cars are mostly based on combustion engine technology, and they don’t evolve into apples when you’re not looking! They mostly operate using either a manual or automatic gearbox with an accelerator and a brake. Most (modern) TVs are based on LCD technology. Most recipes will give the same results whether it is a Monday or a Tuesday. The way to the pub does not depend upon whether there is a full moon or not. And although interpretations of words do vary, we can at least make sense of reasonably well-constructed sentences.

With specialist scientific knowledge there seems to be no limit to what we humans can achieve. The latest and possibly greatest technological achievement is the computer. We can construct unimaginably complicated devices that operate in predictable ways to perform mind-boggling calculations, render intricate and (almost) believably realistic graphics, model the circulation of blood through the heart, etc.

In the ‘Modern’ world everything is seen as analyzable and in some way amenable to science. Scientific knowledge can be obtained about anything we choose to consider. And scientific knowledge is the only knowledge worth having. There is no substitute for the certainty and absolute power of science. At least that’s what we’re supposed to believe.

The fact that many different contexts seem to share common features has led many scientists to believe that there does in fact exist a set of features common to all possible contexts. This set of features, or laws, is referred to as the “Theory of Everything”. Everything in the entire universe is apparently reducible to a small collection of fundamental rules. From these fundamental rules we obtain the vast diversity apparent in the observable universe. This is the ultimate reductionist dream—a theory…some scientific knowledge…that can account for it all. Such a theory would truly be the thought in the Creator’s mind (if you believe that sort of thing). Some believe that such a discovery would be the end of science itself. Others believe that science will continue, but that it will simply be a case of filling in the details left by the “Theory of Everything”. Still others believe that science has already reached the point at which it continues only to fill in the details left by the Standard Model (Horgan, 1997)—easily the most successful model in the history of humankind.

But if scientific knowledge is so wonderful, why on earth are we witnessing more human suffering than ever before? It has been suggested that our knowledge has resulted in technologies that, when used, might (however inadvertently) destroy the environment we rely on for our ongoing survival as a species (e.g., CFCs and leaded fuel, the use of both of which was radically reduced once their dire environmental effects were realized). If knowledge allows us to control and construct whatever world we might desire, why have we, frankly, done such a poor job of it?

To understand the limitations of scientific knowledge we need to understand the assumptions that the (reductionist) scientific worldview is built upon.

The common feature across the various scientific knowledge domains is that the objects of interest are pretty much stable over time (or at least they are presumed to be). When stability dominates, the reductionist approach to knowledge generation appears to be a wholly appropriate approximation to make. It works—there’s no point in denying its power. But what about unstable, or at least non-stable, systems? The human mind, for example—the way we view the world continually evolves and changes. Anything vaguely human-like seems to slip through the scientific net. The sciences that have been regarded as successful are the natural sciences—physics and chemistry mainly, but also some branches of biology. Why haven’t the social, or human, sciences such as psychology, sociology, linguistics, etc., achieved the respected status of the natural sciences? The presumption—e.g., by proponents of rational choice theory (Allingham, 2002)—has always been that the two branches would benefit equally from the application of the scientific method. This would imply that the objects of interest are at heart the same, but evidence is increasingly being offered that the two areas of interest are quite different (Shneiderman, 2008).

The scientific method has repeatedly demonstrated its inefficacy when it comes to addressing social issues. Transport models fail to predict future road usage. Traditional economics models… well… they just don’t seem to predict anything much (Economist, 2002)!3 Social models fail to predict episodes of rioting (Economist, 2003). Scientific knowledge of social systems often seems worse than useless, creating more trouble than it’s worth. The principal assertion of this article is that science has thus far developed to deal very well indeed with complicated systems, whereas many of the problems that seem intractable to reductionist scientific methods are in fact better described as ‘complex’. We also suggest that the prevalent notion of expertise today is associated with knowledge of complicated systems rather than complex ones. It is to this complicated/complex distinction that we now turn.

A Brief Introduction to Complexity Thinking4

What if human organizations really were just complicated rather than complex? The simple answer to this question is that an all-embracing “Theory of Management” would almost certainly be possible. This would make management very easy indeed, as there would be a book of theory that would tell the practicing manager what to do in any given context. The means of achieving effective and efficient organizational management would no longer be a mystery. But what is it about the concept of ‘complicated’ that makes this scenario plausible? Why has the possibility of a final (scientific) management theory not been realized yet, given the millions of man-hours and published pages devoted to the search? Why does approaching organizations as ‘complex’ rather than ‘complicated’ deny us this possibility?

A very common (but incomplete) description of a complex system is that such systems are made up of a large number of nonlinearly interacting parts. By this definition the modern computer would be a complex system. A modern computer is crammed full of transistors, all of which respond nonlinearly to their input(s). Despite this ‘complexity’ (sic) the average PC does not show signs of emergence or self-organization; it simply processes (in a linear fashion) the instruction list (i.e., a program) given to it by its programmer. Even the language in which it is programmed is rather uninteresting. Although there are many programming languages, they are commensurable with each other. A line of code in C# can be translated into Visual Basic .NET (VB.NET) relatively easily—the one line of C# code may require more lines of VB.NET code to achieve the same functionality, but it can be done in the vast majority of cases (and when it can’t, one of the languages is often extended to fill such a ‘commensurability gap’). The universal language into which all such languages can be translated without loss is called ‘logic’ (more accurately, Boolean, or even binary, logic). More often though, if a programmer wants to use a language very close to the universal language of computing, Assembly is used, as this at least contains concepts that are more easily read by mere mortal programmers (although the domain knowledge—microelectronics—needed to program in Assembly is a major requirement). This is then translated (without loss) into machine code (which is based on Boolean logic)—writing sophisticated programs directly in the language of the 0s and 1s of Boolean logic is nigh on impossible. The computer cannot choose the way it interprets the program, it cannot rewrite the program (unless it is programmed to do so in a prescribed manner), and it cannot get fed up with running programs and pop to the pub for a swift pint! So what is it about the modern computer that makes it a complicated system rather than a complex one?

The critical element is feedback. It is the existence of nonlinear feedback in complex systems that allows for emergence, self-organization, adaptation, learning and many other key concepts that have become synonymous with complexity thinking—and all the things that make management such a challenge. It is not just the existence of feedback loops per se that leads to complex behavior. These loops must themselves interact with each other. Once we have three or more interacting feedback loops (which may be made up from the interactions of many parts), accurately predicting the resulting behavior via standard analytical methods becomes problematic (at best) for most intents and purposes. In a relatively simple complex system containing, say, fifteen parts/components there can be hundreds of interacting feedback loops, even if there are only a few interconnections between neighboring parts. In such instances the only way to get a feel for the resulting dynamics is through simulation, which is why the computer (despite its rather uninteresting dynamics) has become so important in the development of complexity thinking. We say that the prediction of overall system behavior from knowledge of its parts is intractable. Basically, absolute knowledge about the parts that make up a system and their interactions provides us with very little understanding indeed regarding how that system will behave overall. Often the only recourse we have is to sit back and watch. In a sense the term complex system refers to systems for which, although we may have a deep appreciation of how they are put together (at the microscopic level), we may be completely ignorant of how the resulting macroscopic behavior comes about—i.e., complexity is about limits to knowledge, or our inevitable ignorance. Without this understanding of causality, planning for particular outcomes is very difficult indeed. In the computer (which we will now class as a complicated system) causality is simple, i.e., low dimensional—few (interacting) feedback loops (although there are many millions of connections). In complex systems, causality is networked, making it very difficult indeed, if not impossible, to untangle the contribution each causal path makes. It is hard enough to grasp the possibilities that flow from a small group of people, let alone the mind-boggling possibilities that might be generated from a large multi-department organization. Maybe this is why a major part of management tends to be suppressing all these possibilities so that one individual might begin to comprehend what remains—departmentalization is an obvious example of a complexity reduction strategy.
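The point about interacting feedback loops and prediction can be seen in even a tiny simulation. The following sketch is our own illustration (the ring of coupled logistic maps, and all parameter values, are our assumptions rather than anything from the sources cited here): it runs two copies of a three-unit nonlinear feedback system whose initial states differ by one part in a billion, and within a few dozen steps the two runs disagree completely, which is why watching (simulating) so often beats analysis:

# Three nonlinear units coupled in a ring, so their feedback loops interact.
# Two runs differing by 1e-9 in one initial value soon diverge completely.

def step(state, coupling=0.3, r=3.9):
    # Each unit is a chaotic logistic map nudged by its neighbor in the ring,
    # so every unit sits inside several interacting feedback loops.
    n = len(state)
    return [(1 - coupling) * r * x * (1 - x) + coupling * state[(i + 1) % n]
            for i, x in enumerate(state)]

a = [0.11, 0.42, 0.73]
b = [0.11 + 1e-9, 0.42, 0.73]   # a one-part-in-a-billion difference
for t in range(61):
    if t % 10 == 0:
        gap = max(abs(x - y) for x, y in zip(a, b))
        print(f"t = {t:2d}, max difference = {gap:.2e}")
    a, b = step(a), step(b)

Knowing the parts (one line of update rule each) tells us almost nothing about which trajectory will actually be realized; only running the system reveals it.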

Another unexpected property of complex systems is that there exist stable abstractions (it is these stable abstractions that science is so adept at formalizing into laws), not expressible in terms of the constituent parts, that themselves bring about properties different from those displayed by the parts.

When recognizing the products of emergence, we are abstracting away from the description in terms of parts and interactions, and proposing a new description in terms of entities or concepts quite different from those parts and interactions. We ignore certain features in favor of paying attention to other features that comprise a recognizable pattern, while retaining our awareness that the ‘lower’-level parts and interactions somehow naturally result in the ‘higher’-level parts and interactions. Regarding an organization as a collection of interacting departments rather than a collection of individual people is a common application of this idea.

“Emergent” entities have novel properties in relation to the properties of the constituent parts—e.g., whole departments do not act just like individual people, and ‘teamness’ is not the same as ‘person-ness’. What is even more interesting is that these supposed abstractions can interact with the parts from which they emerged—a process known as downward causation (Emmeche et al., 2000).

In specially idealized complex systems such as cellular automata (Wikipedia, Cellular Automata) the parts are very simple indeed, and yet they still display a great deal of emergent phenomena and dynamical diversity. Complex systems which contain more intricate parts are often referred to as complex adaptive systems (CASs), in which the parts themselves are described as complex systems. The parts of CASs contain local memories and have a series of detailed responses to the same, as well as different, contexts/scenarios. They often have the ability to learn from their mistakes and generate new responses to familiar and novel contexts. Because of this localized decision-making/learning ability such parts are often referred to as (autonomous) agents. There is a profound relationship between simple complex systems (SCSs), i.e., complex systems comprised of simple parts, and CASs, i.e., complex systems comprised of intricate agents. The Game-of-Life (Wikipedia, Game-of-Life), a particularly well-known SCS, shows how a CAS can be abstracted from, or emerges out of, an SCS! Intuition would tell us that a CAS is simply a more intricate SCS. The Game-of-Life5 demonstrates that our intuition is, as is often the case in complexity thinking, too simplistic.
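For readers who have not met it, the Game-of-Life is small enough to sketch in full. The implementation below is ours (the sparse set-of-live-cells representation and the starting ‘glider’ pattern are illustrative choices): each cell obeys the same trivial birth/survival rule, yet the glider, a little structure that walks diagonally across the grid, emerges at a level of description nowhere present in the rule itself:

from collections import Counter

def life_step(live):
    # One generation of Conway's Game-of-Life; `live` is a set of (x, y)
    # coordinates of live cells. A dead cell is born with exactly 3 live
    # neighbors; a live cell survives with 2 or 3.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for gen in range(5):
    print(f"generation {gen}: {sorted(cells)}")
    cells = life_step(cells)   # after 4 generations the glider reappears,
                               # shifted one cell diagonally

Nothing in life_step mentions gliders; ‘glider’ is an abstraction we impose on the pattern of parts, which is exactly the recognition of emergent products discussed above.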

Complexity and incompressibility

Cilliers (2005) introduces the idea of incompressibility:

We have seen that there is no accurate (or rather, perfect) representation of the system which is simpler than the system itself. In building representations of open systems, we are forced to leave things out, and since the effects of these omissions are nonlinear, we cannot predict their magnitude (p. 13)6.

It is this concept of incompressibility that leads us away from a managerial monism—a definitive theory of management—toward a managerial pluralism (assuming organizations are complex rather than merely complicated), in which many theories co-exist, each with its own unique strengths and weaknesses. Restating Cilliers, the best representation of a complex system is the system itself, and any representation of the system will be incomplete and, therefore, can lead to incomplete (or even just plain wrong) understanding. One must be careful in interpreting the importance of incompressibility. Just because a complex system is incompressible, it does not follow that (incomplete) representations of the system cannot be useful—otherwise how could we have knowledge of anything, however limited? Incompressibility is not an excuse for not bothering. This is rather fortunate, because otherwise the only option available, once we accept the impossibility of an ultimate theory, would be to have no theory at all—not a very satisfactory outcome (and contrary to what experience would tell us). We think it is better to know something that is wrong than to know nothing at all. Knowing something that is wrong, and knowing how and why it is wrong, is better still.

Building on the work of Bilke and Sjunnesson (2002), Richardson (2005) recently showed how Boolean networks (which are a type of SCS) can be reduced/compressed in such a way as not to change the qualitative character of the uncompressed system’s phase space, i.e., the compressed system had the same functionality as the uncompressed system. If nothing were lost in the compression process, then Cilliers’s claim of incompressibility would be incorrect. However, what was lost was a great deal of detail about how the different attractor basins (regions of qualitatively different system behavior) are reached. Furthermore, the reduced systems are not as tolerant of external perturbations as their unreduced parents. This evidence would suggest that stable and accurate—although imperfect—representations of complex systems do indeed exist (and hence explains why and how science can work at all). However, in reducing/compressing/abstracting a complex system certain significant details are lost. Different representations capture different aspects of the original system’s behavior. We might say that, in the absence of a complete representation, the overall behavior of a system is at least the sum of the behaviors of all our simplified models of that system. Richardson (2005) concludes that:

Complex systems may well be incompressible in an absolute sense, but many of them are at least quasi-reducible in a variety of ways. This fact indicates that the many commentators suggesting that reductionist methods are in some way anti-complexity—some even go so far as to suggest that traditional scientific methods have no role in facilitating the understanding of complexity—are overstating their position. Often linear methods are assessed in much the same way. The more modest middle ground is that though complex systems may indeed be incompressible, most, if not all, methods are capable of shedding some light on certain aspects of their behavior. It is not that the incompressibility of complex systems prevents understanding, and that all methods that do not capture complexity to a complete extent are useless, but that we need to develop an awareness of how our methods limit our potential understanding of such systems.

In short, all this is saying is that we can indeed have knowledge of complex organizations, but that this knowledge is approximate and provisional. This may seem like common sense, but it is surprising how much organizational knowledge is acted upon as if it were perfectly correct.
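To make the objects of this discussion concrete, here is a minimal random Boolean network in code. This is our illustrative sketch, not Richardson’s actual reduction procedure: it simply enumerates every state of a small network and reports which attractor each state falls into; the attractor basins it prints are precisely the ‘qualitatively different behaviors’ that any compressed representation would need to preserve:

import itertools, random

random.seed(4)
N = 5  # five nodes, each reading two other nodes through a random Boolean rule
inputs = [random.sample(range(N), 2) for _ in range(N)]
rules = [{k: random.randint(0, 1) for k in itertools.product((0, 1), repeat=2)}
         for _ in range(N)]

def step(state):
    # Synchronous update: each node applies its Boolean rule to its two inputs.
    return tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

def attractor(state):
    # Iterate until the trajectory revisits a state, then return a canonical
    # label for the cycle (the attractor) it has fallen into.
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    cycle = seen[seen.index(state):]
    return min(cycle)

basins = {}
for s in itertools.product((0, 1), repeat=N):
    basins.setdefault(attractor(s), []).append(s)
for label, states in sorted(basins.items()):
    print(f"attractor {label}: basin of {len(states)} states")

A compressed version of such a network might reproduce this basin structure exactly while discarding detail about the routes into each basin and about robustness to perturbation, which is the sense in which reduction succeeds and fails at the same time.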

The suggestion that there are multiple valid representations of the same complex system is not new. The complementary law (e.g., Weinberg, 1975) from general systems theory suggests that any two different perspectives (or models) of a system will reveal truths regarding that system that are neither entirely independent nor entirely compatible. More recently, this has been stated as: a complex system is a system that has two or more non-overlapping descriptions (Cohen, 2002). We would go as far as to add “potentially contradictory”, suggesting that for complex systems (by which we really mean any part of reality we care to examine) there exists an infinitude of equally valid, non-overlapping, potentially contradictory descriptions. Maxwell, in his analysis of a new conception of science, asserts that:

Any scientific theory, however well it has been verified empirically, will always have infinitely many rival theories that fit the available evidence just as well but that make different predictions, in an arbitrary way, for yet unobserved phenomena (Maxwell, 2000).

The result of these observations is that to have any chance of even beginning to understand complex systems we must approach them from many directions—we must take a pluralistic stance. This pluralist position provides a theoretical foundation for the many techniques that have been developed for group decision making, bottom-up problem solving, and distributed management (Richardson et al., 2005): any method that stresses the need to synthesize a wide variety of perspectives in an effort to better understand the problem at hand, and how we might collectively act to solve it.

Having explored the essential differences between what we mean by complicated and complex, we can now consider the implications for the nature of knowledge and the notion of expertise.

The Problematization of Knowledge

The ‘modern’, some might say ‘linear’, concept of knowledge assumes that the systems about which we claim to have knowledge are ‘complicated’. As such, models/representations of such systems can be built (maps can be constructed) that, although they may not be complete, do not lead to radically incorrect understanding. This is because small differences between what actually exists and what we think exists are assumed to be irrelevant, i.e., small mistakes in the map-making process lead to small mistakes in our understanding, and so can easily be corrected when the time comes. It also follows that the objects of our representations map neatly to the objects of reality, albeit with some small (readily ignored) discrepancies. This is also related to the assertion that knowledge of one context is valid in another similar context. Although two similar contexts are not identical, it is assumed that the differences are irrelevant; that an abstraction exists in which the two contexts are exactly the same, as in the earlier examples of the rock being pushed and the coin being dropped. In this way knowledge of one context can be extended to many contexts, which leads to the notion of domain expertise. Such an expert is able to transfer their knowledge to many different contexts because of the assumption that there exists an abstraction that makes them all equivalent to each other in a way that allows the application of the same knowledge.

‘Modernist’ knowledge also assumes that causality is simple, i.e., if a change in object A results in a change in object B we have a tendency to assume that such a correlation points to a causal mechanism—‘A caused B to...’ So not only do the objects A and B exist as such—because our model says so—they also affect each other directly (and trivially).

Perhaps a more important implication of ‘modernist’ knowledge is that because there is such a close relationship between reality and our models of reality, we can, through the application of rigorous methods, determine unambiguously which model best describes any circumstance. This results in the claims for ‘objective’ knowledge, i.e., the right, absolutely true way of understanding some feature of reality. If truth can be argued to be absolute then any knowledge an ‘expert’ has is unquestionable. This is at the heart of the ‘modernist’ notion of ‘expertise’; an expert is someone who has spent considerable time and effort in learning/discovering unquestionable (and universal) truths about a particular domain of interest. As such, if you have a problem that falls into the domain of a particular expert then you simply give that person a call, and after some time during which the expert has determined the appropriate abstraction for your particular problem, he simply regurgitates his knowledge of that particular abstraction. You then use that knowledge to design an action (or simply choose from a set of predesigned actions that worked in similar scenarios). Then, after successfully performing that action, you move on to worry about the next problem… that always seems to follow so closely behind the first! It is as easy as that.

So ‘modernist’ knowledge is fixed, absolute, and transferable. And ‘modernist’ expertise is the ability to map contexts to such knowledge. This particular paradigm for knowledge and expertise is far from ineffective. There are many contexts that one can assume to be well described as linear and complicated, for which such knowledge is an invaluable tool supporting our ability to shape and affect such contexts so that they lead to desired outcomes.

But what if a context is best described as complex, or the manifestation of nonlinear causal processes?

Neo-Expertise

As already indicated above in the discussion of incompressibility, if it is assumed that the system of interest, or the context of interest, is complex, or is the emergent result of underlying complex (nonlinearly fed-back) processes, then there is no one abstraction or description capable of capturing all the details required to make perfect predictions about how the ‘affair of interest’ will unfold, or how our interventions might affect that ‘unfolding’. Furthermore, even a nearly perfect description may result in completely imperfect understanding, or at least understanding that is only useful for a certain restricted length of time. This is the result of small differences growing to dominate—small changes cannot simply be averaged out as they can for complicated systems. Our best understanding can be no more than approximate and time-limited. As already proposed, approaching a ‘complex affair of interest’ from more than one direction, a strategy known as ‘perspective-based pluralism’, can be used in conjunction with critical reflection to synthesize a problem-specific, time-limited map, rather than overlaying an existing map and force-fitting the ‘complex affair of interest’ to that map.

If we reduce the notion of reductionist expertise to mean no more than overlaying a limited number of pre-existing maps known to the ‘expert’ on to a particular context, then we can begin to see what neo-expertise might be in the light of complexity. A ‘neo-expert’ is an expert in custom map-making (rather than just map-matching), who recognizes that the potentially useful maps are not only those s/he is aware of. The word “making” in the previous sentence is most significant. The term highlights that a neo-expert is really a process expert, not a content expert—the process being the mechanism by which multiple perspectives are gathered, critiqued, and synthesized to inform decision-making. This process also includes mechanisms that recognize that the understanding informing any decision is limited, and that the implementation of any decision taken must be monitored in order to facilitate recognition of when that decision might be wrong, or when the usefulness of the ‘synthetic’ map has passed. So neo-experts are not only concerned with the process of producing context-specific understanding, but also with the care that must be taken in applying such understanding in the real world. This still means that the neo-expert has a central role to play in complex problem solving. But rather than being the source of the relevant domain-specific knowledge, they are there to bring together the ‘expertise’ of the many organizational stakeholders in a coherent fashion to facilitate the definition of the problem space, and the development of strategies to guide an organization, or department, or individual in a particular direction—a rather harder proposition than just supplying textbook-like knowledge. ‘Modernist Experts’ do our thinking for us, whereas ‘Neo-Experts’ help us think for ourselves.

Some readers may think that our “neo-expert” is the type of consultant who “borrows your watch to tell you the time”. This could not be further from the truth. As conceived, our neo-experts would bring a range of skills to a client organization.

Whereas ‘Modernist-Experts’ attempt to replicate successful patterns, ‘Neo-Experts’ attempt to create new successful patterns (or behaviors) for each intervention. The neo-expert may employ “modernist” expertise in the course of an intervention, but only in isolated pockets. These new patterns will be determined through close engagement with the client organization, and neo-experts will need to focus on the transfer of skills to their clients. As the organizational context is in continual flux, the “solution” must be continually monitored in case environmental changes render it impotent—or even dangerous. If the consultant fails to provide the organization with these monitoring skills, the client will become dependent on him.

This concept of “neo-expertise” brings to light one of the major weaknesses of management consulting. Many interventions are conceived as “one-shot” projects. The consulting organization comes in, suggests some changes, these are adopted and the client presses on. However, the recommendations are invariably made within the context of a given business climate. Rarely are the assumptions underlying a corporate strategy regularly and formally tested. However, the neo-expert, with his focus on the context, is constantly butting up against these assumptions—questioning the efficacy of a strategy as soon as it is put into practice. While this may be seen as creating continuous instability, it is, in fact, recognizing the realities of doing business in the twenty-first century. Good neo-experts will attempt to minimize the adverse impact of change while accepting that businesses must evolve to survive.

It is tempting to propose a methodology that would systematically determine how the neo-expert should go about this process of multi-perspective synthesis. However, there are endless ways to exploit pluralism, each with its own idiosyncrasies, so we prefer to point out that many good frameworks and methodologies already exist that can support the work of the budding neo-expert. Suggesting just one would leave us open to the charge of masquerading as ‘experts’ in the process of knowledge production! It is really quite remarkable that these existing frameworks and methodologies have been largely ignored by the complexity community: see, for example, Jackson & Keys, 1984; Flood, 1995; or Midgley, 2000. Complexity thinking and soft systems thinking, for instance, have a great deal in common.

Final Remarks

So does an increasingly connected world signal the death of the expert? Certainly not. There is still a major role for reductionist knowledge in the development of strategies for the management of complexity. However, exploring complex problem spaces requires a different kind of expertise than that which has traditionally been given priority. This neo-expertise is built on the skills needed to allow a group of stakeholders to ‘emergently’ arrive at a context-specific, limited but useful, understanding of their circumstances to enable them to act in order to achieve certain preferred outcomes more often than not. This facilitative role is very challenging, as anyone familiar with the process of facilitation will tell you—one article discusses this process as midwifery (McMorland & Piggot-Irvine, 2000). It is an approach to the development of understanding, and to decision-making, that also has profound implications for how any organization may operate; we have barely scratched the surface of these implications in this article. The traditional expert can still be a major contributor in this critical and pluralist process. The main change to their role is that their special type of knowledge is no longer regarded without question as the most important source of understanding in an evolving landscape of interactions and variations.

Footnotes

1 Exploratory Solutions, US

2 Decision Tools, UK

3 As economists are fond of saying, “Prediction is difficult—especially when it’s about the future.”

4 This section and the next (“Complexity and incompressibility”) are slightly amended versions of similarly named sections which previously appeared in Richardson (2008: 14-17). They have been included here to keep the current article relatively self-contained.

5 The Game-of-Life offers an entertaining way to learn a great deal about complex systems dynamics, and to begin to develop a deep appreciation for the systems view of the world.

6 This statement risks conflating the concept of incompressibility with the problem of identifying a bounded description of a complex system. These two concerns are not equivalent; the difficulty of bounding a particular system is not what incompressibility is about. Incompressibility derives from the interacting nonlinear feedback loops that exist even in well-bounded complex systems, i.e., a bounded complex system is still incompressible.

References

Allingham, M. (2002). Choice Theory: A Very Short Introduction, ISBN 9780192803030.

Bilke, S. and Sjunnesson, F. (2002). “Stability of the Kauffman model,” Physical Review E, ISSN 1063-651X, 65: 016129.

Cilliers, P. (2005). “Knowing complex systems,” in K.A. Richardson (ed.), Managing Organizational Complexity: Philosophy, Theory, and Application, ISBN 9781593113186, pp. 7-19.

Cohen, J. (2002). Posting to the Complex-M listserv, 2nd September.

Economist (2002). “The Crystal balls-up,” Economist, ISSN 0013-0613, Sep 26th.

Economist (2003). “Predicting riots,” Economist, ISSN 0013-0613, Aug 7th.

Emmeche, C., Köppe, S. and Stjernfelt, F. (2000). “Levels, emergence, and three versions of downward causation,” in Downward Causation: Minds, Bodies and Matter, P.B. Andersen, C. Emmeche, N.O. Finnemann, and P.V. Christiansen (eds.), ISBN 9788772888149.

Flood, R.L. (1995). “Total systems intervention (TSI): A reconstitution,” Journal of the Operational Research Society, ISSN 0160-5682, 46: 174-191.

Horgan, J. (1997). http://www.edge.org/documents/archive/edge16.html.

Jackson, M.C. and Keys, P. (1984). “Towards a system of systems methodologies,” Journal of the Operational Research Society, ISSN 0160-5682, 35: 473-486.

Kelly, G. (1955). The Psychology of Personal Constructs, ISBN 9780415037976 (1992).

Maxwell, N. (2000). “A new conception of science,” Physics World, ISSN 0953-8585, August: 17-18.

McMorland J. and Piggot-Irvine E. (2000). “Facilitation as midwifery: Facilitation and praxis in group learning,” Systemic Practice and Action Research, ISSN 1094-429X, 13(2): 121-138.

Midgley, G. (2000). Systemic Intervention: Philosophy, Methodology, and Practice, ISBN 9780306464881.

Richardson, K.A. (2004). “On the relativity of recognizing the products of emergence and the nature of physical hierarchy,” conference paper presented at the Second Biennial International Seminar on the Philosophical, Epistemological and Methodological Implications of Complexity Theory, January 7-10th 2004, Havana, Cuba.

Richardson, K.A. (2005). “Simplifying Boolean networks,” Advances in Complex Systems, ISSN 0219-5259, 8(4): 365-381.

Richardson, K.A., Tait, A., Roos, J. and Lissack, M.R. (2005). “The coherent management of complex projects and the potential role of group decision support systems,” in K.A. Richardson (ed.), Managing Organizational Complexity: Philosophy, Theory, and Application, ISBN 9781593113186, pp. 433-458.

Shneiderman, B. (2008). “Science 2.0,” Science, ISSN 0036-8075, 319 (5868): 1349-1350.

Weinberg, G. (1975). An Introduction to General Systems Thinking, ISBN 9780932633491 (2001).

Wikipedia (Cellular Automata). http://en.wikipedia.org/wiki/Cellular_automaton.

Wikipedia (Game-of-Life). http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life.

Wilber, K. (2000). A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality, ISBN 9781570628559 (2001).

Kurt A. Richardson, PhD is the CEO of ISCE Publishing, a publishing house that specializes in complexity-related publications, and is the CEO of Exploratory Solutions, a small company set up to develop software to support decision making in complex environments. Kurt also designs and develops application-specific integrated circuits for Orbital Network Engineering. He was also a Senior Systems Engineer for the NASA Gamma-Ray Large Area Telescope (now Fermi). Kurt’s current research interests include the philosophical implications of assuming that everything we observe is the result of complex underlying processes, the relationship between structure and function, analytical frameworks for intervention design, and robust methods of reducing complexity, which have resulted in the publication of over thirty journal papers and book chapters, and ten books. He is the Managing/Production Editor for the international journal Emergence: Complexity & Organization and is on the review board for the journals Systemic Practice and Action Research, Systems Research and Behavioral Science, and Tamara: Journal of Critical Postmodern Organization Science. Kurt is the editor of the recently published Managing Organizational Complexity: Philosophy, Theory, and Application (Information Age Publishing, 2005). Kurt is a qualified spacecraft systems engineer and has consulted for General Dynamics and NASA.

Andrew Tait is currently cofounder and Chief Technology Officer of Idea Sciences, a Virginia-based software and consulting firm specializing in the creative use of technology to improve organizational decision-making. During his career he has designed commercial, off-the-shelf solutions for strategic planning, performance improvement and conflict management. This has led to numerous consulting and training relationships with major commercial and government organizations. Prior to forming Idea Sciences, Andrew held various commercial (technology consulting), government (defense) and academic (business) positions. Andrew’s research interests include: decision-making, performance improvement, electronic voting, virtual communities, conflict management, visualization, and improving understanding of complex socio-technical systems.

