Emergence is not always ‘good’


Abstract

The concepts of emergence and collective intelligence are fascinating, and from their study might come good things. But neither is ‘good’ by definition, and we ought to be careful not to let our enthusiasm and interest lead us into speaking too casually about the benefits of ‘encouraging emergence’ or ‘developing collective intelligence’. We can find ourselves battling the emergent properties of a system, and working against its collective intelligence. This article explores an example from the field of social care to illustrate this. It also discusses some tentative ‘laws’ and some issues resulting from the positive nature of popular perspectives on emergence.


Introduction

The ideas of emergence and collective intelligence seem to be inherently attractive ones. When we read about ants solving a problem, or people being wiser as a group, we think in positive terms. That Wikipedia can even exist, never mind that it can sometimes be the best source of information on a subject, is surprising and wonderful. The thought that collective intelligence might be useful in finding a way forward on global warming is something worthy of detailed study.

From here it takes only one small step to a place where we are talking about how to encourage emergence and how to develop collective intelligence. We find ourselves thinking about these things as being inherently positive attributes of a system, particularly of a human one. Two of the four introductory paragraphs in Wikipedia’s entry on collective intelligence (20 August 2008), to which I’m referring for obvious reasons, specifically present this positive slant (the other two are neutral).

But to take this step is, I think, a huge mistake. The fact is that emergence and collective intelligence aren’t ‘good’ by definition. If good things can emerge, so can bad; and intelligence can be put to beneficial or to detrimental use. This article explores these points in more detail, examining some specific examples, suggesting some general conclusions, and discussing the consequences of the popular tendency to take a positive view of emergence.

I’m aware that some readers might resent any implication that they didn’t already know that emergence and collective intelligence can result in ‘bad’ as well as ‘good’—so I should be clear that I’m not necessarily presenting new knowledge here. We’ve known about the awkward ways in which systems work for a long time. I’m simply reacting to the manner in which emergence and collective intelligence tend to be discussed.

A Simple Example

I find this much-simplified example helpful as an introduction to this discussion.

An advice centre is staffed by passionate specialist workers. They individually reply quickly and efficiently to telephone queries. If they can answer the query directly they do so, and if not they immediately say so and refer the matter quickly to their colleagues. They each care deeply about getting the right replies sent to people.

We might expect that the emergent properties of this system—which is made up of passionate and efficient workers—would be positive. Unfortunately we all know that this isn’t how emergence works. Putting a group of efficient and passionate people together doesn’t necessarily create an organization which, in our dealings with it from outside, is efficient and passionate. When we look at our interaction with individual workers, we find we are dealt with efficiently and the worker’s passion is clear. But we may also find that we are passed repeatedly around the system, that our query is never actually answered, and that it takes a long time for us to work out that the centre does not have the expertise we need.

The property of being inefficient at replying to queries is an emergent one. It is one that arises at, and is best observed at, the organizational level. It’s clearly not a ‘good’ property.
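
The dynamic is easy to reproduce in a toy model. The short simulation below is a minimal sketch of my own, not something drawn from a real advice centre: the specialties, the number of workers, and the ‘refer to a random colleague’ rule are all assumptions made purely for illustration. Every individual response in the model is instant and conscientious, yet a query that falls outside the centre’s expertise is simply passed around and never answered, so the slowness is visible only at the level of the organization.

    import random

    # Toy model of the advice centre. Each worker answers a query instantly if it
    # matches their specialty and otherwise refers it straight on to a colleague.
    # Specialties, worker count and the referral rule are illustrative assumptions.
    SPECIALTIES = ["housing", "benefits", "debt"]
    WORKERS = [random.choice(SPECIALTIES) for _ in range(8)]

    def handle_query(topic, max_referrals=50):
        """Return the number of hand-offs before an answer, or None if never answered."""
        worker = random.randrange(len(WORKERS))
        for referrals in range(max_referrals):
            if WORKERS[worker] == topic:                # individually efficient: instant answer
                return referrals
            worker = random.randrange(len(WORKERS))     # individually efficient: instant referral
        return None                                     # the query just keeps circulating

    # A query the centre can answer is resolved after a few quick hand-offs...
    print("referrals for a 'benefits' query:", handle_query("benefits"))
    # ...but a query outside its expertise is passed around and never answered.
    print("referrals for an 'immigration' query:", handle_query("immigration"))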

A More Informative Example

A more in-depth example is required if we are to look at this properly—and I’ll refer to the area of work in which I specialize: supporting change within ‘care’ organizations. I believe that this example has much to teach us about collective intelligence and emergence more generally, and about what we miss if we tend to view these as inherently positive.

On the whole, people who are on the receiving end of care and support work need it because of some great difficulty in their lives. For a few, the provision of particular expertise (for instance pain relief), or help with the accomplishment of a physical task (for instance getting out of bed), is sufficient: solving this technical matter, on its own, frees them to have a full and rich life. But for many people the key issues aren’t these technical matters. Rather, they have problems like a lack of people in their life, low self-esteem, or narrow life experience. These things might have been caused by the specific technical problem (e.g., disability, illness, or addiction), but ‘solving’ this technical matter has little lasting effect on their quality of life. The effects of exclusion, devaluation and disempowerment are at least as debilitating as the original issues ever were—and the person becomes trapped in a situation where exclusion, disempowerment and devaluation positively reinforce each other.

Fortunately it is quite possible to solve this problem for individual people. Much has been written about how to do this, and on the whole the problem isn’t a particularly technical one. There are obvious difficulties to overcome—as in all work with people—and the work may well require creativity, skill, and tenacity, but even people who have never worked in this field can think of a host of solutions.

Unfortunately, when we look at how to make sure that our care and support systems do this for everyone, we come across much greater challenges. I want now to describe the way in which emergence and collective intelligence sit at the heart of these challenges.

On the whole our care and support services base everything they do on the technical issues a person faces—their disability, illness, or addiction, for example. For as long as anyone can remember, discussion has been taking place about how to change this. There have been repeated changes of legislation and government policy, and within organizations many rounds of consultation, policy change, structural change, training, and so on. Somehow it seems that the problem is intractable. There are, as there have always been, examples of good practice to be found if we look hard, and always the promise of real change on the horizon. Words change, structures change, stated values change, there are overall small improvements, but on the whole everything works in pretty much the same way as it has for decades.

Inevitably a systems/emergence approach is useful in understanding what is going on. In particular, we notice what for many is a startling idea: that it isn’t really in the interests of a care organization to tackle devaluation, exclusion and disempowerment, and that this fact influences how such an organization works. In simple and bold terms: the organization needs people to be lonely, devalued, and disempowered. These problems are why it exists, and the more people who have these problems the better. Of course at this point it is important that I acknowledge that individual staff and managers within an organization, and even individual projects or departments, don’t think this way. They are often hugely dedicated; they dream of making a real difference, and they do their very best within the system in which they work. However, it should be clear to readers who understand systems that what is in the interests of the system will be influential nonetheless.

Many people might believe that the individual goodwill of staff within these organizations will enable them to overcome the interests of the system. It is important, therefore, to spend a moment considering quite how powerful the interests of the system actually are.

Some of the main external pressures on care organizations arise from the public image of those people who are supported. If ‘people with mental health problems’ are generally seen firstly in terms of being a threat to society, then it is in the organization’s interest to respond to this. If people with Down’s syndrome are generally understood to be a bit like children, then organizations will find it difficult to work in such a way as to allow those they support to take real risks or have sexual relationships. The existence of the organization will be more secure if it works in a way which doesn’t ‘frighten the horses’—and this will be true even if society’s perceptions of the people it supports are utterly incorrect. Whatever other pressures or policies influence a care/support organization, we can’t get away from the fact that most of its income comes from society at large, through taxes (or an equivalent), and is dependent on political decisions (or an equivalent). Such an organization is, in a very direct way, working on behalf of society at large. It is unlikely that competing pressures—for instance an official policy demanding that people are included—will be anywhere near as powerful as the fundamental fact that the organization exists to do whatever society as a whole expects of it.

Unsurprisingly, in real world terms, the result is often an organization which ‘talks the talk’ of inclusion and empowerment, staffed by people doing their best to work on these issues, but which somehow manages to completely undermine its own efforts.

The point of providing this example should be clear. The system, the organization, works in a way which results in disempowerment, devaluation, and exclusion. This is an emergent property of the system. We can go further I think. The organization must have a collective intelligence. Why should we assume that this intelligence would be directed towards anything other than the interests of the system? To me it is clear that the collective intelligence of a care organization has this focus—and that it can do a very good job in working on the system’s behalf. Many of those who write about changing these organizations mention the need to work ‘under the system’s radar’ (or something equivalent). Some of these writers may be visualizing ‘the system’ as synonymous with ‘those in charge’ but I think that what they are responding to is the way in which such a system seems to be particularly creative and adaptive in fighting change. Personally, when I write in this context about ‘the system’ I’m thinking in terms of the collective intelligence of the whole system.

It is clear, at least in terms of particular desirable outcomes, that emergence and collective intelligence are not ‘good’ attributes of organizations like this.

Generalizing

I find that thinking about emergence and collective intelligence in this way is hugely helpful. It also bridges a gap between what I understand about complexity and the work on ‘change’ I have encountered in systems theory. It becomes particularly powerful once also connected to ideas like those about the ‘dangers’ inherent in attempting to lead ‘adaptive’ change. These are outlined by Ronald A. Heifetz and Marty Linsky (2002) in their book ‘Leadership on the Line,’ which provides a comprehensive list of the different ways in which a system can neutralise a change effort.

While these ideas are very helpful for me, they are much more difficult to convey in a short time to those with whom I work. One approach that can help is to try to summarize what we know, almost as a set of rules:

  • Emergence, and collective intelligence, can be powerful and are very interesting to study, but are not inherently ‘good’ or ‘bad’;

  • In human systems the system’s interests, and therefore its behavior and collective intelligence, may well be against us. This is particularly true in situations where we are facing an ‘adaptive’ challenge in trying to lead system change;

  • It seems that the emergent behavior of a human system (or systems) tends to maintain power imbalances between different groups of people. It is worth noting that it is in the interests of a group of powerful people to maintain that power, and that the group’s behavior (viewed as separate from the behavior of individuals) will respond to that.

Wider Issues

I have written this article because I have struggled to find other people writing in an easily accessible way on these specific matters. It may be that they are debated deep within the ‘complexity’ community—but from the outside I haven’t been able to detect this. That is unfortunate for my specific work—I imagine that at least to some extent I’ve been re-inventing the wheel—but I think that other people must also be excluded and that there are probably wider consequences of this.

The simplest problem is that people who would find ideas around change, emergence, and collective intelligence to be of immediate practical use are much less likely to encounter them—or that they only do so in a fleeting way when with someone like me for an hour or two. It is difficult to find any appropriate writing to recommend on this subject to a middle manager busy with day-to-day issues.

I believe that there are also deeper problems resulting particularly from the positive bias from which much of the discussion views emergence and collective intelligence. If we repeatedly use one particular set of examples to speak about collective intelligence, such as Wikipedia or prediction markets, these problems will be inevitable. We find ourselves creating theories based only on the perspective these provide. James Surowiecki, in The Wisdom of Crowds (2004), discusses the way in which diversity leads to a greater level of crowd wisdom. Others interested in collective intelligence make a leap to state that collective intelligence depends for its existence on diversity.

To some extent language begins to fail us here, but my understanding of collective intelligence is very different. Diversity may well add to the wisdom of a collective intelligence—but, in somewhat clumsy language, un-wise collective intelligence can still exist in a group without diversity.
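
The distinction can be made concrete with a back-of-the-envelope sketch. The numbers and the ‘estimate a quantity’ framing below are illustrative assumptions in the spirit of Surowiecki’s examples, not taken from his book. Both groups produce a collective estimate, so something collective is at work in each case; but only the diverse group, whose individual errors point in different directions and largely cancel, produces a wise one. The homogeneous group’s shared blind spot survives the averaging intact.

    import random

    random.seed(1)
    TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate (illustrative)

    def crowd_estimate(n, shared_bias, individual_spread):
        """Average of n individual guesses: a collective estimate exists either way."""
        guesses = [TRUE_VALUE + shared_bias + random.gauss(0, individual_spread)
                   for _ in range(n)]
        return sum(guesses) / n

    # Diverse crowd: no shared bias, widely scattered individual errors.
    diverse = crowd_estimate(n=200, shared_bias=0.0, individual_spread=30.0)
    # Homogeneous crowd: everyone shares the same blind spot, with little scatter.
    homogeneous = crowd_estimate(n=200, shared_bias=25.0, individual_spread=5.0)

    print(f"error of the diverse crowd's estimate:     {abs(diverse - TRUE_VALUE):.1f}")
    print(f"error of the homogeneous crowd's estimate: {abs(homogeneous - TRUE_VALUE):.1f}")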

In a similar vein, I recently came across an influential academic discussing how emergence is visible on the Internet specifically because of the large number of people involved. I see this as another example of such confusion. While Google or Wikipedia may indeed depend for their accuracy on the number of people involved, that is not proof that emergence appears only when many people are involved in a human system. Even an extremely small human system can have emergent properties. For instance, if I put only three ‘shy’ people in a room together, it is impossible for me to predict the character of the group which will emerge. As a consultant I am very familiar with the way in which a small group can exhibit a character very different from that which one would predict from the characters of the individuals who are members of it.
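
A toy version of the three-person room makes the same point; the interaction rule, thresholds and noise below are assumptions chosen only for illustration. Three identical ‘shy’ agents follow one simple rule: they become slightly bolder when they have just heard others speak and slightly more withdrawn after a silence. Re-running the same room gives noticeably different group characters run to run, even though nothing about any individual has changed. The character belongs to the interaction, not to the individuals.

    import random

    def run_meeting(seed, people=3, steps=40):
        """Three identical 'shy' agents; who speaks depends on recent group activity."""
        random.seed(seed)
        confidence = [0.2] * people          # everyone starts equally reluctant
        contributions = [0] * people
        for _ in range(steps):
            spoke = [random.random() < c for c in confidence]
            for i in range(people):
                if spoke[i]:
                    contributions[i] += 1
                # Hearing others speak emboldens; silence deepens the reluctance.
                if any(spoke[:i] + spoke[i + 1:]):
                    confidence[i] = min(0.95, confidence[i] + 0.15)
                else:
                    confidence[i] = max(0.05, confidence[i] - 0.05)
        return contributions

    # Same individuals, same rule, yet the 'character' of the room differs run to run.
    for seed in range(4):
        print(f"room {seed}: contributions per person = {run_meeting(seed)}")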

Perhaps we should state another ‘rule’ as follows:

  • The wisdom of the collective intelligence of a human system can depend on there being a large number of diverse views within it.

Conclusions

The first key point I have made in this article is that both emergence and collective intelligence seem, at least to me, to always be interesting attributes of a system, but not always to be positive ones. The second point is that it is a mistake for us to forget this. I have also laid out a number of ‘rules’ which seem to me to be useful to those working with a certain set of problems caused by emergence and collective intelligence.

To conclude, I would like to suggest that the closer involvement of practitioners like myself in discussions about emergence and collective intelligence would be a useful step forward. We have very practical experience of dealing with human systems at a system level, and useful ideas to contribute toward theoretical discussion as a result.

I would be delighted to receive any correspondence on this article (at enquiryRW@capacitythinking.org.uk or see www.capacitythinking.org.uk). Readers might also consider contributing accessible writing on these themes to the ‘change’ section of the collection of articles at www.isja.org.uk/.

References

Heifetz, R.A. and Linsky, M. (2002). Leadership on the Line: Staying Alive through the Dangers of Leading, ISBN 9781578514373.

Surowiecki, J. (2004). The Wisdom of Crowds, ISBN 9780385503860.

Wikipedia (20 August 2008) “Collective Intelligence,” http://en.wikipedia.org/wiki/Collective_intelligence.

