Hugo Letiche
University for Humanist Studies, NLD
The role of knowledge workers in our society is an increasing focus of press and academic attention. Letiche suggests that knowledge workers often both work in and create “McDonaldized” simulacra, i.e. spaces for action that are less than real. He argues that the very concept of organizing is challenged by the tensions implicit in the semi-ness of the semi-reality of subspaces. The arena for his argument is that of information technology. The language of his argument is that of identity, self, logic and activity—terms more often found in European academic debate than in American management practice. Forgive Letiche's use of academic literary forms. This world of emergence and cyborgs and of warfare with cognitivist (social) Darwinism may be a bit alien to some readers, but the argument and message will not be. In the semi-real spaces of managing, creativity is bought only at a large cost to others and managers find themselves needing to determine when that price is worth paying.
Do new forms of information and communication technology (ICT) jeopardize humanism? The thesis that new technology leads to new (social and individual) identity sounds liberating to some and dangerous to others.
A utopian debate centering on the effects of new information technologies on self and identity has been argued in both popular and academic media, primarily by cultural historians and theorists of (the sociology of) science (Benedikt, 1994; Crary and Kwinter, 1992; DeLanda, 1991; Gray, 1995; Haraway, 1991; Turkle, 1995; Slouka, 1996; Stone, 1995; Virilio, 1996). Its rendering here is formulated from the perspective of organizational studies. Cyborgism—i.e. computer- and multimedia-supported practices of knowledge acquisition, sharing and exploitation—defines an alternative model of interaction. On the one side there is the risk to humanism and on the other the new technology or cyborg model. In cyber utopia traditional linear “scientific” reductionist analysis does not suffice because action is supposedly over-determined, identity is characterized by multiplicity, and events emanate from complexity. Society is thought of as a process of mutual adjustment, creative indeterminacy and cooperative behavior. It is a positive redefining of the possibilities for emergence. The assumption(s) of logical predictability and socio-economic competition are set adrift.
The new technologists and cyborg theorists attack the dominant thought structures of linear rationality (cognitivism) and utilitarian economism (social Darwinism). Cognitivism has been based on the assumption that a “soup” of complex factors stands in opposition to the ordered systems of rational symbolic thought. The “world” has to be reduced to symbols to be “know-able” and these symbols have to be manipulated (processed) to achieve “technological” predictability. The cognitive “elements” of both the analysis and its application(s) do not resemble phenomenal experience. The “lived-world” does not come in discrete elements, ready for logical sequential treatment. Cognitivists can thus only create and manipulate the mental representations of their “rule-based” activity by destroying the experiential emergence of events. Likewise, (social) Darwinism makes short shrift of emergence, reducing it to a struggle for limited resources. For the neo-Darwinist sociobiologists an overriding competitive logic dominates human existence (e.g. Alexander, 1987; Wilson, 1978, 1992).
The cyborg theoreticians have claimed emergence for themselves—the self in interaction with its social and cultural surroundings achieves “identity” in an unconstrained, true-to-itself manner. ICT, for the new technologists and cyborg utopians, makes a liberated self possible. By means of ICT, cognition can be socially constructive and aesthetically generative. Consciousness is no longer restricted to narrow analytic procedures; the self is not confined to fighting repressive economic interests. Identity can be spontaneous, ever-changing jouissance (pleasure).
The new technologism of cyborg utopianism claims to be countervailing to the dominant combination of cognitivism and (social) Darwinism. It asserts that there is something worth experiencing in organizations: new psychological freedoms, changing relationships between technology and humanity, alternative non-bureaucratic ways of organizing, and an involved style of perception. The new technologies of code (ICT, artificial life, biotechnology) are supposedly producing a postmodern culture wherein identity is freed of the restraints of time and space: cyborgs can be and/or appear as whomever they want to be and can organize themselves into virtual (work/social/intellectual) communities. Identity and self, career and work are becoming increasingly malleable; how one positions oneself in terms of gender, ethnicity and social roles has become ever freer. One can be one's own code, text, narrative. Traditional (repressive) social control breaks down as the social potential of the new technologies makes it possible to choose one's role(s), self and human identity.
The new technologists have their work cut out for them if they want to dislodge the cognitivist/social Darwinist grip on management thought. Nowhere do these two mental structures seem to unite more seamlessly than in the study of organization and management. From Ansoff to Porter, from Prahalad and Hamel to Mintzberg, one finds the same realist and (social) Darwinian assumptions. Qua cognition, it is assumed that managers need to make (continually improved) mental representations of a pre-given but changing world. Managers supposedly manipulate these representations in order to solve problems. Thus the criterion of success for a managerial cognitive system is the production of thriving solutions to business problems. The accepted problem-solving process starts by structuring perception into discrete elements and tasks, continues by applying a rule-based structure of modeling to data and concludes with acting “appropriately” in the business environment.
Cognition amounts to a form of rule-based problem solving, wherein managerial cognition of a contingent external business environment is crucial. Beginning with Herbert Simon, the concept of contingency has been posited to be crucial to business analysis—it is assumed that the success of business action can be equated with the rigor of the analysis, accuracy of the representation(s), appropriateness of the model(s), and the closeness of the fit between cognition and action. Different business theories do not propose different models of cognition but merely compete with one another to analyze more efficiently, to model circumstances more powerfully, and then to produce more profitable prescriptions for action. The bottom line for business theories has been competitive advantage, i.e. a struggle of (economic) existence via natural selection (the market), leading to the survival of the fittest. The “genetic” material of the firm—its competencies, strengths, assumptions and/or configurations— supposedly has to be optimally adapted to the characteristics of the business environment to survive. Organizational “fitness”—the company's adaptedness to its market “niches”—is paramount. Of course, cultural or mimetic conveyance of “fitness” differs from the genetic in that learned capacities are transmitted (to future players). Cognitive models leading to more powerful performance supposedly determine organizational survival. The management gurus sanction a (“Darwinist”) tooth-and-claw struggle of cognitivist activity. Because the competitive social reality is posited to be external to us—it is posited to be an objective (social) given—there is no reason to question “the rules of the game.”
For the cyber critics, ICT threatens the combination self—activity—text (i.e. knowledge) by destabilizing all relationships of identity. In ICT, identity is uncertain—the link between the author, his/her text and the reader is jeopardized. The organization of knowledge breaks down and the spaces and places of knowing are virtualized into “non-space”—anyone can do anything, anywhere, with text. The disintegration of the link between being and knowing jeopardizes the primacy of the spirit. It attacks the assumption that “knowledge”—i.e. human culture embedded in text (written, performed, visualized etc.)—can be conveyed from individual to individual and has the power to appeal to the addressee in a radically “humanizing” manner (Kunneman, 1999).
The new technologists argue the opposite: emergence is well served by the changes in identity catalyzed by ICT. Cyborg theory produces a vision of the individual that defies the primacy of organization(s). The cyborg withstands analytic reductionism, political unfreedom and the abbreviation of the person to standard behaviors. Cyborg epistemology transcends the primacy of economic utilitarianism (i.e. social Darwinism) and the dehumanizing effect of rationalist (positivist) dualism by asserting free will. Truth is not defined in terms of a reductionism wherein the goal of “knowing” is to explain the complex on the basis of underlying “simple” mechanisms. Social existence and cooperation are not understood in terms of the “prisoner's dilemma” and/or “fitness maximization.” The richness of human interaction is not reduced to “reciprocal altruism” or an “I'll scratch your back if you scratch mine” principle. Axelrod's tit-for-tat has no place. Respect, solidarity and “love” are not defined as the efficiency principles of an information-processing machine.
Cyborg utopians assume that the complex interaction of individuals and contexts produces unexpected and dynamic results that will be interpreted differently by different persons. In the feedback loop of ICT— experiencing is followed by writing (text) and then by (re)reading—shifts, diffuseness, multiple interpretations of meaning increase. There is indeterminacy in emergent complexity and freedom in interpretation. Thus emergent “knowing” involves a dynamic interaction between consciousness and world. There is no human world without consciousness and no consciousness without a world. Perception is a human construction; the organs of human sensemaking are “nature.” To see both the human input, in what we call “world,” and the natural input, in what we recognize as human, we have to transcend reductionist rationality.
In their pursuit of these goals, the new technologists embrace the postmodern critique of (post)enlightenment knowing—i.e. the dominant definitions of knowledge frustrate and repress the emergence of (creative) lived-identity. A “devil's bargain” between cognitivism and social Darwinism has been responsible for a blockade on developing rich, experiential insight—i.e. reductionism curbs the proximal and/or phenomenal forms of knowledge. In the social and human sphere, anti-cyborg modernist ideology assumes that beneath the level of surface variability, all humans share the same sort of reasoning (i.e. principles of problem solving) and forms of motivation (i.e. needs and desires). In modernism, the inherent structures of human existence include progress (socialist or individualist), the state (nationalist, “proletarian” and meritocratic), ethics (an “honest day's wages,” “just” prices), evolution (from universal flux to determinist naturalism), and rational science (subjective versus objective, experience versus truth, analysis versus perception). None of these ideas has disappeared—contemporary “new managerialism” sounds very much like nineteenth-century social Darwinism. Unhampered and unfettered private initiative supposedly will, in free competition, achieve declining prices, costs and wages, leading to increased (client) satisfaction (Bonn, 1931 Vol. 5: 333-44). Although some old ideas do seem to be teetering, there is enormous continuity in conceptual structures. Much of the current mental “here and now” still rotates around the evolutionist ideology of social naturalism, and the cognitive ideology of positivist “realism.”
The new technologists counter by arguing that multimedia really does hold out the promise of experiencing parallel, different worlds. Virtualization can make changes: in work, in practices of conceptualization, in manners of communication, and in the range of expressive outlets. As Baudrillard has argued, perception has been attacked by an excess of information (messages) and by the extreme ambiguity of images. “Realism” produces just one form of the imaginary; it is the aestheticized, continually shifting representations of consumerism that now predominate. We appear to be living in a computer-induced hallucination—a “nonplace” beyond the computer screen.
For example, is business the same now that spreadsheets have automated calculation and number work no longer demands mathematical skill? The technology has popularized and democratized the MBA's mathematical way of thinking. In principle, everyone can now think and reckon on the basis of the “bottom line.” The ideology of quantitative, number-driven business has received an enormous boost by going virtual. Any business plan and every strategy can, fairly easily, be numerically tested with a spreadsheet. The invitation to forget “externals” (i.e. anything you cannot quantify in your spreadsheet) is everywhere. Increasingly business operates via prostheses: the technological rendering of “reality” is the only version of the “real” that remains. Management “knows” its personnel via surveillance cameras, gains its “facts” via the computer programs that structure its decision making, and realizes its production via robotic tools.
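By way of illustration only, the “bottom-line” test that the spreadsheet invites can be sketched in a few lines of Python; the figures and category names below are invented. Whatever cannot be quantified has no cell, and therefore no existence, in the calculation.

```python
# A hypothetical business plan reduced to what fits in the cells:
# anything that cannot be quantified (morale, ecology, trust) has no row.
plan = {
    "revenue":        1_200_000,   # projected annual sales
    "cost_of_goods":   -650_000,
    "wages":           -300_000,
    "marketing":        -90_000,
    "it_outsourcing":   -60_000,
}

bottom_line = sum(plan.values())           # the only figure that "counts"
margin = bottom_line / plan["revenue"]     # profitability as a single ratio

print(f"bottom line: {bottom_line:,} (margin {margin:.1%})")
# The spreadsheet verdict: the plan lives or dies by this one number.
print("viable" if margin > 0.10 else "not viable")
```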
In the cyborg scenario, computing merges with biology (life science) to become a form of “creation activity” or “generative action.” Machine/human interfacing becomes dialogic—the dualism between technology and life shrivels away as combined human-machine action takes over. The person becomes a cyborg—perception, analysis and experience are all combined in a virtual or artificial reality that one could label ICT prosthetics. It is no longer easy to say where the technologies aiding thought, vision and involvement stop and the person starts. Man and technology have become dynamically intertwined and adapted to one another. Intelligent materials, environments and utensils have become normal. Technology is manifesting itself as a dynamic element, coevolving with human existence. Experience is imaged and imagined, is concrete and simulacrum. Is there a “real” left outside of the technology, to be compared to the primacy of liberatory “experience” in cyborg theory? The social universe is “postmodern,” in the sense that the “naturalism” and “realism” of the past are no longer accessible to consciousness. Technology has been absorbed into life science—the “object” has more and more taken on the attributes of evolutionary adaptiveness.
Biological forms and logic are everywhere on the increase. Science has become attuned to life principles and experiences. Simple hierarchical order is being replaced by:
... [a] network-body of truly amazing complexity and specificity. The [object of study] … is everywhere and nowhere. Its specificities are indefinite if not infinite, and they arise randomly; yet these extraordinary variations are the critical means of maintaining … coherence. (Haraway, 1991: 218)

The development of contemporary science rejects the metaphors of linear authority, hierarchical order and impermeable boundaries. Life science confronts us with finite situated logic, rooted in partiality and made up of a subtle play of the same and different.
Emergence is crucial to this new logic. Cyberspace is a region where the testing of emergent alternative identity formation is possible—it is a realm of social psychological experimentation. The principles of “identity” are being redefined within a new technology/new identity thesis. The paradigm shift manifest in the new “sunrise” ICT and biotech businesses provides a model of alternative (group) identity formation.
Translated to business practice, the ICT cyborg scenario leads to virtual organization. Virtual organization entails cooperation in the core business processes of two or more agents, for the purpose of developing a product-market combination, without the creation of a (new) fixed (legal or physical) identity. Virtual organization has also been defined in terms of “Net presence,” referring mostly to marketing or selling via e-commerce. Virtual organization creates a geographically spread group of people who with the use of ICT are able to work together just as closely as if they were in one location (all this could take place within one organization as in an organizational network). Virtual organizations are flat, expertise based and flexible. Supposedly, they survive because they add value to members' results and can outperform traditional organizations. What makes the difference is that the virtual organization is people based and networked, rather than company based and bureaucratized.
Virtual organizations may appear to be amorphous and in continuous flux, but to survive they must be tightly nestled in self-regulating networks of relations. It is claimed that the advantages are many: increased scale of operations without tying down capital, access to expertise and new technologies with a minimum of investment, potential diversification without abandoning focus on core competence(s), variable and flexible activity with minimal risk. The self-regulating business network may be new—it may attack traditional principles of hierarchy and corporate order—but it is very (social) Darwinist. The strategy tries to achieve maximum results, with a minimum of waste, risk or danger. The cyborg scenario can shake off Newtonian assumptions, but can it deliver coevolving identity?
A new breed of knowledge workers are making, adapting, using and exploiting cyberspace. They inhabit a world of simulacra and are themselves (most often) cyborgs. They are well-paid, highly skilled consultants and/or (business) experts (normally not CEOs) who make use of virtual/networked technologies to implement their business practices—combining the sciences of code with commerce. Such knowledge workers are cyborgs in the sense that they depend on modems, PCs and laptops; think in “spreadsheets,” “Powerpoint presentations” and computer-assisted analysis (via expert systems, AI, neural nets etc.); and require CAD/CAM, robotics and interactive communications to implement their plans. Furthermore, their phenomenal world is (almost) entirely made up of simulacra: varying from mobile telephones to e-mail, from hypertext to group work, from electronic surveillance to the “nonplace” of airports/chain hotels/hired autos. But their cyborg cosmopolitanism (in all its disorienting uniformity) is financed by clients, who think that the value that the knowledge workers add more than justifies the cost. Knowledge workers supposedly help solve adaptive problems—ones essential to the organization's survival.
The knowledge workers not only work in a universe of simulacra, their jobs often entail creating simulacra for others (Letiche, 1996b). The simulacra they design tend to be thoroughly McDonaldized, splitting tasks up into unintelligent repetitive behaviors with management surveillance ensuring tight control over “quality” and “productivity” (Ritzer, 1993). Designing, implementing and selling McDonaldized systems are creative and innovative actions, however repressive or alienating these systems may be for the people who have to work in them. McDonaldization encodes production and/or service processes, making them predictable, mechanistic and efficient.
The original McDonald's industrialized “eating out”; the principles are now being applied to auto repair, healthcare, schooling, retailing, banking, law, and so on. Knowledge workers are redesigning organizations and business processes in an ever-accelerating process of McDonaldization; they are also exploring new forms of interactivity, setting up virtual networks and investigating the codes of bio- and information technologies. In how it analyzes work activity, McDonaldization is an extreme application of the traditional realist practice of cognition; and in how it furthers the careers of the knowledge workers and limits the careers of those who have to work in the systems it produces, it is a child of social Darwinism. The knowledge worker's creativity is bought at a large cost to others.
Investigators of the self in cyberspace, such as Mark Slouka and Allucquere Stone, have tried to (re)discover the emergent self (Slouka, 1996; Stone, 1995). Their intellectual moves explore the effect that the new experimental forms of communication and information technology are having. What do the technologies of semi-reality and simulacra mean for identity? Simulacra can be understood as prisons but also as possibilities for liberation, i.e. “How is the psychological identity of the knowledge (simulacra) workers influenced by what they do?” Are knowledge workers really different? Are the engineers of modernist rationalism, who design their bridges according to linear pre-structured procedures, any different from simulacra-dependent knowledge workers? Has an identity shift, grounded in the new life sciences and applied by the up-and-coming bio- and information technologies, led to virtual organization? And if so, do we have any reason to be celebratory about it?
The logic of the cyborg began when modern society entered the world of prostheses via medicine, communications and engineering. New layers of technology were imposed between consciousness and emergence. The phenomenal world was less and less self-emerging and increasingly mediated. Medicine brought pacemakers and hearing aids, artificial organs and transplants. The quality and longevity of life were improved by building bits and pieces of machinery into people. Via simulacra, created by X-ray, echoscopy and scanners, machine-produced images of the body have become more “real” to the doctor than anything the naked eye ever sees.
As for the predecessors of ICT, mass communications began with the photograph and telephone. Telephones destroyed the direct physical link between the speaker and hearer:
... the earliest users of telephones were uncomfortable with the idea of a “voice” coming out of a handset, which one held to one's ear. ... the original designers of the home phone constructed it with a deliberately anthropomorphic appearance. The quasi-human look of the instrument was meant to help ease the transition to electronically prostheticized speech by giving early users the sensation they were speaking to the telephone instead of through it. (Haltmann, 1990)
Telephoning demands a particular understanding of proximity and agency: Is there really someone else on the other end of the wire? Are they really who they claim to be? Does it matter that we cannot see if they are near or far? In the world of the 1990s we speak to someone on the telephone without worrying about the electronic virtualness of interaction:
With the advent of electronically prostheticized speech, agency was grounded not by a voice but by an iconic representation of a voice, compressed in bandwidth and volume and distorted by the limitations of the ... transducers, so as to be something more than a signature or a seal on a text, but far less than an embodied physical vocalization. Agency was proximate when the authorizing body could be manifested through technological prosthetics. This technological manifestation in turn implied that the relationship between agency and authorizing body had become more discursive. This process of changing the relationship between agency and authorizing body into a discursive one eventually produced the subjectivity that could fairly unproblematically inhabit the virtual spaces of the nets. (Stone, 1995: 97)
Likewise, the photograph added to a world of movement and interaction an image that is unnaturally still. Photographs are formalized and materialized simulacra. They are not “unimpeachable mechanical witnesses” but falsifications of time. Phenomenal reality does not stand still. The human eye (body) perceives by glancing on and about what it sees, in a process of assembling an image of a moving, alive universe. The photograph negates the eye's motility and reduces the visual to “object” (Virilio, 1994). The “objectified” simulacrum of photography matches the cultural needs of science and technology, eager to banish human perception and subjectivity to the garbage heap of history. Thus, the modernist simulacra imposed a regime of cognitivist perception—a logic based on machine-guided perception that devalued the “living eye.”
There has always been a Christian urge to escape the world and everything in it—cyberspace offers new and attractive opportunities actually to do so. On the Net one can find “our sort of people” without the street's constant reminders of “social questions,” drugs, race, violence and poverty. Who is excluded, i.e. who gets run over on the digital highway? The victims are those persons too poor or unlettered to gain and/or maintain access to the Net, i.e. the PONAs (people of no account) are excluded from the “virtual escape.” A “wired” elite shares information, privileges and opportunities; its members are “empowered” by the Net. Cyberspace is a “free-market utopia”—those who can use it effectively to make money flourish, the rest are made invisible (Slouka, 1996: 119). In essence, cyberspace is: (i) an escape from unsolvable problems; and (ii) a poststructuralist retreat into an exclusive hyperreality.
Why speak of the destruction of real communities in the Balkans when you can inhabit virtual ones? Why bring up the importance of biodiversity and the implications of habitat destruction when you can create your own environment? (Slouka, 1996: 20)
But cyberspace doesn't only offer escape: it's also an alternative route to identity. For example, Mark Slouka, a critic of the digital revolution, described one of his male colleagues as having taken on the Net role of a female avatar and having entered into a very strong emotional/erotic relationship with “another woman.” On the Net, in chat rooms and the like, whoever logs in can participate in the writing of an on-going interactive fiction where one has control of one's own persona but never knows what the other personae are going to do. The power of taking part in this participative writing is, evidently, considerable—participants report losing themselves entirely in their virtual persona and having very intense personal experiences as avatar. Slouka condemns the whole business in his chapter “Springtime for Schizophrenia: the Assault on Identity.” He concludes:
In the not-so-distant future ... our technologies will insist that we forget the primary and the near in favor of the secondary and the remote. As we grow used to digesting ideas, sounds, images—distant and concocted, ... a door closing, heard over the air, a face contorted, seen in a panel of light—these will emerge as the real and the true; and when we bang the door of our own cell or look into another's face, the impression will be of mere artifice. I see a time ... when the solid world becomes make-believe ... when all is reversed and we shall be like the insane, to whom the antics of the sane seem the crazy twistings of a grig. (Slouka, 1996: 138, partly quoting E.B. White)
Slouka's alternative to hyperreality—to turn off the TV once in a while, to take a walk with a friend, to get personally involved in community issues and to play more with one's kids—seems pretty inadequate. He has understood that something is happening to identity, but only seems able to cry “Stop the Net, I want to get off!” In effect, Slouka defends traditional cognitivism and Darwinism. His critique of the Net focuses on representation-out-of-control; traditional self-representation supposedly created stable, trustworthy, accounted-for identity. The self was enmeshed in a web of coordinates that determined how that self was to be (self-)represented, and how it was supposed to enact its role(s). The self (re)affirmed the person in their social role(s), pinning the “I” to a position in the social hierarchy and giving it a niche in the competitive stride. If the self actually abandons its role position, in an act of cyber inventiveness (if any such thing is really possible), it in effect withdraws from the system of representation of the (Darwinist) social order.
Allucquere Stone celebrates the emergent IT-based identity that Slouka dreads. She is a phenomenologist of the Net who is responding to the ideas of emergence and complexity (Lewin, 1992; Kauffman, 1992; Goodwin, 1996). She has tried to redefine identity as emergent, dynamic and complex. The metaphor of the “watchmaker” has been overturned—identity is not a lesson in irrevocable blind laws. Complexity can explore identity with the metaphor of the rhizome. In complexity there are negotiation, interaction and emergence—i.e. “information,” “activity” and “stimuli” are exchanged in complex patterns. The ICT metaphor, derived from the Net, e-commerce and the postmodern cyborg culture, is a precondition for the elements of knowledge, of organism, of consciousness, to emerge in relation to one another.
Stone wrote an ethnography of research into emergence in the Atari Research Lab during the early 1980s. To situate the Lab: Alan Kay, who was director of research, had designed one of the first object-oriented programming languages. He was a pioneer in nonlinear programming. He defined the parameters of action for the elements (the “objects” to be programmed) of a computer system and then let the system get on with it—producing “emergent” results. He, and the bright MIT grads he hired to work in the Lab, thought that research was about innovation, taking risks and doing new things. But senior Atari management were “suits ... solid businessmen, ... people with proven management skills but little imagination. Almost to the man they were deeply skeptical, and in some cases actively afraid” of innovation (Stone, 1995: 129). The suits' idea of a “breakthrough,” or of “research,” was producing a spin-off of an already existing product; something like Batman meets PacMan. For management, the “postadolescent terrors” that programmed the games, and the “technoturks” from the Lab, were clearly out of control. The game programmers worked long hours, produced a lot of “code” per day and flaunted their thoroughly scruffy appearance. They knew that Atari's product was fun. But management was profit driven and fixated on figures; for them Atari produced numbers. Management didn't care that Atari was in the entertainment business and had no special feeling for that business. For them, Atari was just another mass-market company that needed to have a good marketing pitch. What the product actually did was of little concern. Management was out to recruit “safe” project managers from companies like Lockheed and General Dynamics, where defense contracts took time and frequently were designed to be expensive. While the programmers and researchers were committed to the IT product, they came from such different backgrounds that they failed to communicate with one another.
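Stone does not reproduce the Lab's code, and the sketch below is not Kay's; it is only a hedged illustration, in Python, of the idea attributed to him: define the objects' local parameters of action, let the system run, and a global pattern emerges that nobody programmed explicitly (here a Schelling-style neighborhood toy with invented parameters).

```python
import random

# Each "object" has only a local rule; the global clustering is not written anywhere.
SIZE, TOLERANCE, STEPS = 40, 0.5, 2000        # invented parameters of action

cells = [random.choice("AB.") for _ in range(SIZE)]   # two kinds of object plus empty cells
print("before:", "".join(cells))

def unhappy(i):
    """An object's whole rule: look only at its two immediate neighbours."""
    kind = cells[i]
    if kind == ".":
        return False
    neighbours = [cells[(i - 1) % SIZE], cells[(i + 1) % SIZE]]
    occupied = [n for n in neighbours if n != "."]
    return bool(occupied) and sum(n == kind for n in occupied) / len(occupied) < TOLERANCE

for _ in range(STEPS):
    i = random.randrange(SIZE)
    empties = [k for k in range(SIZE) if cells[k] == "."]
    if unhappy(i) and empties:
        j = random.choice(empties)            # the unhappy object simply relocates
        cells[j], cells[i] = cells[i], "."

print("after: ", "".join(cells))              # runs of As and Bs: an emergent, unplanned order
```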
For a while, the enormous profits from PacMan meant that the programmers and the Lab could just “get on with it.” But there was never enough money for the “suits”—their motto was “I want my BMW now.” Thus senior management was alienated from the product, thoroughly opportunist project management had no sympathy for the people who actually programmed and designed the product, and research was alienated from everyone and enclosed in a world of its own. It is no wonder that Atari exploded. Its story illustrates how untenable the science-business-self relationship (the “self” at least of the researchers) actually can be.
How the Lab was closed down and the ideas that flourished there are of interest to Stone. The focus of the Lab's work was on interactivity—i.e. on the mutual and simultaneous activity of participants working towards the same goal. The research goal seems to have been to create an immersive world of interactivity, i.e. one wherein people would enter into dialog with a (computer) system. Such a system was meant to achieve and maintain a believable, sustained sense of presence and interaction.
Computer illusions of interaction have, of course, been achieved through bots (programs) such as Julia. Julia chats, flirts and keeps track of the movements of other players.
Julia's conversation skills rely for the most part on the development of a comprehensive list of inputs and outputs, the effectiveness of the matching patterns, and the use of humor throughout. ... Wired magazine described ... the program as “a hockey-loving ex-librarian with an attitude.” Julia is able to fool some of the people some of the time into thinking she is a human player. ... her sarcastic non-sequiturs provide her with enough apparent personality to be given the benefit of the doubt in an environment where players “make a first assumption that other players are people too.” (Turkle, 1995: 88)
The problem with such programs is that they imitate conversation without having any consciousness of interaction. Searle's Chinese room criticism applies:
... imagine (someone) locked in a room with stacks of index cards containing instructions written in English. He is handed a story written in Chinese. Then, through a slot in the wall, he is passed slips of paper containing questions about the story, also in Chinese. Of course, with no understanding of Chinese, he does not know he has been given a story, nor that the slips of paper contain questions about the story. What he does know is that his index cards give him detailed rules for what to do when he receives slips of paper with Chinese writing on them. The rules tell him such things as when you get the Chinese slip with “the squiggle-squiggle” sign you should hand out the Chinese card with the “squoggle-squoggle” sign. ... locked in the room, [the man] becomes extraordinarily skillful at following these rules, at manipulating the cards and slips in his collection. (Turkle, 1995: 86)
The point of the story is simple: the man does not understand Chinese, he is only shuffling paper. Likewise, Julia's “knowledge” of what she talks about is not like human knowledge. For her the subject might just as well be “slrglz”, a string of letters that activates the “slrglz” topic—i.e. a set of preprogrammed responses that resemble speech. Julia is a fairly believable agent. She can sustain an appearance of interactivity (on the Net in a MUD) for a few minutes. But she is not really “believable”—her “presence” collapses quickly. The Atari Lab wanted to understand what made an “agent” believable. The notion of believability is very vague. Would a bigger and better Julia suffice? To answer Searle, would the computer agent need to simulate consciousness? Or is presence already achieved when a human audience suspends disbelief? Does computer-human interactivity depend on providing the illusion of life— something like what the Disney animators achieve in their cartoons, or with their robots in the amusement parks? Is such a simulacrum a prerequisite to sustained intelligent human/computer interaction?
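The mechanism Turkle describes, a list of inputs and outputs matched by patterns with humorous non-sequiturs as a fallback, can be suggested in a short sketch. The patterns and replies below are invented, not Julia's actual code; the point is Searle's: the program shuffles strings without any consciousness of what they are “about.”

```python
import random
import re

# Invented pattern/response pairs in the spirit of Turkle's description of Julia.
RULES = [
    (re.compile(r"\bhockey\b", re.I),      "Don't get me started on the hockey playoffs."),
    (re.compile(r"\b(hi|hello)\b", re.I),  "Hello yourself."),
    (re.compile(r"\bwho are you\b", re.I), "An ex-librarian with an attitude."),
    (re.compile(r"\bslrglz\b", re.I),      "Ah, slrglz. Fascinating topic."),
]
NON_SEQUITURS = ["Whatever you say.", "I was just thinking the same thing.",
                 "Is that supposed to impress me?"]

def julia(utterance: str) -> str:
    """Reply by pattern matching alone; no understanding is involved anywhere."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return random.choice(NON_SEQUITURS)    # apparent personality, zero comprehension

for line in ["Hi Julia!", "Do you like hockey?", "Tell me about slrglz."]:
    print(">", line)
    print(julia(line))
```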
In the Atari Research Lab, they shunned half-measures. They created a mythic persona, who was eventually appointed acting director of the lab. This persona was called Arthur Fischell, i.e. Arti-ficial (his wife was named Olivia—O-ficial; his children were Benny—Bene-ficial and Super—Super-ficial). The hoax began as a thought experiment: What would it take to make an artificial persona seem real? Arthur Fischell first appeared when the research team (some five persons) working on presence started to talk about him. It was soon clear that a persona who is talked about, but who never actually does anything, is insubstantial. The team concluded that Arthur Fischell would have to do something, and so he started to appear via e-mail. Since much of the lab's business was done on e-mail, and many leading figures were more often absent than present, Arthur Fischell's e-mails seemed realistic enough.
Arthur Fischell started to develop a character of his own—the collective product of the five persons who could log in as Arthur and write text for him. “Agency”, “dramatic interaction” and “presence” were key fields of research at Atari. Thus the Lab researchers were not too surprised when Kay (who was in on the “gag”) made Arthur Fischell pro tem director. Arthur became known around the lab as “suave, intelligent, smooth-spoken ... mature and sexy, slightly rakish in a Victorian way—in fact rather noticeably like the personality of Nick Negroponte” (Stone, 1995: 141). Although Arthur had never been seen, he had a distinctively furnished office and was very active on e-mail, questioning all sorts of projects and offering collegial support. In fact, as the work sphere at Atari worsened and interdepartmental battles raged, Arthur was a voice of reason and moderation. Slowly, he started to receive e-mails from outside the Lab asking for his opinion. In the end he gained a voice (via an Eventide Harmoniser) and even made a video conference appearance (a female scientist was made up to play the role). The Lab group, in fact, succeeded in constructing the illusion of a person.
But Atari was in deep trouble. The “blockbuster games,” which were supposed to rake in even more profits, bombed. The “suits” in merchandising had decided to base their new games on ET and Superman, paying a fortune in royalties. But when the games hit the streets, they were a fiasco. The “product” did not sell. To add to Atari's woes, the game software market had peaked. Rather than running games on “home computers,” small game computers, based on VLSI chips (like Gameboy), were poised to take over. VLSI technology was much more restricted in what it could do than the “home computer” had been. Atari had a VLSI department, but it was run on “bottom-line” and “product” principles alien to the Lab. To make matters worse, the failed introduction of the new games took place parallel to the discovery of a major embezzlement. Atari stock plummeted. The Lab was (more or less) immediately closed down.
Stone's story of Arthur Fischell and of the creation of emergence via artificial presence belongs to an exceptional moment in business when researchers could do whatever they wanted. While the themes of emergence and presence are very important to computer-human interaction, there is no evidence in Stone's report that anything of importance was achieved at the Lab. The researchers seem to have pursued “believability” rather than “artificial life.” The former tries to simulate interactivity (as in Searle's story) and the latter focuses on systems whose elements reproduce, mutate and evolve to form emergent ecosystems. The Atari Lab's research goal seems to have been to produce a system of real, not just virtual, interactivity. But can a system of virtual techno-emergence escape the cognitivist trap—isn't such a system inevitably a complex assemblage of representation(s)?
If the rules for system performance (i.e. the model) are complex enough, the system will fool people (temporarily?) into thinking that emergence is achieved. But if the system's degrees of freedom are prescribed, emergence is (at best) only apparent and is not part of a genuinely open system. But there are more radical options. The concept of “life at the edge of chaos” offers such an alternative vision of emergent interactivity:
... (in the) transition region that separates the domains of chaos and order ... they noticed that ... all the parts of the system are in dynamic communication with all the other parts, so that the potential for information processing in the system is maximal. It is this state of high communication and “emergent” computation that struck Langton and Packard as a condition that provides maximal opportunities for the system to evolve dynamic strategies of survival. ... The conjecture is that this state is defined by an attractor characterized by a maximum dynamic interaction across the system, giving it high computability, to which state the system continuously returns as it explores its changing world. (Goodwin, 1996: 183-4)
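Langton's and Packard's experiments concerned parameterized cellular-automaton rule tables; purely as a hedged illustration of the ordered/chaotic/“edge” distinction the quotation invokes, the sketch below runs three standard elementary one-dimensional automata. Rules 250, 30 and 110 are textbook examples of ordered, chaotic and complex (“edge of chaos”) behavior respectively.

```python
def step(cells, rule):
    """Advance a one-dimensional binary automaton one step under an elementary rule."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=24):
    cells = [0] * width
    cells[width // 2] = 1                      # a single seeded cell
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)
    return "\n".join(rows)

# Rule 250: frozen order; rule 30: noise-like chaos; rule 110: structured,
# interacting patterns, the regime associated with "emergent" computation.
for rule in (250, 30, 110):
    print(f"--- rule {rule} ---")
    print(run(rule))
```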
Emergent interactivity requires enormous flexibility. But in “Julia-like” systems of traditional representation, flexibility can only be achieved by multiplying the stock of pre-programmed responses. Julia-like databases threaten to become so unwieldy that nothing at all can emerge from them. Emergent interactivity is caught in a paradox: if there are too few options there is no possibility of complexity; if there are too many options the system loses functionality in data overload. Interactivity could (perhaps) be achieved by designing an emergent agent able to interface “intelligently” with the user. A computer interface that (convincingly) produces interactive presence could produce a cyber experience of emergence. The user would have to “take things at interface value,” leaving how the computer system actually finds and delivers its information (responses) out of the picture. If emergence is achieved, we are released from the Searle dilemma. But emergence requires more than automatic responses triggered by key words or name indexes. Emergence includes taking initiative, understanding concepts and pursuing ideas. A Knowledge Navigator, to use Apple's PR name for it, would have to be able to do all of these things. Users would talk to an intelligent agent, attuned to their interactive/conversational/knowledge needs, that could undertake emergent action for them.
Demand-side interactivity requires a bottom-up system, with which users can obtain the discussion/information/knowledge that they desire. The user defines the interactive need(s) and indicates (at least initially) in what direction the agent is to respond. In contradistinction, a supply-side system depends on someone else determining what sort of interaction (discussion/information/knowledge) the user is to be treated to. Emergence is much more important to demand-side systems than to supply-side ones. Demand-side interactivity produces emergent action; supply-side systems provide centralized information distribution schemes (with or without several degrees of freedom). Building a supply-side information distribution system requires formulating complicated selection criteria, ones that satisfy the powerbrokers who determine in broad terms what (knowledge/information/data) the “users” are to be provided with.
Obviously the more degrees of freedom—defined as the system's ability to tailor itself to the user—the more complex it is. But a logic of social engineering in which someone else—i.e. the “experts” or “knowledge managers”—decides what the “users” are going to get is inherent to a supply-side system. Supply-side “virtual interactivity” avoids, in its human-computer interaction, the problems of multiple identity, différence and complexity. The “system” follows its logic—emergent dialog is not possible, the “user” is provided with answers according to pre-set criteria.
This sort of “virtuality” is what the cyber libertarians dread, with their creed “knowledge wants to be free.” For them, one surfs the Net to find co-evolution. Emergent interactivity is dialogic, i.e. grounded in a relationship of open interaction. The speaker and the spoken, the knower and the known, co-evolve. Emergent interactivity is participative and cannot survive in a stratified, objectified system of responses. Dialog is conceived to be a process of co-authored text. Emergence with a Knowledge Navigator could help “users” to find communities of discourse close to their interests and to make connections with discussants and between ideas, irrespective of time and space. Emergent interactivity will either create a discursive envelope around its “users” and feed them pre-programmed text for their consumption, or it will create exploration and interaction by linking different discourses and opening unexpected links.
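The architectural difference between the two kinds of system can be put in miniature; the tiny “knowledge base,” topics and function names below are invented. In the supply-side version, selection criteria fixed by someone else decide what every user receives; in the demand-side version, the user's stated need drives the retrieval.

```python
# A toy knowledge base; entries and tags are invented for illustration.
KNOWLEDGE_BASE = [
    {"topic": "emergence",       "text": "Notes on emergent interactivity."},
    {"topic": "mcdonaldization", "text": "Checklist for process standardization."},
    {"topic": "virtual-teams",   "text": "Working together across time zones."},
]

# Supply-side: the "knowledge managers" fix the selection criteria in advance;
# every user receives the same centrally chosen distribution.
APPROVED_TOPICS = {"mcdonaldization"}          # set by the powerbrokers, not by the user

def supply_side_feed():
    return [item["text"] for item in KNOWLEDGE_BASE if item["topic"] in APPROVED_TOPICS]

# Demand-side: the user defines the need and the direction of the search;
# the system tailors its response to that request.
def demand_side_query(user_need: str):
    return [item["text"] for item in KNOWLEDGE_BASE if user_need.lower() in item["topic"]]

print("supply-side:", supply_side_feed())
print("demand-side:", demand_side_query("emergence"))
```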
Despite their efforts to define issues concerning current changes in identity in terms of the new cyber technologies, both Slouka and Stone returned to discussing their encounters with “flesh and blood” people when they tried to substantiate their claims. Both the cyber utopians and dystopians evidently agree that the key processes of meaning creation and its (social) implementation occur in fairly direct social interactions. These interactions can be mediated by various technologies and this can make important differences. But neither author establishes that there has been a change, in kind, in human interaction. Perhaps there is more play or distortion, more divergence or fluidness of identity, more activity at a distance, less responsibility, more freedom or creativity. But gender bending, plays on identity, fantasy in relationships, the putting on and off of masks are all long known to humanity. There seems to have been a shift in context wherein already known patterns and options have been (ever so slightly) rearranged. The new technologies have provided possibilities for people to (re)assert already known patterns of behavior, some of which had been repressed or had become very unusual, or had even nearly been forgotten.
Is emergence merely the newest battlefield for social Darwinism? Is it doomed to strengthen managerial and/or entrepreneurial power, while it further (economically/politically) disenfranchises the electronically illiterate? Experimentation in simulacra could generate new role models and underscore the role of complexity—ICT “users” could imaginize new social possibilities and design novel organizational forms.
In Stone's description of the Atari Lab, cognitivism and social Darwinism were only transcended when a collective project caught the imagination of the research team. Bonds of community are what provide identity and meaning. A society of multiplicity that is deeply culturally fragmented is probably here to stay. Communities or “tribes” can converse via the Net; informal and flexible relationships can be meaningful. Slouka's attempt to retain relationship(s) in traditional social structures looks back to a social order that has in part broken down. Emergence in virtual networks is a viable alternative. Social Darwinism cannot support multiplicity—if there are many “right answers” then the logic of competitive necessity has been broken. If différence is viable, if bricolage is just as acceptable as centrally orchestrated order, then emergence is freed from the iron law of cognitivist Darwinism.
Emergence released from social Darwinism can be defined in terms of interaction. If one can sustain the Other's “gaze”—both in direct contact and, metaphorically speaking, in terms of openness or transparency of action—then one has found a reasonably successful way of being-with-the-Other. Via cognitivism, one can analyze the Other's circumstance and social roles, but one is limited to seeing technologized and utilitarian images. The problem-solving logic of social Darwinism may make for efficient production and economically effective action, but it blocks emergence just as managerial power blocks dialog and mutuality. Cognitivist social Darwinism has proven that it can produce wealth, but without emergence thought and action are threatened by entropy and existential poverty. Even business cannot persist without trust; motivation, leadership, cooperation and innovation depend on it.
Alexander, Richard (1987) The Biology of Moral Systems, New York: De Gruyter.
Alvesson, Mats (1995) Management of Knowledge-Intensive Companies, Berlin: De Gruyter.
Aronowitz, Stanley, Martinsons, Barbara and Menser, Michael (1996) Technoscience and Cyberculture, New York: Routledge.
Baudrillard, Jean (1983) Simulations, New York: Semiotext(e) (original edition Simulacres et simulation, Paris: Galilee, 1981).
Benedikt, Michael (ed.) (1994) Cyberspace, Cambridge: MIT Press.
Bonn, Moritz Julius (1931) “Economic Policy”, The Encyclopedia of the Social Sciences, Vol. 5: 333-44.
Campbell, Colin (1987) The Romantic Ethic and the Spirit of Modern Consumerism, London: Blackwell.
Cohen, Jack and Stewart, Ian (1994) The Collapse of Chaos, New York: Penguin.
Cooper, Robert (1979) “Sketching X”, B in O Research Paper, Lancaster University.
Cooper, Robert and Law, John (1994) “Organisation: Distal and Proximal Views” in S. Bacharach (ed.) The Sociology of Organizations, Greenwich, Conn: JAI.
Cortada, James (ed.) (1998) Rise of the Knowledge Worker, London: Butterworth-Heinemann.
Cosmides, Leda and Tooby, John (1998) “Evolutionary Psychology: A Primer”, WWW: Cogweb.
Crary, Jonathan and Kwinter, Sanford (eds) (1992) Incorporations, New York: Zone.
Dawkins, Richard (1987) The Blind Watchmaker, New York: Norton.
DeLanda, Manuel (1991) War in the Age of Intelligent Machines, New York: Swerve.
Druckrey, Timothy (ed.) (1996) Electronic Culture, New York: Aperture.
Goodwin, Brian (1996) How the Leopard Changed its Spots, New York: Touchstone (first edn 1994, New York: Scribners).
Gray, Chris Hables (ed.) (1995) The Cyborg Handbook, New York: Routledge.
Gusterson, Hugh (1995) “Short Circuit” in Chris Hables Gray (ed.) The Cyborg Handbook, New York: Routledge.
Haltmann, Kenneth (1990) “Reaching out to touch someone?”, Technology in Society, Vol. 12: 333-54.
Haraway, Donna (1991) Simians, Cyborgs, and Women, London: Free Association Books.
Harvey, David (1990) The Condition of Postmodernity, Cambridge, Mass.: Blackwell.
Heylighen, Francis (1997) “The Growth of Structural and Functional Complexity during Evolution”, in F. Heylighen and D. Aerts (eds) The Evolution of Complexity, Dordrecht: Kluwer.
Kauffman, Stuart (1992) The Origins of Order, Oxford: Oxford University Press.
Kunneman, Harry (1999) “Humanistiek als casus”, project outline, University for Humanist Studies Utrecht.
Lefebvre, Eric (1997) The Monk, Leuven: Acco.
Letiche, Hugo (1995) “Fractalization of the Knowledge Worker”, paper presented at Euroconference Lyon.
Letiche, Hugo (1996a) “The Battle of the Rafts”, paper presented on 21 September, Euroconference Oporto, Portugal.
Letiche, Hugo (1996b) “Postmodernism Goes Practical”, in Stephen Linstead, Robert Grafton Small and Paul Jeffcutt (eds) Understanding Management, London: Sage.
Lewin, Roger (1992) Complexity: Life at the Edge of Chaos, New York: Collier Books.
Maffesoli, Michel (1993) La Transfiguration du politique: la tribalisation du monde, Paris: LGF.
Maffesoli, Michel (1995) Au Creux des apparences: pour une ethique de l'esthetique, Paris: LGF.
Ritzer, George (1993) The McDonaldization of Society, Thousand Oaks, Calif.: Pine Forge Press.
Roberts, Ken and Corcoran-Nantes, Yvonne (1995) “The New Training and Industrial Relations”, in Adrian Wilkinson and Hugh Wilmott (eds) Making Quality Critical, London: Routledge.
Schiller, Herbert (1992) “Media, Technology, and the Market: The Interacting Dynamic”, in Gretchen Bender and Timothy Druckrey (eds) Culture on the Brink, Seattle: Bay Press.
Slouka, Mark (1996) War of the Worlds: Cyberspace and the High-Tech Assault on Reality, London: Abacus (first edn 1995, New York: Basic Books).
Stone, Allucquere Rosanne (1995) The War of Desire and Technology at the Close of the Mechanical Age, Cambridge, Mass.: MIT Press.
Turkle, Sherry (1995) Life on the Screen, New York: Simon & Schuster.
Virilio, Paul (1994) The Vision Machine, Bloomington, Ind.: Indiana University Press (first published 1988 as La machine de vision, Paris: Galilee).
Virilio, Paul (1996) Cybermonde, la politique du pire, Paris: Éditions Textuel.
Wilson, E.O. (1978) On Human Nature, Cambridge: Harvard University Press.
Wilson, E.O. (1992) The Diversity of Life, New York: Norton.
Woodward, Kathleen (1995) “From Virtual Cyborgs to Biological Time Bombs: Technocriticism and the Material Body”, in Gretchen Bender and Timothy Druckrey (eds) Culture on the Brink, Seattle: Bay Press.