Kurt Richardson, editor
ISCE, USA
At the recent Academy of Management meeting in Toronto, Emergence sponsored a panel on the topic of why management academics do research. The panel consisted of Bernie Avishai, Max Boisot, Michael Cohen, Kevin Dooley, Alan Kantrow, Michael Lissack, Bill McKelvey, Tom Petzinger, and Jan Rivkin. Selected excerpts follow.
Alan Kantrow We're in the process of watching a train wreck, and I would actually like to prevent it rather than spend the rest of my professional life trying to pick up the pieces, wipe off the blood, see what's broken, and do triage. And the train wreck is very simple. It's the old adage of the drunk searching for the keys he lost in the dark alley but searching under the street lamp, and when asked why, saying, "Because that's where the light is"—it doesn't matter where the keys are, that's where I can see. We measure and study—you guys measure and study—what you know how to measure and study, and it isn't necessarily what I need to know in order to do the stuff I need to do, or to help my clients do what they need to do. And the train wreck is that in the absence of you giving me stuff that I can more readily use, I will go do other things to try to satisfy and fill in the gap, and those things are bad. But I will do them, not because I'm trying to cheat, and not because I'm a bad guy or have some terrible moral or mental lesion, but because I'm trying to do the best I can with inadequate tools. Given the pressures on me, given the realities I face, given the way I'm asked questions, given the context in which I need to look for answers and recommendations, the force pushing me to do something is almost overwhelming—even something that, when I step back from it, I say, "This is actually idiotic."
That is, I will do the heart of what the consulting profession does when it's not doing well. Either I can say I didn't mean to do any harm and I was doing the best I could—but then you're mad at me because I'm not telling you stuff of the sort that you want to hear—or I can not do anything, or I can ask you for help in figuring out how to begin creating, in a way that's testable and extendable through experience, a set of practical experimental guidelines and protocols—I don't mean an r squared, I don't mean that level of analytical reference—that will help me begin the work of constructing a practical discipline for making judgments about whether, and the degree to which, findings from Context A apply to Context B. I don't have that; you have not given it to me; I cannot find it. And so I create it as best I can, and use what I create—I mean I, we, all of us—create it as best we can, and use it as the frame to build and design the management systems, and the protocols, and the practices of the firm. That's not good enough; and if I have to reinvent the theory from scratch in every new context, and if the I who is doing the inventing is a different I each time, and none of that is connected up, the likelihood that any progress will be cumulative and aggregate is not very good. But the notion becomes: Company A did it and won, so why shouldn't Company B do it, too?
Don't be the last kid on your block to have the decoder ring; emotional appeals all feel exactly the same when we use data not for argument but for rhetoric. That's what you consign me to—or rather, that's the reality you've helped back me into—in the absence of stuff that helps me work with you toward a replicable, robust, experimental, real world. What we need now, not tomorrow, what we needed yesterday, is real help in understanding how to denominate, describe, and circumscribe the relevant dimensions of context as boundary conditions for what we know about the various parts of complexity management. Once we can talk about that in a sufficiently meaningful way, we will know what to do with what we know. Absent that, I don't know how to do management.
Kevin Dooley We're obsessed with r squareds. In the physical sciences, when you run a physical experiment, if you don't get an r squared of 90 percent in a kind of closed system, or maybe 75 or 80 percent in a physical experiment in an industrial setting, then you've done something terribly wrong. Now we're happy with r squareds of 10, 15, 30 percent, and we pat ourselves on the back about that; and in fact we spend all of our time talking about that 10 or 20 or 30 percent of the variance.
Well, I have a little theory of life that I think relates to this notion of r squared. If your life is basically pretty good—if you really look at it objectively and your life is only 5 percent bad—then you should only be allowed to complain 5 percent of the time. I mean, if you've got a rotten life, OK, you can complain all the time. But what about organizational studies? Wouldn't it be interesting if in our study we found an r squared of 20 percent, and we spent 20 percent of the paper talking about our nice linear model that explained that, and 80 percent of the paper theorizing about the 80 percent of the variance that we couldn't explain?
Now, as an experiment I've actually tried that, and of course you can imagine what the reviewers said. And I'm not a determinist; I don't believe that we'll ever explain the other 80 percent. I think it's time for us to start focusing on models that explain the reality of organizational systems as extremely high-dimensional systems. It is truly amazing that anything of order comes out of social organizing, and that is very worthy of study—and it's probably worth about 20 percent of our effort. The other 80 percent should be aimed at the noise, because I think once we begin to understand the noise and help managers live within the real noise that exists, then we'll be doing something relevant.
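To make the arithmetic behind that complaint concrete, here is a minimal sketch using purely synthetic data (nothing here comes from any study mentioned on the panel): an ordinary least-squares fit whose r squared comes out around 20 percent, leaving the remaining 80 percent of the variance in the residual "noise" Dooley wants us to take seriously.

```python
# Illustrative only: synthetic data invented for this sketch, not panel data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# The outcome is driven mostly by unmodeled, high-dimensional influences ("noise").
y = 0.5 * x + rng.normal(scale=1.0, size=n)

# Ordinary least-squares fit of y on x.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)      # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"r squared = {r_squared:.2f}")  # roughly 0.2: ~20% of the variance "explained"
```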
Michael Lissack The bulk of the research that we engage in as Academy members has very little relevance to managers, and we all know it; we stick it someplace in the back of our minds, and we go on about our tasks; and the Academy as a whole is in denial. About seven years ago now, Jeffrey Pfeffer stood up at an Academy meeting and announced that what we really should do is all agree that there is one way to do research—we should take one perspective—and if we do that we can get influence, and maybe even power, and he compared us to the economists. What was left out of that discussion was an admission that maybe we're not being relevant, and that we should just forget about relevance and go for influence or power. We're trying to come up with theory and models that lead to prediction, but that whole notion leaves out the fact that we're humans and companies are human systems, and if there's anything that the complexity perspective suggests, it's that usually—and I'm not going to say always—but usually linear models of cause and effect are not what happens in human systems. Academics are aware that there are second- and third-order effects, and the first thing they do is announce they're not going to look at them. There's this nasty notion of feedback loops, and purpose, and intention, and the ability to look ahead, and the ability to look behind, and the fact that we have emotions, and the fact that we have histories—and all of that usually enters into what really happens, and all of that usually defies linear notions of cause and effect.
Now, does that mean we shouldn't be doing the research we're doing? No, that is not what it means. The data collection that we're doing is very valuable; it's what we do with the data that's the problem. If we were to accept a more modest proposal—which is to accept the notion that we're collecting data that could be relevant to managers by giving them background, helping them with sense making, giving them an understanding of the boundary conditions that are out there, giving them an understanding of the kinds of constraints that operate, giving them an understanding of the kinds of histories that exist—that would be relevant. But turning it around and turning it into boxes with arrows, and coming up with supposed variables that mean something—that's usually not relevant.
Max Boisot I take a slightly different perspective on this … that the ratio of noise to information goes up, and therefore we have a serious signal extraction problem, and it isn't going to be our problem, it's going to be the problem of the receiver.
Bill McKelvey If we were in history, that would be good; but if you're in strategy, it's not clear to me what studying the past has to do with what a manager is going to do about finding strategies for the future. We have to understand that that research is only good for analogy and for beliefs that we hold based on past experience—but that's all it's good for. It really doesn't give guidance to a manager about what to do with tomorrow.
So the question then becomes: what do models do to help managers deal with tomorrow? Managers would really like to know that if I do X, I'll get Y.
I think complexity science has the wrong label. What complexity science is really about is the management of order: complexity science is about order creation—you have to remember that—and that's what managers are doing. And the only kind of model that allows you to model order creation is the agent-based model. We're all different, and agent-based models are the only kind of models that allow you to study that directly without throwing the differences away or linearizing them. One of the advantages of a computational model is that you don't have to linearize the variables. Most of economics is based on linearizing the nonlinear variables—basically getting rid of them. The problem is that the behavior changes from one end of the scale to the other, and of course linearizing just throws out the window the fundamental problem, which is: what do you do with the nonlinearity? The computational model—the agent-based model—will actually help us understand order creation, and you can use the model to model emergent nonlinearity and growth. It's the only kind of model that can do that.
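As one hedged illustration of what an agent-based model looks like when the differences between agents are kept rather than averaged away, here is a minimal threshold-adoption sketch (invented for illustration; it is not a model McKelvey cites): each agent has its own threshold, and order—a cascade of adoption—either emerges or fails to emerge depending on how those heterogeneous thresholds interact, with nothing linearized along the way.

```python
# A minimal agent-based sketch of order creation: heterogeneous agents, simple local
# rules, nothing linearized. Invented purely for illustration.
import random

random.seed(1)
N = 200
# Every agent keeps its own adoption threshold, so heterogeneity is retained.
thresholds = [random.random() for _ in range(N)]
adopted = [random.random() < 0.05 for _ in range(N)]  # a few early adopters

for step in range(30):
    fraction = sum(adopted) / N
    # An agent adopts once the overall fraction of adopters reaches its threshold.
    # The aggregate response is nonlinear: it can stall, creep, or cascade.
    for i in range(N):
        if not adopted[i] and fraction >= thresholds[i]:
            adopted[i] = True
    print(f"step {step:2d}: {sum(adopted) / N:.2f} adopted")
    if all(adopted) or sum(adopted) / N == fraction:
        break  # fully adopted, or the process has stalled
```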
Every pilot who gets into an aircraft starts with a simulator. And that's good. Managers have the power of spending millions of dollars, a billion dollars, without those simulators; models can be a simulator … and what managers don't get from us are the kind of models that they can sit down with …
Jan Rivkin Talking about agent-based simulations of complex adaptive systems, which come out of the biological and physical sciences … many of us in management science who see some hope in complexity research do so because of these models. In models of complex adaptive systems, we have the opportunity to model, in rigorous fashion, lots of heterogeneous agents who are interacting in a rich and realistic manner. So that's the hope.
In the early 1990s, pioneering management types realized what was going on in the physical and biological sciences and started to report back to the rest of us. The hallmark of the papers in this genre is the figure with the caption "Reprinted with the permission of (fill in your favorite biologist or physicist)." But there were very important steps in recognizing that those ideas and those simulations based on agents would be applicable to management science and to human organizations. We are waking up, and we're putting in our own models. For instance, when you look at how managers get their view of the world and the way they change it, the effectiveness fundamentally changes. Biological and physical models, once adapted to fit the concerns of management, behave differently. In biological and physical models, a natural thing that happens on fitness landscapes is that organisms gravitate toward local peaks. Dick and I are actually finding that organizations with hierarchies, for example, might not do that.
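A toy sketch of that "local peaks" behavior—an invented rugged-landscape example, not the model Rivkin and his coauthor actually use—shows how a searcher that only accepts one-attribute improvements stops on a local peak that can fall well short of the global one:

```python
# Toy hill-climbing on a rugged landscape; invented for illustration only.
import itertools
import random

random.seed(2)
N = 8  # number of binary policy choices

# Each choice's fitness contribution depends on its own setting and one neighbor's,
# which makes the landscape rugged (many local peaks).
table = {(i, a, b): random.random() for i in range(N) for a in (0, 1) for b in (0, 1)}

def fitness(config):
    return sum(table[(i, config[i], config[(i + 1) % N])] for i in range(N)) / N

config = [random.randint(0, 1) for _ in range(N)]
improved = True
while improved:
    improved = False
    for i in range(N):             # try flipping one choice at a time
        trial = config.copy()
        trial[i] ^= 1
        if fitness(trial) > fitness(config):
            config, improved = trial, True

best = max(fitness(c) for c in itertools.product((0, 1), repeat=N))
print(f"local peak reached: {fitness(config):.3f}   global peak: {best:.3f}")
```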
In moments of fantasy, I actually see simulations that combine good managers with computers, giving researchers a better sense of how real managers behave while also giving managers a visceral sense of what it's like to deal with complex adaptive systems. In a talk on Saturday, Kevin Dooley reminded me that we have access—privileged access—to the very most intriguing, most fascinating complex adaptive systems on the whole planet, and if we don't move ourselves toward being a net exporter of ideas about these systems, I think we've lost an opportunity, and I think these agent-based models play a central role in getting us there.
Tom Petzinger I submit that complexity can rehabilitate the metaphor. We’ve known all along that metaphors are everything in science; Einstein said our theories determine what we measure. You can’t see anything until you’ve got the right metaphor to let you proceed. Or if your taste runs to the classical, Virgil: We make our destiny by our choice of god. A metaphor is the constraint; you don’t get constraints from metaphors, they are the constraints. Managers, practicing managers who are out there leading companies and making decisions, are major consumers of metaphors. Unfortunately, they’re the wrong kinds of metaphors—they are not scientific, they are practical, and they come less from research, which we need more of, than from testosterone.
I wrote a book once about the airline industry, and it was all about the high-powered CEOs in the 1980s who were battling it out during deregulation. One of my leading characters was Robert Crandall of American Airlines, surely one of the most intelligent human beings ever to lead an enterprise; in fact, he was a complexifier struggling to get out. He didn't know it, and I didn't realize it at the time, but the first person I ever heard use the phrase "network effects" was Robert Crandall in 1990, talking about evolving the airline industry toward hubs and spokes, so that 1 plus 1 equals 3 in a hub—an emergent structure. When he went in front of his people, he would dress up in a flak suit with war paint under his eyes and growl in front of his sales groups, and talk about Vince Lombardi; this is how he made sense of his world. This is how he integrated all that research and, more particularly, how he communicated it, and in the end he had an extremely hard fall, as did all those testosterone-confused leaders.
I am going to submit that complexity has a certain healing power, or rehabilitative power, for the metaphor. Take a company called Capital One, probably the biggest credit card issuer you've never heard of: you may not know their name, because they private-label credit cards for other organizations. Here's a case where the science becomes a metaphor in structuring their IT department. They looked at extremely complex systems, used models of social insect colonies developed by SFI, and then adapted that information to the structure of an information services department. And now they talk about themselves as an ant hill, so the science became a metaphor.
You know Ralph Stacey? I heard Stacey give a talk five years ago in San Francisco at a conference of high-powered CEOs who were getting their first exposure to complexity theory—complexity studies. After each of the sessions with these esteemed biologists and physicists talking, these CEOs would say, "Is this science, or is this…?" Stacey got up at the end of all this and said, "You know, the question you should be asking is, is this a metaphor, or is this more science? Who needs more science? What we need are more metaphors."
Michael Cohen I'm a skeptic about whether complexity is something about which there can be a theory. I don't think the existing theory of information is actually very valuable as we dig deeper into the information. There's a lot of effort … at making complexity a measurable property of systems, and I'm skeptical about the progress we've made so far.
The problem on the metaphor side is that complexity becomes the way of talking about almost anything that happens, without enough depth or historical grounding to be able to do that—particularly without learning something, or even being aware … and that kind of superficial and repeated application of complexity as metaphor is virtually certain to lead to a bust. I think in a few years we'll see an article called "Whatever happened to complexity?" And I'll still be doing the same research I've always been doing.
On the other side, we're trying to attack harder problems with weaker tools. A beautiful thing about complex systems research is that it acknowledges the dependency and … dependency theories we all recognize as realistic and relevant; but that reduces the actual number of observations you've got, because you really have to look at the whole history of a system as one observation. You're asking questions about the hardest kinds of dynamics to understand—I think it is reasonable to expect progress to be slower; on the other hand, the spread of metaphor will be fast.
We know from lots and lots of studies of complex systems in various settings that you frequently find exploration/exploitation tradeoffs. There isn't a right answer to that tradeoff question. Nothing that we're going to be able to say is going to tell people in general what to do about it. There are a bunch of particular things we can train people to ask about that tradeoff in their own circumstances. I suggest there's a cluster of questions that we might think about. One cluster of questions has to do with the origins and nature of the variations in the system. Then there are interaction questions: how do the elements of the system interact with each other, what's the structure of doing that, and what are its effects? And finally, selection questions: what are the systematic processes that cause some things, or ways of doing things, to become more frequent, and others less? Other kinds of questions have to do with locating the structured patterns of interaction of those agent-based systems … trying to understand, and avoid, the consequences of the structure of interactions.
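For readers who want the exploration/exploitation tradeoff in miniature, here is a hedged epsilon-greedy bandit sketch (the payoffs are invented, and this is a standard textbook device, not something Cohen proposes): too little exploration locks onto whatever looked good first, too much wastes effort, and no single setting is right in general—which is Cohen's point that the useful contribution is the questions, not a universal answer.

```python
# Minimal epsilon-greedy bandit: invented payoffs, purely for illustration.
import random

random.seed(3)
true_payoffs = [0.3, 0.5, 0.7]  # success probabilities, unknown to the decision maker

def run(epsilon, rounds=2000):
    estimates, counts, total = [0.0] * 3, [0] * 3, 0.0
    for _ in range(rounds):
        if random.random() < epsilon:                       # explore: try at random
            arm = random.randrange(3)
        else:                                               # exploit: best so far
            arm = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average
        total += reward
    return total / rounds

for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps}: average payoff {run(eps):.2f}")
```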
Bernie Avishai The question is an ironic one: does theory benefit from research? Does truth benefit from fact? It’s actually not a bad question if you are living in a time when some of the things we hold dear around the exploration of the truth are shifting under our hands.
We all feel this tremendous state of confusion. What I’ve been hearing so far about complexity theory is that linear positivist research doesn’t help you deal with the confusion you feel. Complex adaptive research doesn’t help you, but it makes you feel better. Look, we are dealing with some very difficult problems in our capacity to research stuff.
I'd say the first problem is a question of precedence. When you're living under the gales of creative destruction, and you're aware all the time that what creates rent—what is value creating in the current market—is a convergence of technologies and business problems that no one had anticipated before, where do you hold steady the paradigmatic assumptions that allow normal research to continue? How do you set down paradigmatic assumptions such that you can continue to do normal research if you cannot hold any of those assumptions steady? How do you talk about markets, how do you talk about preferences, how do you talk about the technologies of substitution, which are today becoming sort of the daily discourse? So that's problem number 1.
The second problem has to do with valuation, because much of what we do in market research is ascribing value and valuation to companies and products. We live in a world where stories are truly more effective than data. When you think about how companies are getting value today, how values are being ascribed, it has much more to do with the stories you can tell about the technologies and the needs, and whether Joe at Microsoft really likes you or doesn't, and much less to do with hard data about what Coca-Cola sells in India. You can't assume valuations are based on the kind of rigorous market research you used to have time for, and around which you had paradigmatic assumptions that you could really hold steady for a while.
We're living in a world that is increasingly one of "expeditionary marketing." You're living more and more in a world of "pray and see what happens"—that is, spray out the product and see what happens—and what you have to do is figure out how to be flexible enough to do a tremendous amount of variation, so that your exploration of a market, your research of a market, is actually coterminous with your release of products for that market. And your research is done on the basis of the validation that comes back as a result of people saying "yes"—we now call this "point and click," but it's basically the same kind of activity.
The venture capital world is increasingly living this way too, so that the valuations that are done on companies are much less meaningful than stories that catch people's attention. In the same way that you do expeditionary marketing inside a company, venture capitalists are all the time doing this kind of expeditionary marketing with companies—let's put out a whole bunch of companies, some of them will hit big, and we'll see what happens. Now, I'm not saying people don't do research in this context, but it does lead to the third point, which is: who do you trust?
If you're about to spin out a company in the spring … I go to Media Matrix, I go to … any number of reputable people telling me what the scope of the knowledge portal market is, and the differences are generally so vast as to make the research useless. And by the way, the investors in our company know it, and know that the real purpose of putting a number in a PowerPoint slide is to make people on Wall Street feel better, because they are also working from their guts. So who do you trust in a world where branded information and branded research are, to be generous, very impressionistic—and moreover, when you're working in the world of the internet, where what comes back from a web search is almost impossible to verify and parse? And what's interesting is that you know that's true for everybody, so it's not as if you can say, "Well, I'm getting this back, but then I can do the filtering." You know everyone else isn't doing the filtering, so what good does it do if you're doing the filtering when you know that they're likely to be making decisions on bad data? What you're doing is running a political campaign, not hard research; what you're doing is trying to stay on message and use the research to help validate the thing you want people to believe. Because you can't trust hard data, and even if you could, you know that they can't.
I'm not making the suggestion that all research is wasteful. I'm truly not—I can't believe how philistine I'm sounding up here. I guess what I am trying to say is that we're all living in a world that's pretty exotic, pretty new, and for which some of the comfortable conditions of life—like tenure and things like that—are not necessarily helping.
I'll just close with this: when I was teaching political philosophy, I once asked the question, “According to Marx, how do you determine the power structure of a society?” and somebody wrote a one-line answer, “You try to take it over.”
I gave him an A. Thank you.