THE BIOLOGIST, THE PHYSICIST AND THE ENGINEER

W.S. McCulloch

(A Lecture to Communication Engineers, 1957)

Professor Wiener has already made the distinction biology requires between the disparate sciences on which it rests. Strictly physical sciences are necessary for an understanding of the physics and chemistry of living systems in the same sense in which they serve engineering. They are as insufficient for the biologist as for the engineer. Both must distinguish work from energy and signal from noise. Both seek purpose incorporated in mechanism by inverse feedback. Both are concerned with securing reliable output from complex digital computers. The engineer's machines are built to his own specification. He knows what they are to do and how they are to do it. The biologist has to find out these things for himself. Nevertheless the engineer has troubles. Despite his knowledge of the theories of entropy, of information, of games, and of multiple closed-loop servosystems, he finds it hard to insure a usable output from his most complex digital computers. Then he turns to his puzzled friend, the biologist, and asks: How does nature do it?

The answer is neither obvious nor trivial. If you will be patient with me I shall attempt it, beginning with the simplest of natural objects and proceeding to the most complex. You will then see that what it does, what it is, and how it comes to be are inseparably related in the question itself: How does nature stabilize a usable output of complex computers?

The ultimate disparate particles of physics are defined by their trajectories in electric, magnetic and inertial fields. Particles alike except that one is the mirror image of the other annihilate each other on meeting. This reduces the number of kinds of particles that endure in our world. The strong forces that these compatible particles exert upon each other determine configurations, some of which are sufficiently stable for the chemist to call them atoms. Atoms again differ discretely in kind. Their complex but weaker fields of force permit only certain configurations of atoms, some of which persist long enough to be identified as molecules. The still weaker and still more complicated fields of force of some molecules suffice to promote the formation of more molecules of the same kind. One class of these autocatalytic reactions is Life. Its reproductive molecules are extremely large and correspondingly complex, but its bonds are not strong enough to insure a long mean half-life, and its fields are too weak to compel perfect reproduction on all occasions. A few times in a hundred thousand the product is discretely different. We call it a mutant. When, in its environment, it is a stronger catalyst than the original molecule, it replaces the original. Such is evolution.
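
This replication-mutation-selection loop is easy to caricature in code. The sketch below, in Python, uses invented numbers: a population of a thousand, and a mutation rate exaggerated well beyond the few per hundred thousand of the text, so that selection is visible in a short run. It illustrates the logic of the argument, not any biological model.

    import random

    POP_SIZE = 1000
    MUTATION_RATE = 1e-3     # exaggerated; the text says a few per 10^5
    GENERATIONS = 500

    # Each individual is just a catalytic strength; all start identical.
    population = [1.0] * POP_SIZE

    for _ in range(GENERATIONS):
        # Reproduction in proportion to catalytic strength: selection.
        population = random.choices(population, weights=population, k=POP_SIZE)
        # Reproduction is imperfect: rare, discrete copying errors.
        population = [
            s * random.choice([0.9, 1.1]) if random.random() < MUTATION_RATE else s
            for s in population
        ]

    print("mean catalytic strength:", sum(population) / POP_SIZE)

Run repeatedly, the mean strength drifts upward: mutants that catalyze better replace the original; such is evolution.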

Step by step, matching changes in molecular structure to dissimilar environments, there have evolved some hundreds of thousands of dissimilar self-reproducing gigantic molecules. To specify a particular kind, therefore, requires a hundred thousand yes-or-no decisions, or binary digits (we say 10^5 bits). This much information has to be transmitted from the parent molecule to its offspring with only three errors in a hundred thousand.

To specify a babe at birth takes only some 10^7 bits. This is carried in man's genes, which are patterns of fields of these still larger molecules. More mistakes occur, yet the product is often viable, a certified suckling. Such is the hard core of our inheritance: 10^7 bits of information computed and recomputed by molecules with a mean half-life of a few hours.

How has nature stabilized this biological computer to insure a usable output? By using discrete ultimate disparate particles to make disparate atoms, by using stable disparate atoms to make discretely dissimilar molecules. Time and chance remove the unstable. The steps are discrete. Intermediate forms or states do not exist or are so brief as to remain negligible. This we imitate in our digital computers. It is the chief source of their stability, the reason we can combine them indefinitely. Nature has her own metric. She has only to count disparate things. The parts fit and stick or the whole does not endure. It is the same in reproduction. All deviants that do not produce a usable output are promptly annihilated. What is left, what endures, what we find living are only those discretely different discrete processes that do work, and only in those places where they do work. When something new works better it supplants them. So much for our molecular computers: digital devices selected for stability of output matched to their jobs. Obviously it is an oversimplification.

Molecules determining heredity no longer reproduce except in special environments called cells. Within them the reproductive reaction determines the production of other catalysts called enzymes. These, their substrates, and their products constitute a well-regulated factory. The visible parts of cells, their organelles, consist of ordered molecular layers, liquid crystals, polymers, and gels, some of which are of remarkably fixed form and size and even position, while others grow, move, stretch, and divide in complicated patterns. Each of these factories is dynamically stable and thus potentially immortal so long as it receives proper stuff and energy. Yet any kind of cell would have been annihilated if, on reaching sufficient size, it did not divide into viable cells some of which reproduced again.

We know comparatively little of the negative feedbacks at these chemical and colloidal levels that determine the dynamic stability of cells. Such things as the ratio of surface to volume determining concentrations of substrates, and the accumulation of end products inhibiting reactions, are fairly clear. They resemble the stabilizing of amplifiers by cathode coupling or other degenerative feedback, or that of well-regulated power packs.

Cells are irritable; that is, they make a response which is not proportional to the stimulus. Up to some threshold of applied chemical, electrical, or mechanical stress they remain in a stable state, but beyond threshold they suddenly act differently, even though they later return to their stable state. In this they resemble the nonlinear circuit elements we employ in building digital computers.

When cells divide into irrevocably dissimilar cells but remain together to their mutual advantage, they produce metazoa. There is a division of labor so that the products of each promote the other. With increase of size there develop systems of transportation of mechanical stresses, of heat, and of substances. Finally a nervous system arises. Its sole job is the transport of information. It does no mechanical or chemical work. Its highly specialized cells, called neurons, are kept in a closely controlled environment and in us may live as long as we do. When one dies it cannot be replaced and, since no two have just the same connections, no other does its job.

The propagated, discrete responses of these neurons are excited by sense organs or by impulses of other neurons. Their arrangement in space and time constitutes the signals through the system, whose output to muscles and glands controls their reactions. When one speaks of biological computers he is usually thinking of this nervous system, most developed in man. It contains about a tenth of all his cells and consumes nearly one-seventh of all his power when at rest. Let us look at its components, its subassemblies, and its general organization.

A neuron has fronds, body, and a long, thin taproot, or axon, which ends via rootlets on other neurons or in muscles and glands. The surface of a neuron is a membrane so thin that its capacity is more than a microfarad per square centimeter. Inside is a conductor with a specific resistance greater than that of sea water. Metabolism keeps its outside seven hundredths of a volt positive to its inside. The resistance of its membrane is high until the voltage across it falls to about one-third its resting value, when it suddenly drops to a small fraction of the resting resistance. Then current flows in freely, reducing the voltage of the adjacent membrane, which in turn shorts out, allowing the impulse to be propagated as a smoke ring of current along the axon. The velocity of propagation is fixed by the distributed resistance, capacity, and battery. Thus a neuron and its axon is a distributed digital relay, the size of whose impulse is determined locally, quite independently of the strength of the stimulus that evoked it. The factor of safety for continued propagation along a uniform cylinder is about ten, but at branch points it is far lower, and the continuation of the impulse depends on other impulses in neighboring fibers or cells. Thus branch points permit gating of impulses by impulses. Convergence of impulses on cell bodies is generally necessary for adequate excitation to start an impulse in them, and impulses passing the body to end in the branches generally raise its threshold. This is a second mode of gating of impulses by impulses. Either would be sufficient to insure that neurons, as coincidence detectors, are suitable components for our logical, or digital, circuit actions. Nets of these neurons constitute the subassemblies of the nervous system.
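
Since the all-or-none neuron with threshold and inhibition is exactly the formal neuron of logic, a few lines of Python suffice to show both the coincidence detection and the gating just described. The particular weights and thresholds below are illustrative choices, not measured quantities.

    def neuron(inputs, weights, threshold):
        """All-or-none unit: fires (1) iff weighted excitation reaches threshold."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    # Coincidence detection: two convergent impulses are needed to excite.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron((a, b), (1, 1), threshold=2))   # logical AND

    # Gating of impulses by impulses: an impulse on an inhibitory fiber
    # (negative weight) blocks an otherwise adequate excitation.
    print(neuron((1, 0), (2, -2), threshold=2))   # excitation alone: fires (1)
    print(neuron((1, 1), (2, -2), threshold=2))   # inhibited: silent (0)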

Man starts life as a string of subassemblies called segments. In lower segmented forms of life each segment is virtually autonomous. Its own inputs inform it of the length and tension of its muscles, the position and motion of its joints, and the pressures, temperature, and so forth, of its surface. Its central computer sends forth the orders for its muscles and glands. These closed loops, or reflex arcs, are inverse feedbacks, and the whole segment is a multiple closed-loop servosystem. Its gains and phase relations prevent cramping and schizogenic oscillations. It needs no information from adjacent segments that it cannot pick up from the mechanical effects they produce in its own segment, which it detects by its own receptors. In higher forms this is no longer possible, for several segments cooperate in control of a single limb. Signals have to pass to adjacent segments by nervous channels. This sort of organization is like that of a naval fleet from the days of Greece through the battle of Jutland. Each ship of the line is a self-sufficient segment which can just signal to the next in line. Now it is outmoded, yet it is surprising to see a dog whose spinal cord has been divided in the small of his back. The hind legs sit, stand, walk, run, hop, and gallop in harmony with the forelegs although there is no nervous connection. This bespeaks the extent to which those subassemblies are stabilized independently to insure a usable output. They are servosystems whose central circuit actions are of the logical kind. To match the dynamic variables of its own world, each segment codes as repetition rate the logarithm of intensity, or its first derivative, at each receptor, and by use of parallel channels it enjoys the stability of averages of impulses per second in modulating lengths and tensions of muscles. Both tricks depend on redundancy: one of code, the other of channel. Its other trick is anatomical. By keeping axons essentially parallel, it maps its peripheral organs onto its arrays of central relays so that neighborhood is projected to neighborhood with appreciable overlap. This preserves the topological relations of the input through the calculation. If mistakes occur in local connections or local thresholds, or scattered cells die, the errors are of little importance. In this it resembles an analog device, like a slide rule, where the error is in the last place, rather than a digital device, like an adding machine, wherein any digit may be affected.
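
The two redundancies, of code and of channel, can be made concrete in a small Python sketch. The gain, noise level, and channel counts below are assumptions chosen only to exhibit the averaging; nothing physiological is implied by the numbers.

    import math
    import random

    GAIN = 10.0    # assumed impulses/second per log-unit of intensity

    def channel_rate(intensity, noise=2.0):
        """One noisy fiber codes the logarithm of intensity as a repetition rate."""
        return GAIN * math.log(intensity) + random.gauss(0.0, noise)

    def decode(intensity, n_channels):
        """Average many parallel fibers, then invert the assumed log code."""
        mean = sum(channel_rate(intensity) for _ in range(n_channels)) / n_channels
        return math.exp(mean / GAIN)

    random.seed(1)
    for n in (1, 10, 100):
        estimates = [round(decode(100.0, n), 1) for _ in range(5)]
        print(n, "parallel channels:", estimates)
    # The scatter of the estimate shrinks roughly as 1/sqrt(n): redundancy
    # of channel buys stability of the average.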

Let us return to the organization of the fleet for battle, where the first in line are first to sight the enemy. The front segments of animals develop distance receptors: nose, tongue, eye, and ear. To pull the fleet together promptly they must inform all segments. This the earthworm does by giant axons formed of short, thick, butt-jointed axons. Our ancestors did it differently. They developed paths from all distance receptors and from all segments to a net in the brain stem, and from it to all parts of the central nervous system. This central net, called the reticular formation, is organized like a modern battle fleet. Its large cells each receive information from many or all distance receptors and segments and relay information to them. In the reticular formation each cell is like a ship in which information from many ships pours into its communication center. Whatever ship is in possession of the most crucial information can commit the fleet to action. Yet no one ship is essential. When it sinks, or another has more crucial information, the command shifts, but the fleet remains coordinated with a usable output. Let us call this kind of stability redundancy of potential command. It is not enjoyed by any man-made computer except the fleet. This reticular formation controls vigilance and attention, programs automatically associated movements, and commands the structures that bring to rest at the proper place whatever part of the body is put in motion. It sets levels of threshold of cells in the spinal cord and so stabilizes the total complex of multiple closed-loop servosystems which, though singly stable, might jointly cramp or oscillate.
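
Redundancy of potential command reduces, in caricature, to a rule a few lines long: whichever unit holds the most urgent information commands the whole, and no unit is essential. The ships, urgencies, and orders in this Python sketch are invented purely for illustration.

    def fleet_command(ships):
        """Each live ship reports (urgency, order); the best-informed commits all."""
        live = [s for s in ships if s is not None]   # sunk ships drop out
        urgency, order = max(live)                   # command shifts with information
        return order

    ships = [(0.2, "hold course"), (0.9, "turn to engage"), (0.5, "close up")]
    print(fleet_command(ships))    # -> turn to engage

    ships[1] = None                # the best-informed ship sinks ...
    print(fleet_command(ships))    # -> close up; the fleet stays coordinated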

Finally, it seems to be essential for another kind of stability, closely related to, possibly responsible for, conditioning and learning. Ashby was the first to describe this in his book Design for a Brain, where he calls it ultrastability. A servosystem for flying a plane straight and level will crash it promptly if the wires to one aileron are crossed so as to produce the wrong motion. Similarly, if we cross the nerves to the flexors and extensors of a leg, the reflexes work the wrong way. The reticular system takes care of this by switching its effective connections, by changing thresholds of relays, or by making new anatomical connections until it finds a stable configuration that produces a useful output. Though the reflexes are reversed, the animal learns to use the limb normally.
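
Ashby's mechanism is easily sketched: let a loop run with a possibly miswired gain, and whenever the essential variable escapes its bounds, reswitch the gain at random until the loop is stable. The bounds, gains, and step counts in this Python sketch are arbitrary stand-ins, not parameters from Ashby's homeostat.

    import random

    random.seed(2)
    gain = 0.5     # start miswired, like the crossed aileron: runaway feedback
    x = 1.0        # the essential variable, e.g. roll angle

    for step in range(100):
        x += gain * x                    # the servo loop acts on the variable
        if abs(x) > 10.0:                # essential variable out of bounds:
            gain = random.choice([-0.9, -0.5, 0.5, 0.9])   # blind reswitching
            x = 1.0                      # a fresh episode, for clarity
            print(f"step {step}: unstable, switched gain to {gain}")

    print("final gain:", gain, "(stable)" if gain < 0 else "(still unstable)")

Once a negative gain is hit upon, the variable stays in bounds and the switching stops: the configuration that produces a usable output persists because it alone provokes no further change.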

Note how closely ultrastability resembles the adaptive process of evolution. To date it has scarcely been used in machines to play chess, to fly airplanes, or to control industrial processes.

The search for other means of securing stable performance in complicated computers continues. Shannon found it in hammock nets of switches that close a path through them, but these are not like neurons. Von Neumann, in his Toward a Probabilistic Logic, using a majority organ as output to chains of neurons in parallel, found that, to get a practically perfect performance, the neurons had to be far better than we can expect of those in our brains. He kept prodding me to find another kind, and after years of search I now have it. The circuits enjoying what I call logical stability are composed of three logical units, or neurons, instead of one. They receive all-or-none signals from two sources on each of two neurons, each of which signals to the third. Each neuron computes some logical function of its input; thus the output is any desired logical function of the input to the net. This is a redundancy of neurons, for each such function can be computed by a single neuron instead of a triple. The advantage of a proper triple for a prescribed function is that it can preserve the same output for the same input under a common change of threshold sufficient to alter the logical function computed by each component neuron. At least 5 per cent of all triples that compute significant functions enjoy this stability, which is of importance in biological computers like brains, where general shifts of threshold must occur. So far I have investigated logical functions of only two arguments, that is, neurons of two inputs. I hope that Walter Pitts will generalize the theory for those of every finite number of arguments (or inputs). Logical stability is too new a discovery for us to say more about it. Dr. Lettvin and I will certainly seek it experimentally in the nervous system.
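
What such a triple looks like can be found by brute force. The Python sketch below adopts assumptions the lecture leaves open: two-input formal neurons with small integer weights and thresholds, firing when the weighted sum reaches threshold, and a common threshold shift of +1. It searches for triples whose composite truth table survives a shift that alters the table of every component.

    from itertools import product

    def table(w1, w2, t):
        """Truth table of one two-input neuron over the four input pairs."""
        return tuple(int(w1 * a + w2 * b >= t) for a, b in product((0, 1), repeat=2))

    def composite(A, B, C):
        """Truth table of the triple: C fed by A and B, both fed by the inputs."""
        return tuple(C[2 * A[i] + B[i]] for i in range(4))

    # Keep only neurons whose function the threshold shift actually alters.
    span = range(-2, 3)
    cand = [((w1, w2, t), table(w1, w2, t), table(w1, w2, t + 1))
            for w1 in span for w2 in span for t in span
            if table(w1, w2, t) != table(w1, w2, t + 1)]

    shown = 0
    for (a, A0, A1), (b, B0, B1), (c, C0, C1) in product(cand, repeat=3):
        before = composite(A0, B0, C0)
        if before in ((0, 0, 0, 0), (1, 1, 1, 1)):
            continue                     # skip constant, "insignificant" functions
        if composite(A1, B1, C1) == before:
            print("stable triple", a, b, c, "computes", before)
            shown += 1
            if shown == 3:
                break
    print("found at least", shown, "stable triples")

One triple in this range, checked by hand: A = (1, 1, threshold 1) goes from OR to AND, B = (-1, -1, threshold -1) from NAND to NOR, and C = (1, -2, threshold 0) from not-b to a-and-not-b, while the composite remains AND throughout the shift.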

There is almost certainly another kind of stability to be sought in game theory, founded by von Neumann and now pursued by Dr. Schutzenberger in Prof. Wiesner's Laboratory at M.I.T. Dr. Schutzenberger, Dr. Lettvin, and I are all psychiatrists. We would like to assure you that by no means do all biological computers have a stable or useful output. Professionally, we are concerned with those that do not. Hence we are happy to be in the Research Laboratory of Electronics. We belong among communication engineers. In fact, that is why I am here today.

Let us now return for a look at biological functions that give us a new digital stability. Many natural actions are organized by the nervous system so that they occur in toto or not at all. Each such organization has a threshold: swallowing, coughing, sneezing, yawning, defecating, urinating, and sexual orgasm are such acts. Each is triggered by its adequate stimulus and enjoys the stability this entails.

At this point we must pause for an antique distinction. William of Occam, who would not let us multiply our entities beyond necessity, insisted that man thinks in two kinds of terms: natural terms, which he shares with beasts, and conventional terms, enjoyed by man alone. What we have said so far applies to thinking in any terms by man and beast. But language is a convention. Its elements, phonemes and morphemes, are carried by redundant signals, and each has its discrete or digital characteristics. It may be much distorted without losing its identity. The same goes for words and for sentences and for propositions. Moreover, there is a redundancy in their successions, witnessed by Shannon's study of transitional probabilities. Yet no increase in the number of items in series so related by maximizing the transitional probabilities tends to produce sentences from strings of words. A sentence's compelling unity is its own, like that of a cough or a sneeze, and it has a correspondingly high probability of being a usable output. Yet it is only logic and arithmetic that meet David Hume's requirement of one-to-one correspondence to determine equality. Hence it is only in these terms that we may argue correctly through indefinitely many steps. This brings us to man-made digital computers as an outcome of the stable, the usable, and the countable items of language, man's conventional terms. One has only to put pebbles in pots to calculate.
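
Shannon's point, and the limit of it, can be shown with a toy first-order word chain in Python. The corpus below is invented and tiny, which only makes the limit more visible: the chain always follows the observed transitional probabilities, yet that alone confers no sentential unity.

    import random
    from collections import defaultdict

    corpus = ("the engineer asks how nature does it "
              "the biologist asks how the engineer builds it").split()

    transitions = defaultdict(list)
    for w, nxt in zip(corpus, corpus[1:]):
        transitions[w].append(nxt)       # empirical word-succession table

    random.seed(3)
    word, out = "the", ["the"]
    for _ in range(10):
        word = random.choice(transitions[word])   # follow the statistics
        out.append(word)
    print(" ".join(out))
    # Locally probable word successions, yet nothing confers on the whole
    # string the unity of a sentence.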

These computing machines are merely substitutes for putting pebbles into pots. If man had not invented a radix and its positional notation, by which a pebble in one pot is worth 10 or 2 in the next, his errors, and those in his machines, would have tended to be in the least significant place. In handling this kind of information man can process only about 25 bits per second. His machines can do a million times as much, and with fewer errors. They do nothing that he could not do, but faster and better. Thus their action is stabilized for a highly usable output. At this, their simple game, they beat their inventor, because their task is simpler than that of the biological computer that produced them. If you want to see how much more stable and usable their output is, you have only to try to multiply two six-digit numbers mentally. Surely man has not evolved any decent arithmetical organ within his head. Presumably he never will.
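
The pebbles-in-pots picture is worth making literal. A minimal Python sketch of radix-2 positional notation follows; it also shows why, once position carries value, a single misplaced pebble need no longer be an error in the least significant place.

    def to_pots(n, radix=2):
        """Represent n as pebble counts, most significant pot first."""
        pots = []
        while n:
            pots.append(n % radix)
            n //= radix
        return pots[::-1] or [0]

    def from_pots(pots, radix=2):
        value = 0
        for pebbles in pots:
            value = value * radix + pebbles   # each pot is worth radix of the next
        return value

    pots = to_pots(37)
    print(pots, "->", from_pots(pots))    # [1, 0, 0, 1, 0, 1] -> 37

    pots[0] = 0                           # one misplaced pebble, highest pot
    print(pots, "->", from_pots(pots))    # now 5: the error is worth 32, not 1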

In closing let me rehearse the sources of stability in biological systems that insure a usable output.

From the ultimate particles of physics, through disparate atoms and disparate molecules, through organelles and cells, through cell assemblies in segments and in those for natural acts, through language, logic and arithmetic, even in his digital computing machine, again and again we see the stability insured by thresholds for discrete processes.

These have been selected from discrete mutants by evolution so that only those that work survive.

Many levels are stabilized by negative feedback, including the inverse feedback to many or all components of multiple closed-loop servosystems.

Every part of the nervous system has redundancy of information, redundancy of coding, redundancy of parallel channels.

These channels are so organized as to preserve the topology of the input within the computer by projecting neighborhood into neighborhood, thus preserving a usable output under perturbation of signals, perturbation of threshold, local perturbation of connection and even scattered loss of channels. There is no positional notation of a radix, and errors remain in the least significant place.

The quasi-autonomous stable subassemblies are linked through the core of the brain like a modern battle fleet; that is, with redundancy of potential command.

Gross failures and gross misconnections of the parts initiate internal switching in this central net until a usable output is found and persists. This is Ashby's ultrastability, which resembles evolution and may account for the learning of skilled arts.

There exists the possibility of logical stability, which is a retention of the same output for the same input despite a general shift of threshold sufficient to alter the circuit action of every component.

All these devices for securing a stable, usable output underlie thinking in natural and conventional terms by biological computers.

This is how nature has done it.
