RELIABLE SYSTEMS USING UNRELIABLE UNITS1,2,3

W.S. McCulloch

Galileo initiated modern physics in terms of hypothetical interactions of postulated entities. These were so constructed as to explain the causal relations of events. They were related to the mind of the beholder only by appearances. Descartes proposed to treat of man similarly, that is, as an automaton governed wholly by the laws of physics. I would do likewise but with better physics and no purely descriptive terms like the sodium pump to hide ignorance of mechanisms. I shall prosecute this with Herman Berendsen,(1) who worked out the structure of water in tendon.

But, long before the physical picture is complete, we need a theory of how brains treat messages. Biology had only one theory which, in its postulational structure, resembles Galilean physics, namely, Mendelian genetics. It deals with the way messages are transmitted and combined through successive generations. It has served well although we do not yet know how those messages are coded in deoxyribonucleic acid.

Forty years ago, inspired by Morgan's lectures on the fruit fly, I tried to formulate the functional activity of the nervous system in terms of rank upon rank of neurons passing on and combining messages from receptors toward effectors. The hereditary nexus of genetic relations has the right anastomotic structure to model the combination of signals to permit any output to depend upon any and all inputs. My postulated neurons were relays which, on receipt of all-or-none excitatory signals in excess of a threshold, emitted one all-or-none impulse. No inhibitory impulses had yet been demonstrated. In 1927 when we both were interns on Foster Kennedy's service at Bellevue Hospital, Dr. S. Bernard Wortis used to laugh at me for trying to write an equation for the brain. It is on this score that we can now report considerable success.

In 1927 the difficulty was that my theory could not envisage circular activities. They had been postulated by Descartes and demonstrated by Magendie. They explain reflexes, homeostasis and purposive behavior. It was to account for these negative, or inverse, feedbacks that Sherrington, notably in reciprocal innervation, introduced his C. I. S. as a central inhibitory state or substance.

In 1930 Kubie(2) postulated and, in 1938, Lorente de Nó(3) demonstrated regenerative loops which account for that transitory memory of the specious present which makes thinking possible.

Twenty years ago I met Walter Pitts who could handle circularities in terms of modular mathematics, and in 1943 we published A Logical Calculus of the Ideas Immanent in Nervous Activity.(4) The postulated neurons were the same except for inhibitions, considered absolute, but the nets included circles. Briefly, that paper proved that a net of such neurons could be designed to do with information anything that could be specified in a way that was finite and unambiguous. It could, given some surrogate for memory, such as a tape, compute any computable number. It could deduce any theorems that followed from a finite number of premises. It could detect any figure in time and space given in its input. Von Neumann used that text in teaching the general theory of computing machines. Kleene(5) produced a better text to which we would refer you.

The logic is that of the lower predicate calculus with quantification, but with one limitation for which it has been called threshold logic.(6) In it a neuron can compute all but 2 of the 16 Boolean functions of two inputs A, B. The missing ones are 'A if and only if B' and 'A or else B'. The latter omission is especially startling in view of Lloyd's(7) direct inhibition and the findings of Galambos and his co-workers(8) in the superior olive. The former leaves no time, the latter no space for an intervening neuron.
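The limitation can be made concrete with a few lines of Python. The sketch below brute-forces small integer weights and a threshold for a single two-input neuron that fires when w1*A + w2*B >= t, and checks which of the 16 Boolean functions of two inputs such a neuron can realize; the weight grid and the encoding of truth tables are my own illustrative choices, not anything taken from the original argument.

```python
from itertools import product

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def threshold_realizable(truth_table):
    """True if some small integer weights w1, w2 and threshold t make a neuron
    that fires exactly when w1*a + w2*b >= t reproduce the given truth table."""
    grid = range(-2, 3)
    for w1, w2, t in product(grid, repeat=3):
        if all((w1 * a + w2 * b >= t) == bool(out)
               for (a, b), out in zip(INPUTS, truth_table)):
            return True
    return False

realizable = 0
for table in product((0, 1), repeat=4):          # the 16 Boolean functions of A, B
    if threshold_realizable(table):
        realizable += 1
    else:
        print("not threshold-computable:", table)   # the two missing functions
print(realizable, "of 16 functions are threshold-computable")
```

Running it reports exactly two functions as missing, the truth tables of 'A if and only if B' and 'A or else B', and 14 of 16 as threshold-computable.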

Clearly, besides the inhibition that is produced by hyperpolarizing the axonal end of a cell, and so raising its threshold, which we demonstrated to this society in our paper(9) on our electrical hypothesis, we had to look for inhibition by interactions of afferents. We have shown it as distally as the primary bifurcation of the afferent peripheral axons.(10) Thus, real neurons are not restricted to threshold logic and can be imagined to compute any of the 2^(2^N) Boolean logical functions of N inputs.

Fifteen years ago Pitts and I(11) began our article on How We Know Universals with the caveat that it was wise to construct nervous nets so that their principal functions were little perturbed by small perturbations in excitation, in threshold, and in detail of local synapsis. Von Neumann(12) was then working on his own computing machine and took our caveat seriously. Ten years ago this led to his famous paper Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components. He was unhappy with his findings, for, to obtain acceptable reliability, he had to suppose neurons better than he could expect in real brains, and he needed far too many of them in every rank. It was then that he asked me to tackle the question, which I did single-handed for 5 years with only occasional corrections by Walter Pitts.

It slowly became clear that von Neumann had been unfortunate in 3 ways. First, by using one and only one logical function he had lost the intrinsic redundancy of logic. Let me make this very clear. By probabilistic logic, von Neumann did not mean a logic in which the component propositions are only probable. That would be the familiar logic of probabilities. He meant a logic in which the functions themselves are only probable. Now suppose that, as a matter of fact, A is true if, and only if, B is true, and we should have said just that; if by mistake we said that A implied B, or that B implied A, we would still have made a true statement. This is due to the intrinsic redundancy of logic and is useful if the required conclusion happens to follow from the accidental variant of the proper function.
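A tiny truth-table check, added here only to illustrate the remark, makes the point: in every state of affairs in which 'A if and only if B' holds, the weaker statements 'A implies B' and 'B implies A' hold as well, so the mistaken utterance remains true.

```python
from itertools import product

# Enumerate the four possible states of affairs for propositions A and B.
for a, b in product((False, True), repeat=2):
    iff     = (a == b)          # what we meant: A if and only if B
    a_imp_b = (not a) or b      # what we might have said by mistake
    b_imp_a = (not b) or a      # the other possible slip
    if iff:
        # wherever the biconditional holds, both implications hold as well
        assert a_imp_b and b_imp_a
print("every state satisfying 'A iff B' also satisfies both implications")
```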

Von Neumann's second difficulty came from using neurons with only two inputs. This has two consequences. If all neurons sometimes compute wrong functions and each has just two inputs, the only functions that nets of them can compute with no errors are tautology and contradiction, neither of which depends in any way on its inputs, and consequently both are useless to an organism. An organism needs what is called an error-free capacity(13), which begins with three inputs per neuron and increases rapidly with the number of inputs.

It took me 5 years to develop probabilistic logic sufficiently to handle two things: first, circuits which, like the respiratory mechanism, keep working despite common shift of threshold even under surgical anesthesia, and second, by a proper segregation of the possible errors, circuits which suffered random but limited perturbations of excitation, threshold, and local synapsis, and to show in them a nonzero rate of error-free operation. You will find these tricks described in Agathe Tyche.(14)

Five years ago I was joined by the first of a dozen collaborators, Manuel Blum, who cleared up the first of these problems completely for any number of inputs per neuron. For this he required interaction of signals afferent to the neuron, or interaction on the dendrites, if you prefer.

The second of these problems he solved completely with the assistance of Eugene Prang. This constitutes his Master's thesis, Massachusetts Institute of Technology, Department of Electrical Engineering.(15) Again he required the interaction of afferents to segregate the errors. What he showed was this: given N neurons, with N inputs each, all playing upon one output neuron, the fraction of configurations of inputs to which each cell could indifferently fire or not fire increased rapidly with N. When N was about 40, 85 per cent of all configurations were of no importance even for the worst functions to compute. When N was 100, these don't-care conditions reached 92 per cent of configurations in the neurons of the first rank and 98 per cent in the output neurons.

Thus, as the richness of synapsis and of interactions began to approach those of real neurons, the error-free capacity came to be preserved despite enormous fluctuations in strength of signals, in the value of the threshold and even in synapsis.

Clearly we would like to put reasonable bounds on each of these perturbations, but, for obvious reasons, the variations of signal strength must be greatest in those fine fibers that make up the axonal arbor just short of synapsis, and they are so small that variations in the strength of their impulses are below the noise level of the electrodes with which we would measure them. Hence we must lump their effects into the variations of the threshold which their sum must exceed to initiate a transmitted impulse.

I believe that neurophysiologists generally think that the trigger point for the impulse, propagated without decrement, lies beyond the axon hillock, and, in myelinated neurons, probably at the first node of Ranvier. In the measurements of Lettvin and José del Castillo, made on single fibers in the dorsal column of the cat, the fluctuations were of the order of 10 per cent of the resting voltage. The interpretation of such a measurement in situ is difficult, for one does not know how much of the variation is useful interaction and how much sheer noise.

By noise I mean the fluctuations of resistance, caused by molecular motion, that increase with temperature and with resistance. The best measurements are those of Verveen(16) made on peripheral nerve under well-controlled conditions. In fibers about 6 µ in diameter, excited at a single node, he found fluctuations of the order of 1 per cent of the threshold. For smaller fibers it is much greater, and, for the twigs of terminal arbors, it should exceed their thresholds, producing uncorrelated spontaneous impulses. This has been adequately treated by Allanson(17) under the name synaptic noise. We shall regard it only as a minor fluctuation of threshold of the neuron on which such arbors end. The fluctuation of the trigger point itself is more important, but it is least for the largest cells which present the greatest surface for synapsis and hence compute the functions of most inputs.

When we turn to perturbation of synapsis, we are on even more uncertain ground. I have asked many neuroanatomists for estimates of how many axons go totally astray, and have had answers ranging from 2 per cent to 10 per cent. But we are all familiar with gross abnormalities, such as congenital absence of the corpus callosum, which are only discovered post mortem.

In this connection, let me report the rewards of sticking strictly to our electrical hypothesis of excitation and inhibition which makes them depend on local geometry. It took 5 years for five of us, especially Maturana and Lettvin,(18, 19) to discover what four varieties of ganglion cells in the frog's retina told the frog's brain and to assign each function to a ganglion cell with a given type of dendritic arbor.

These studies led Lettvin to an algorism(20) which adds excitations on a dendrite, divides them by the next inhibition on that dendrite as one descends toward the trigger point, and simply adds the effects of dendrites where they join. With this algorism Maturana and Lettvin(21) were so successful in guessing what the six varieties of dendritic trees of ganglion cells in the pigeon's eye could compute that it took them only 6 weeks to find out what each variety told the pigeon's brain. For the simpler problem of foveal vision of the primate eye it should be only a matter of a few hours of successful experiment.
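As a rough sketch of the rule just described, the fragment below represents one dendrite as a sequence of excitatory and inhibitory events ordered from its tip toward the trigger point, sums the excitations, divides the running total by each inhibition met on the way down, and adds the results where dendrites join. The data structure, the numerical values, and the final threshold are illustrative assumptions of mine, not Lettvin's published formulation.

```python
# Inhibition values are taken to be positive, so no division by zero occurs.

def dendrite_effect(events):
    """Walk one dendrite from tip toward the trigger point, summing
    excitations and dividing the running total by each inhibition met."""
    total = 0.0
    for kind, value in events:
        if kind == "exc":
            total += value
        elif kind == "inh":
            total /= value
    return total

def junction_effect(branches):
    """Where dendrites join, their effects simply add."""
    return sum(dendrite_effect(b) for b in branches)

# Two toy branches converging on the trigger point of one ganglion cell:
left  = [("exc", 3.0), ("exc", 2.0), ("inh", 2.0)]   # (3 + 2) / 2 = 2.5
right = [("exc", 4.0)]                               # 4.0
drive = junction_effect([left, right])               # 6.5
print(drive, drive >= 5.0)     # fires if the summed drive exceeds a threshold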

In short, his hypothesis has disclosed a proper relation of form to function, in which the precise details of synapsis, if it is generally of the right kind, are not too significant. The algorism is precise; the synapsis need not be. So much for probabilistic logic!

Let us now return to the second of von Neumann's unfortunate suppositions, namely, that each neuron had only two inputs.

To secure reliability, he replaced each axon by a bundle carrying the same message, and because each neuron had only two afferents he had a rank of as many neurons as axons per bundle, but each could only receive one axon from each of two bundles. Thus he was limited to Boolean functions, which have only the truth values true and false. Hence, if each neuron made an error once in 200 times, he needed 5000 computing neurons, followed by two more ranks of 5000 neurons to reshape the signal in the bundle for the next computing rank, in order to make a mistake only once in a million times, which, in us, would be about one mistake in each function every quarter of an hour.
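The closing arithmetic can be checked in a few lines; the figure of roughly a thousand operations per second for a neural circuit is an assumption I supply only to reproduce the order of magnitude of "a quarter of an hour".

```python
# If the multiplexed function fails about once per million operations and a
# circuit completes on the order of a thousand operations per second
# (an assumed, illustrative rate), the mean time between mistakes is:
failures_per_operation = 1e-6
operations_per_second = 1_000
minutes_between_mistakes = 1 / (failures_per_operation * operations_per_second) / 60
print(minutes_between_mistakes, "minutes")   # about 17 minutes
```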

The way out of this difficulty is to use neurons that can each receive impulses from all the axons in each bundle. But this alters the logic. If a neuron has two inputs carrying the same message, it may receive two messages asserting the same thing, which is then certainly true; or it may receive none, which is certainly false; or it may receive only one, which is of intermediate truth value. So our logic becomes multiple truth-valued as well as probabilistic. As long as all our neurons are computing the same thing from the same bundles, these intermediate truth values will appear only as a result of error, so that a correct action made in unison by a rank of these neurons will still be Boolean in the large. Jack Cowan(22) has described this in his Many-Valued Logics and Reliable Automata.

Let us look first at the worst case. Our neurons die by the thousands per day and, while dying, often emit strings of impulses which depend in no way on their input. Here the errors cannot be segregated but must be treated, as von Neumann did by his third unfortunate assumption, as if they simply appeared on the axon. Leo Verbeek(23) has discussed this in his On Error Minimizing Neuronal Nets with the same probability of an error once in 200 times. Instead of 5000 two-input neurons per rank, by connecting every input to each computing neuron, he needs only 10 neurons to achieve the same reliability of one error of the bundle per million times.
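The following toy calculation is not Verbeek's construction, but it shows why connecting every computing neuron to all the inputs changes the arithmetic so dramatically: if a rank of n neurons all compute the same function and each errs independently with probability 1/200, the chance that a majority of them err falls off extremely fast with n. Independence and ideal majority restoration are simplifying assumptions of mine.

```python
from math import comb

def majority_error(n, p):
    """P(more than half of n independent neurons err), each erring with prob p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 1 / 200
for n in (1, 3, 5, 7, 9, 11):
    print(f"n = {n:2d}: majority-error probability = {majority_error(n, p):.2e}")
# Around ten neurons the figure is already far below one error in a million,
# compared with the thousands needed in the two-input multiplexing scheme.
```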

This is a vast improvement, and the connection of each to all is the best that can be done if all are to compute the same function. The trouble is that to approach complete reliability the amount of computation per component neuron tends to zero. It is like repeating a message over a noisy telephone again and again to be sure it gets through. We know from communication theory(24) that we can do much better by encoding the message. In fact, we know that if we are willing to transmit our messages at a little less than the capacity of the channel we can transmit them with practically complete reliability despite the noise on the line. We would like to do as well in computation.
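Shannon's point can be illustrated with the capacity of a binary symmetric channel: at an error probability of 1/200 per use, coding allows transmission at nearly one bit per use with vanishing error, whereas repetition drives the rate toward zero. The sketch below only evaluates the standard capacity formula C = 1 - H(p); the choice of p is taken from the error rate quoted earlier, not from the cited reference.

```python
from math import log2

def bsc_capacity(p):
    """Capacity in bits per use of a binary symmetric channel with crossover p."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)   # 1 - H(p)

p = 1 / 200
print(f"capacity at p = {p}: {bsc_capacity(p):.3f} bits per use")   # about 0.955
# A repetition code's rate falls toward zero as reliability improves, while
# Shannon's theorem promises arbitrarily low error at any rate below capacity.
```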

Now, in communication, since the chief noise is in the channel and not in sender or receiver, we suppose that encoders and decoders are noiseless. Here one has to be careful. Encoding and decoding are in a sense computing and this is the very thing we are doing with noisy neurons. The last rank of our computers are motor neurons and if errors arrive on their axons there is nothing we can do to avoid it; but we should be able to design the nervous system to keep the noise down to that ultimate limit.

Cowan and Winograd(25) have succeeded in doing just that, in their monograph Reliable Computation in the Presence of Noise. It is of immense importance both in helping us to perceive order in the complex anastomotic nets of our own brains and in building the theoretical structures for vastly improved efficiency in the design of reliable computers. The price of this is an increased connectivity of the net.

Let me describe it briefly thus: every rank of neurons is supposed to decode the message it receives, to compute and to encode for the next rank. In this process the functions computed by particular components are diversified and distributed over the ranks with the peculiar result that it becomes impossible, even in the absence of noise, to assign one particular function to a particular neuron. In one computation it is serving in one capacity and, at the next moment, in a different capacity. In fact the coding is in space, and to point at a neuron in the frontal cortex and ask what is its function is like asking what is the function of the third letter of words in the English language. It takes a probabilistic logic which is multiple-truth-valued even in the absence of noise to specify the circuit action of these minimally redundant reliable automata. But when one asks, not concerning the details of particular components but concerning the tissue, what is its logical function, the question is meaningful and the function will have the Boolean ring of true or false.

Gentlemen: in all that I have said, I have ignored all other processes that help to achieve reliability for one simple reason, that a brain is a state-determined system at every instant. Every rank of neurons at any moment must respond, in the state it is then in, to what actually arrives upon it at that instant, and then, in the act of computing, there is simply no time for these other processes to intervene. The act of computation is an irreversible act because it discards information that is then irrelevant to its decision.

Now that we have a proper theory for the synthesis of reliable organisms from unreliable components, I would like to hark back to the origin of my notions in genetics. It may well be that the Cowan-Winograd notions will be important there, for the variations in composition of deoxyribonucleic acid (DNA) seem to be greater than in the ribonucleic acid (RNA) that it specifies, and those of RNA greater than in the ultimate protein. It will be fun if the equation for the brain serves as a model for the equation of its generation.

In closing let me remark that such success as both of these theories have had in biology is due to this, that they rest on the hypothetical interactions of postulated entities, call them messages, which have been so constructed as to explain the causal relations of events. This is in the spirit of Galileo's Two New Sciences, and it treats man as an automaton governed solely by the laws of physics as Descartes proposed.

Discussion

Dr. Joel Elkes (Washington, D. C.): I would like very briefly to comment on Dr. McCulloch's remark that excitation and inhibition depend, as he said, on local geometry, by which he means, I imagine, not only cellular topology, but also topology at the subcellular level.

There are a number of places on the neuron which may be transactional sites. There is the synaptic knob, the synaptic cleft, a perisynaptic barrier, the subsynaptic membrane, the axon hillock and the node, around which, as Dr. McCulloch emphasized, a great many fluctuations can take place.

Here I think Dr. McCulloch has done us a service in drawing attention to the complexities of inhibition as a state, rather than one attributable to a single substance. There may well be such an inhibitory substance; but the case for substance P is still uncertain, and γ-aminobutyric acid entered a recent conference as an inhibitor and came out, I think, as a metabolite, the role of which is far from clear. There are a number of other substances which suggest themselves as candidates. In other words, inhibition may well depend upon an interaction-in-time of several metabolic events at some transactional sites.

There are various ways of approaching the problem, including the topical application of substances to single cells in small areas of the brain by the multibarrelled micropipette, as Dr. Salmoiraghi is doing in our laboratory. Here, an unevenness of response is clearly showing up.

In other words, we have to consider a topology at cellular and subcellular levels, to weigh a topology in time, and to regard these subcellular sites as transducers of events in time. The so-called neurohumoral transmitter substances and the inhibitory agents, both identified and as yet unidentified, may well play a role far more pervasive than is implied by such terms.

They may play a part in the organization and encoding of events-in-time and the structuring and building of the molecular models which we mysteriously know as the memory trace. I am thinking here of variable interactions of these small molecules with proteins, or incorporation into proteins, conferring specificity unto such proteins. They may well, in some way, be released when recognition occurs. What I am saying is that the excitatory or inhibitory properties of this or that substance may depend on where, when, in what proportion and in what company this or that substance is released and also that such releases from bound states may be highly specific, depending upon some kind of read-out and recognition. The tremendous structuring power of delay and inhibition in the evolution of complex functions within the central nervous system may perhaps depend upon some such processes.

I think that the message which I heard from Dr. McCulloch is that one has to look at events and transactions in time. Small molecules interacting reversibly with neural or glial proteins may well prove to be regulators in the encoding or evocation of traces of events-in-time.

Dr. Rioch: There are two questions from the audience for Dr. McCulloch. The first is:

In the mathematical model you proposed for a great number of afferents impinging on a single nerve cell, was the assumption made that each of these afferents had a similar effect on the receiving neuron? If so, how can this assumption be defended in the light of modern neurophysiological demonstrations that different zones of the cell body react differently to incoming stimuli?

Now, the second question:

Are the mathematical models you discussed based on the assumption that the relationships are linear? Can this assumption be defended in the case of the brain?

Dr. McCulloch, would you like to answer?

Dr. McCulloch: May I take up the three statements that I think should be covered here. The first was concerning the role of chemical agents. I spent quite a while working for chemical warfare. I am fully aware that chemicals do have actions on nerve cells. My only point is that one does not have to take them into account in the theory that I was presenting; let them be what they will.

Point two! I am well aware that the location of impulses on the cell body and dendrites of a given neuron is important. The question was: Is there a general regularity in this such that one can, by looking at the anatomy and knowing where those impulses are coming from, state what kind of functions that neuron can compute?

Let me say it briefly. If impulses approach a cell body from the dendritic end and end with synapses near the axonal aspect of the cell, they are, as far as I know, in every place where they have been so studied, excitatory. When impulses ascend by an axon passing a cell body to end (to use Cajal's phrase) by fine branches on or among the dendrites, or (to use Nauta's phrase) when they ascend the dendrites, going out of their way to divide where it divides and ending in such small structures that I cannot see them, then in every case where we have this arrangement, whether it is in the bulboreticular inhibitor's final relay to the motor neuron, in the climbing fibers of the cerebellum which come up from the inferior olive, or in the U-fiber system of the cerebral cortex, the first and foremost action is always one of inhibition.

What I tried to do was to point out this, that we have now a sufficiently good algorism for the arrangement of synapses on cells to assign a proper function, or a proper group of functions, to that dendritic arborization. And this is where we have come ahead, and the algorism is very simple. You add excitations when they occur together on a dendrite and you divide the excitation that they have produced by any inhibition nearer to the axon hillock. With a simple arrangement one can figure out these functions surprisingly well.

Point three! I am not sure I understood the third question concerning linearity. The neurons that I am talking about are extremely nonlinear in their behavior. The combinations, i.e., the interaction of the afferents, are extremely nonlinear. The very fact that you can have as many as 92 per cent of the configurations of inputs to which a cell can respond indifferently by firing or not firing shows that it cannot be a majority organ. Now, majority organs do simulate linear affairs surprisingly well when there are enough inputs, but here, certainly, the reverse is true. I would say they were nonlinear interactions everywhere that I know about them.

Footnotes

1 Reprinted from Disorders of Communication, Vol. XLII: Research Publications, A.R.N.M.D., pp. 19-28, 1964.

2 This work was supported in part by the U.S. Army Signal Corps, the Air Force Office of Scientific Research, the Office of Naval Research, the National Institutes of Health (Grants NB-01865-05, 7H-04737-03), and the National Science Foundation (Grant G-16526).

3 With the assistance of M. Arbib, J. A. Aldrich, H. J. C. Berendsen, M. Blum, J. D. Cowan, W. L. Kilmer, A. Johnson, N. Onesto, M. ten Hoopen, L. A. M. Verbeek, A. A. Verveen and S. Winograd.

References

Berendsen, H. J. C.: An NMR study of collagen hydration. Doctoral thesis, Rijksuniversiteit te Groningen, The Netherlands, 1962.

Kubie, L. S.: A theoretical application to some neurological problems of the properties of excitation waves which move in closed circuits. Brain, 53: 166-177, 1930.

Lorente De Nó, R.: The cerebral cortex: architecture, intracortical connections, and motor projections. In Physiology of the Nervous System, by J. F. Fulton, Ed. I, Chap. XV, pp. 291-325. Oxford University Press, New York, 1938.

McCulloch, W. S. and Pitts, W. H.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5: 115-133, 1943.

Kleene, S. C.: Representation of events in nerve nets and finite automata. In Automata Studies, edited by C. E. Shannon and J. McCarthy. Princeton University Press, Princeton, N. J. 1956.

Winder, R.: Threshold logic. Doctoral thesis, Princeton University, 1962.

Lloyd, D. P. C.: Facilitation and inhibition of spinal motor neurons. J. Neurophysiol., 9: 421-438, 1946.

Galambos, R., Schwartzkopff, J. and Rupert, A.: Microelectrode study of superior olivary nuclei. Am. J. Physiol., 197: 527-536, 1959.

McCulloch, W. S., Lettvin, J. Y., Pitts, W. H. and Dell, P. C.: An electrical hypothesis of central inhibition and facilitation. In Patterns of Organization in the Central Nervous System, Research Publications of the Association of Nervous and Mental Disease, Vol. 30, Chap. 5, pp. 87-97. The Williams & Wilkins Company, Baltimore, 1950.

Howland, B., Lettvin, J. Y., McCulloch, W. S., Pitts, W. H. and Wall, P. D.: Reflex inhibition by dorsal root interaction. J. Neurophysiol., 18: 1-17, 1955.

Pitts, W. H. and McCulloch, W. S.: How we know universals. The perception of auditory and visual forms. Bull. Math. Biophys., 9: 127-147, 1947.

von Neumann, J.: Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Automata Studies, edited by C. E. Shannon and J. McCarthy. Princeton University Press, Princeton, N. J., 1956.

Shannon, C. E.: Zero-error capacity of a noisy channel. Bell Monograph no. 2760, 1956.

McCulloch, W. S.: Agathe Tyche: of nervous nets, the lucky reckoners. In Mechanisation of Thought Processes, Proceedings of the 10th Symposium, National Physical Laboratory, Teddington, pp. 611-625. Her Majesty's Stationery Office, London, 1959.

Blum, M.: Reliability of biological computers. Master's thesis, Department of Electrical Engineering, M. I. T., 1961.

Verveen, A. A.: Fluctuation in excitability. Drukkerij Holland N.V., Amsterdam, 1961.

Allanson, J.: The reliability of neurons. 1st International Congress on Cybernetics, Namur, 1956.

Lettvin, J. Y., Maturana, H. R., Pitts, W. H. and McCulloch, W. S.: What the frog's eye tells the frog's brain. Proceedings I.R.E., Vol. 47, no. 11, pp. 1940-1951, Nov. 1959.

Lettvin, J. Y., Maturana, H. R., Pitts, W. H. and McCulloch, W. S.: Two remarks on the visual system of the frog. In Sensory Communication, edited by W. A. Rosenblith, Chap. 38, pp. 757-776. John Wiley and Sons, New York and The M. I. T. Press, Cambridge, Mass., 1961.

Lettvin, J. Y.: Form-function relations in neurons. Quarterly Progress Report No. 66, July 15, 1962, Research Laboratory of Electronics, Massachusetts Institute of Technology.

Maturana, H. R. and Lettvin, J. Y.: In Transactions of the 22nd International Congress of Physiological Sciences, Leiden, Sept. 1962. To be published.

Cowan, J. D.: Many-valued logics and reliable automata. In Principles of Self-Organization. Transactions of the University of Illinois Symposium on Self-Organization, June 1960, edited by H. Von Foerster. Pergamon Press, 1962.

Verbeek, L.: On error minimizing neural nets. In Principles of Self-Organization. Transactions of the University of Illinois Symposium on Self-Organization, June, 1960, edited by H. Von Foerster. Pergamon Press, 1962.

Shannon, C. E. and Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press, 1949.

Cowan, J. D. and Winograd, S.: Reliable Computation in the Presence of Noise. The M. I. T. Press, Cambridge, Mass., 1963.


