THE BRAIN AS A COMPUTING MACHINE1

W.S. McCulloch

Introduction

Electrical engineers distinguish between problems of strong currents - or power engineering - and of weak currents or communication engineering. Computing machines, including brains, belong to the latter specialty. Man's brain is much the most complicated of computing machines, and it requires power to keep its relays in the operating range of voltage. It is battery-operated, each relay having its own battery. These are charged by a series of chemical reactions beginning with sugar and oxygen and ending with carbon dioxide and water. When these materials are lacking, or any step in the chemical reaction is blocked, the circuit action goes completely wrong. Depression, mania, delirium, stupor, and coma are called functional psychoses because they disappear when the proper voltage is reestablished. To break up some of these conditions, the brain is stimulated electrically, and an epileptic fit produced. This discharges the batteries so rapidly that it exhausts them temporarily because, in a fit, the brain uses from 8 to 80 times as much energy as it normally requires. After the fit, it takes some time to replenish the batteries and get ready to go again. We know that it is not those chemical reactions which create the voltage but the voltage itself that matters, for a nerve without oxygen, and so without voltage, can be recharged from a common dry cell; and when the nerve is up to voltage, it transmits signals normally. The brain heats the pint of blood that flows through it every minute one degree Fahrenheit. That is a quarter of a kilogram calorie per minute which, in electrical units, is only about 17 watts. To get rid of it as heat is all that brains can do with spent power. So long as that power is supplied, whether we are communication engineers or psychiatrists, we may forget it.
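The conversion is easily checked. A quarter of a kilogram calorie per minute, at 4,184 joules to the kilogram calorie, gives

\[
P \;=\; \frac{0.25 \times 4184\ \text{J}}{60\ \text{s}} \;\approx\; 17\ \text{W}.
\]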

Instead of power, let us think in terms of information conveyed by signals. These signals can be divided into two kinds. First of these is the analogical signal in which the quantity of some variable changes continuously with that which it conveys, like distance in a slide rule or current in a telephonic repeater. These require precision of workmanship proportional to precision of performance, scarcely can reach the sixth decimal point even in a beam balance, and cannot be combined to get the next decimal place. The second type comprises the logical, or digital, signals which are divided into a few possible quantities whose number in separate places or times is the message to be conveyed - like pins in a cribbage board or dots and dashes in a telegraphic relay. Such devices require only about ten percent precision. Their signals are sharpened again at each relay. They can be combined to secure any number of decimal places at no extra cost per place.
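The difference can be sketched in modern terms (Python; the ten percent tolerance and the length of the chain are illustrative assumptions, not the author's figures): because a digital relay restores its signal by thresholding, a pulse survives a long chain of relays intact, where an analogical signal would accumulate the noise of every stage.

    import random

    def relay(signal):
        # Each relay receives a pulse (1) or no pulse (0), corrupted by as
        # much as ten percent noise, and sharpens it anew by thresholding.
        noisy = signal + random.uniform(-0.1, 0.1)
        return 1 if noisy >= 0.5 else 0

    def chain(signal, stages=1000):
        # Pass the signal through a thousand relays in series.
        for _ in range(stages):
            signal = relay(signal)
        return signal

    assert chain(1) == 1 and chain(0) == 0  # the pulse arrives unspoiled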

The brain is a logical machine. Each of some ten billion relays has only two states: pulse or no pulse. Each relay is a living cell, shaped something like a vegetable, with leaves like a carrot's, a body like a turnip's, and a long thin tap root like alfalfa's. The cells vary in size; the biggest have a bulb, say two-thousandths of an inch in diameter, fronds a tenth of an inch long, and a tap root a ten-thousandth of an inch in diameter and six feet long. Each of these cells keeps its outside less than a tenth of a volt positive to its inside until it is excited. It can be excited anywhere by driving its outside locally negative. When this happens, current from nearby positive parts of the cell flows into the negative region, partly recharging it but extending the negativity to the region whence the current came so that it, in turn, becomes a sink for current from still farther parts of the cell's outer surface. Thus the pulse of current, shaped like a smoke ring, is propagated along the threadlike cell. The membrane surrounding this cell is a leaky capacitor, its voltage supported by the local battery. The rate at which the pulse travels is determined by the distributed resistance, distributed capacity, and distributed source of voltage of the cell, so that cells can be thought of as distributed repeaters. The fastest, or fattest, conduct at about 150 yards a second; the slowest, or thinnest, at a foot a second. As the fast conductors generally connect distant places and the slow ones near places, the temporal effects of distance are minimized. The pulse itself has a rising phase of approximately a tenth of a millisecond, and it takes several tenths of a millisecond for the cell to be ready to transmit another pulse; it cannot do this often in quick succession without fatigue. Between the time when pulses are delivered to a relay to excite it and the time its pulse starts, there is a delay of about half a millisecond. To excite one cell, pulses from several cells usually must arrive close together, within one- or two-tenths of a millisecond, and a signal to stop a nerve cell from responding to pulses otherwise sufficient to trip it must precede them by about half a millisecond. Nerve cells sometimes conduct trains of 200 pulses per second but usually average nearer 20 per second, so that the duty cycle is from 1/10 to 1/100 of their greatest possibilities.
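Such a relay can be sketched in modern terms (Python; the threshold, the pulse trains, and the function name are illustrative assumptions): time is counted in synaptic delays, the cell fires one delay after enough excitatory pulses coincide, and an inhibitory pulse arriving in time vetoes the response.

    def relay_cell(excitatory, inhibitory, threshold=2):
        # excitatory, inhibitory: lists of pulse trains (1 = pulse, 0 = none),
        # one train per incoming fiber, indexed by discrete time step.
        steps = len(excitatory[0])
        output = [0] * (steps + 1)
        for t in range(steps):
            if any(train[t] for train in inhibitory):
                continue  # a well-timed inhibitory pulse stops the response
            if sum(train[t] for train in excitatory) >= threshold:
                output[t + 1] = 1  # the cell fires one synaptic delay later
        return output

    # Two coincident pulses trip the cell; a lone pulse does not.
    print(relay_cell([[1, 1, 0], [0, 1, 0]], [[0, 0, 0]]))  # [0, 0, 1, 0]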

In a comparison of nerve cells and ordinary relays, the nerve cells are faster than electromagnetic devices, about as fast as thyratrons, about one-thousandth as fast as vacuum tubes. However, they are so much smaller that, though their voltage gradients are about the same as those from cathode to grid to plate, they take far less energy to operate. A large building could not house a vacuum-tube computer with as many relays as a man has in his head, and it would take Niagara Falls to supply the power and the Niagara River to cool it. Eniac, with some 10,000 tubes, has no more relays than a flatworm. Moreover, nerve cells are cheap. If it cost a million dollars to beget a man, a nerve cell would not cost a mil, and until cathode, grid, and plate can be printed on plastic with only monomolecular films between them, engineers cannot hope to compete with nature. Even then, multiple grids would have to be worked out for gating the signals, because on big nerve cells what determines signal or no signal is special combinations from among as many as a thousand terminations of branches of tap roots of various nerve cells on the body of a single cell. At the present moment, the most exciting parallel is in the configuration of electrodes on transistors. Computing machine designers would be glad to exchange their best relays for nerve cells. One reason for that is their long life, since man gets no replacements from the day he is born until the day he dies. Every nerve cell in a man's head is as old as he is, and most of them are still alive and working. Mechanically, they are more stable than other relays - and each keeps repairing itself. Of course, poisons, germs, failure of sugar or oxygen, failure of blood supply, or a bullet through the head may kill enough to make us run amok. These troubles are called organic psychoses because the damage is found in the brain at death.

A Calculus for Nerve Cells

The logician Boole, who first attempted a logical machine, was responsible for a calculus in which the only values of the variables are 0 and 1 and which is applicable to the pulses of nerve cells. Shannon was the first to apply such a calculus to nets of relays. Still, because he was interested in circuits either open or closed, instead of in the transients of opening and closing circuits, he emphasized the logic of their simultaneous or enduring states rather than the dependence of one signal upon those that tripped its relay. Consequently, time is not made a significant variable. So his is rather the logical relations "both A and B," "A and/or B," and "A but not B" than the sequential relation of implication, A at time t only if B at time t-1, wherein the unit of time is the synaptic delay or closing time. Walter Pitts and the author took this factor into consideration. They made a complete calculus for these signals by taking the calculus of propositions of Whitehead and Russell from the Principia Mathematica and subscripting the symbol for the signal of a given relay by the time of that signal, measured in synaptic delays from any arbitrary beginning. This calculus is simple, much simpler than ordinary arithmetic, and it is enormously powerful. By it, we proved that a nervous system - even without regeneration - could compute any logical consequence of its input or, in Turing's phrase, compute any computable number. This is the substance of our Theorem II, which von Neumann uses in teaching the theory of all digital machines. The theory is ready and waiting to be used, and there is only one limitation. You have but half the calculus if you have only relays that never fire unless they are tripped; that is, you have signals which imply "A and/or B fired," "both A and B fired," and "A and not B fired," but none which implies "neither A nor B fired." In the eyes of the common scallop, there are relays which keep on firing unless light falls on the eyes. In man, there are no such relays, but he always has a background of nervous activity which serves the same function. Consequently, we have for him the whole calculus of atomic propositions. However, perhaps this calculus does something more important, for it separates physics, for which the signals are only something that happens or else does not happen, from communication engineering, for which these same signals are also either true or else false. If you press on your eye, you will see a light when there is no light. The signal is just as physical as ever, but because it arose in the wrong way or in the wrong place, it is a false signal, just as false as a ring on the telephone when lightning strikes the wires. It is because communication engineering deals with signals, true or else false, that neurophysiology is part of engineering, not merely of physics. To understand logical calculi, it pays to learn symbolic logic. With it, you can write out at once the specifications for any number of nets to do anything.
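In the notation of that calculus, writing N_i(t) for the assertion that relay i fires at time t, the relations above take such forms as

\[
N_c(t) \equiv N_a(t-1) \vee N_b(t-1), \qquad
N_c(t) \equiv N_a(t-1) \wedge N_b(t-1), \qquad
N_c(t) \equiv N_a(t-1) \wedge \neg N_b(t-1),
\]

the unit of time being one synaptic delay.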

Notice its application to sensation. Consider chains of relays from the sense organs to the relay racks in the brain. A signal in the relay rack implies one nearer the sense organ, and it, in turn, one nearer still, and so backward in time until we get to the sense organ, and so to the outside world. Thus, what happens in our heads materially implies the world impingent on our sense organs. This material implication is not a symmetrical relation. It extends into the past, not into the future. What happens in our heads does not imply what is to happen in our arms and legs. We send down volleys of signals, but these play on the complicated servomechanisms which keep us right side up and adjusted to the world about us. These have their own input from the world, so what happens is in large part determined by them. We intend; they act. Because what we intend and what we do are not always the same, we are forced to distinguish between what we will and what we shall do. Hence the notion of the will. Any computing machine which can detect a discrepancy between what it calculated and its actual output may be said to have a will of its own. The newer machines probably will be able to sense errors and erase mistakes in the output as they must in their memories. Consequently, they may prefer magnetic tapes to punch cards.

The Memory

To compute as we do, a machine must have some kind of memory. Kubie was the first to propose, and Lorente de Nó the first to prove, that brains have reverberating chains of nerve cells. These are just regenerative circuits in which a set of signals patterned after some input can go round and round. The set preserves the figure of the input but no longer refers to any one particular past time. It says that there was some event of that figure. It introduces into our logical calculus the existential operator and the predicated figure, or idea, simultaneously; that is, there was some event such that it was of this figure. Existential operators combined with negation yield universal operators, and the figures, predicables, or ideas, are called universals. Any kind of memory will serve to free universals of the particularity of their origin in time. A Victrola record of Yankee Doodle does not tell you when it was recorded. But every other form of memory is only a surrogate for reverberating chains. You may use an acoustic tank, a latticed grill within an iconoscope, a wire tape, or punch cards - but the computers must be able to put in and take out information and so complete the loop round which the information goes. It could be done with flip-flops or with chains of thyratrons; only it would cost too much in space and energy. Nerve cells are cheap, small, plentiful, take little energy, and are not too fast. Our brains have many of these closed chains, enough so that they can run for eight hours on end without much loss of information from this first kind of memory. This becomes important as we grow old. Until then, use leaves a trace whereby the oft-repeated and successful act becomes preferred, and this is a second kind of memory. Also, until then, things seen once leave pictures that fade little, if at all, as years go by. This third kind of memory in several ways resembles a series of snapshots of the world, filed in the order taken and punched for recall at a set of points, each corresponding to one of a pair of opposites, whereby we sort out things that seem familiar to us. We may use these punches for recall, or we may ripple through the stack of shots in the order they were taken and filed. But, as with photographs that have to be developed, this memory is not available immediately after it is shot but must go through the processes of recall and recognition. From the work of Stroud, it is clear that this memory may contain shots taken as often as ten per second. From this, it is not unreasonable to guess that we have, by the time we die, a series of some 10^10 shots stored in our heads. Now, if every shot had 33 places where it might be punched for one of a pair of opposites, we would be able to find any one of 2^33, which is about 10^10, shots, as is done in a calculating machine. This would account for our ability to recall items on a basis of similarity or dissimilarity, but one would have to be lucky to pick the right 33 pairs of opposites. We probably use far more pairs, perhaps hundreds of them.
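The punched-card figure can be sketched in modern terms (Python; the filing scheme and the names are illustrative, not anatomical): thirty-three punches, each one of a pair of opposites, suffice to single out one shot among 2^33, which is about 10^10.

    snapshots = {}

    def punch_code(attributes):
        # attributes: 33 booleans, one for each pair of opposites.
        code = 0
        for bit in attributes:
            code = (code << 1) | int(bit)
        return code

    def file_shot(attributes, shot):
        snapshots[punch_code(attributes)] = shot

    def recall(attributes):
        return snapshots.get(punch_code(attributes))

    key = tuple(i % 2 == 0 for i in range(33))
    file_shot(key, "a face seen once, years ago")
    print(recall(key))  # one shot found among 2**33 ~ 10**10 possible codes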

No one yet has measured the number of items which a man can store per shot, but it is probably less than a thousand spots, each a jot (a mark) or else a tittle (no mark). So, allowing for the code to find a shot and for the contents of each shot, a total of 10^10 shots gives a total memory of some 10^13 spots. This is more items of information than there are connections of relay to relay. One might solder in the multiplication table, or his skilled acts generally, but not this snapshot memory. It begins to look as if these shots were filed in that small, deep portion of the brain which is spoiled regularly by too much alcohol, producing what is called Korsakoff's psychosis. There we may look to something about as big as a molecule of protein for each spot of memory.

In this respect, Heinz von Foerster (Das Gedächtnis, Vienna, Austria, 1948), by applying modes of reasoning and the data of theoretical physics to the best psychological data on the learning of nonsense syllables, evolved hypotheses which indicate that:

  1. The traces left by experience are probably quantized alterations of protein molecules, some 10^21 in number.

  2. The total power required for the maintenance of this memory is of the order of 10^-2 watt.

  3. The energy per step is in the near-infrared, say 28,000-angstrom units, which is in the range of enzymic resonance.

His theory requires that traces, to survive, must be reconstituted in new carriers, and only protein molecules are known to have the property of producing facsimiles of themselves. This fits the known fact that protein denaturants destroy memory.

We have, of course, a substitute for memory, inasmuch as we can make any sort of mark and, at some future time, sense it again. But this, like all the rest, is but a substitute for reverberating chains. We must close the loop when we make use of it.

Negative Feedbacks in the Nervous System

There are other closed paths within the central nervous system, through it and the body, and through both and the world, which are negative feedbacks. Each of these serves to establish some state of the system. They respond to every deviation from the established state by returning the system toward that state, and this state is thus the goal, aim, or end of that operation.

Those that go through the world about us are called appetitive, for by them, like the self-guided torpedo or the gun with automatic fire control, we hunt our prey. Those that go through the body and brain are called reflexes, and they serve to give dynamic stability to our temperature, blood pressure, circulation, water balance, blood sugar, blood carbon dioxide, and so forth, as well as to our posture and motion at rest and under many conditions of acceleration, including gravity.

At the time that Pitts and the author wrote the first joint paper, we could secure existential operators to free ideas of temporal particularity only by using regenerative feedback. Then it became clear that every reflex, or negative feedback, might be so used. For example, the pupillary reflex tends to keep constant the excitation of retinal receptors by adjusting the diameter of the pupil oppositely to the brightness of the field of vision. Thus, it tends to free retinal excitation of the gratuitous particularity of the amount of illumination. The number of signals relayed through lower parts of the brain to the cortex, or bark of the brain, has another inverse feedback which also acts as an automatic volume control; for, as more impulses reach the cortex, it decreases the number relayed thither. These enable us to see the same thing regardless of the brightness. The existential operator is now "There was some brightness such that it was this or that thing."
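Sketched in modern terms (Python; the gain and the set point are illustrative, not physiological), the reflex is an automatic volume control that holds retinal excitation to a fixed value whatever the brightness:

    def pupil_reflex(brightness, steps=50, gain=0.1, set_point=1.0):
        area = 1.0  # pupil area, in arbitrary units
        for _ in range(steps):
            excitation = brightness * area
            error = excitation - set_point
            area -= gain * error  # negative feedback opposes the deviation
        return brightness * area

    # Widely different brightnesses yield nearly the same excitation.
    print(pupil_reflex(0.5), pupil_reflex(5.0))  # both approach 1.0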

Another reflex is even more important here. The points of the retina are mapped, one for one, on the back of the midbrain, and the cells there are so connected as to compute a vector from the central line of gaze to the apparent center of gravity of anything in the visual field. They relay this vector to the cells that control the eyes, which turn them so as to reduce the vector. That is, the eyes move to center the apparition; the vector falls to zero, and the eyes come to rest with the form centered. By moving the apparition to the center of the field, this reflex rids the apparition of the gratuitous particularity of the position in which it first was detected. Thus, "There was some place such that it was this or that apparition." In a similar manner, every reflex takes some input through a series of values of some variable to some one value of that variable. This terminal value, being fixed in the design of the circuit, seems God-given and consequently is called the canonical position or brightness.
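The centering reflex admits the same kind of sketch (Python; the gain and the coordinates are illustrative): the computed vector is the error, and the eyes are turned until it falls to zero.

    def center_gaze(target, steps=20, gain=0.5):
        gaze = [0.0, 0.0]  # the central line of gaze
        for _ in range(steps):
            # The vector from the line of gaze to the apparition's center.
            error = [t - g for t, g in zip(target, gaze)]
            gaze = [g + gain * e for g, e in zip(gaze, error)]
        return gaze

    print(center_gaze([3.0, -2.0]))  # the gaze comes to rest on the target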

There is still another way of making existential operators and their corresponding universals. Most of our sensations, by nervous connections, map the continuous variables of sense upon a mosaic of relays. Imagine layer upon layer of such mosaics so stacked that the relays form rough columns. Let the incoming signals slant up through the layers. Fix the threshold of each relay so that it will trip only if it receives a signal from the input and a simultaneous signal to all relays of its layer. And finally, let the signal from each relay descend vertically. If, in such a matrix, a stream of signals comes in over a single slanting input and the layers be successively alerted, the output will move step for step in the direction of the projection of the slant upon the basic mosaic. Hence, if there be a spatial pattern of input, its form will be translated horizontally in that direction. In this way, we make, from something given in one position, the same thing in all positions along any line. Now pitch maps on the auditory cortex so that octaves span approximately equal stretches, and the auditory cortex is such a matrix. There is an ascending and descending pulse gating relays of the cortex, sweeping it every tenth of a second. Hence I believe that we detect chords by making the translations along the axis of pitch while we preserve the interval. In fact, we do detect chords regardless of pitch at about ten per second.
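In miniature, and in modern terms (Python; the slant of one relay per layer and the pattern are illustrative assumptions), the matrix works thus: as the layers are alerted in succession, the input pattern is emitted at every translation, its intervals preserved.

    def scan_translations(pattern, layers):
        # The slanting input displaces the pattern one relay per layer.
        width = len(pattern)
        outputs = []
        for layer in range(layers):
            shifted = [0] * width
            for i, v in enumerate(pattern):
                if i + layer < width:
                    shifted[i + layer] = v
            outputs.append(shifted)
        return outputs

    # A chord's intervals are preserved while its pitch is translated.
    for row in scan_translations([1, 0, 0, 1, 0, 0, 0, 0], 4):
        print(row)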

The Visual Cortex

We know far more about the visual cortex. Here the visual field maps so that radial distance on the cortex is roughly proportional to the logarithm of the angle from the line of gaze. This cortex is again a matrix, but so specialized that it can be recognized easily. The ascending information, in effect, radiates toward and from the central point. Hence, in effect, this cortex makes the dilatations and constrictions of the centered figure. Consequently, its output is the same regardless of the size of the figure in the input. Every part of this visual cortex is connected to a host of scattered points in a second visual cortex. Hence it comes about that, to any figure in the output of the first visual cortex, there will correspond by chance some point of maximal excitation in the second visual cortex. The local clipping circuits suppress activity at all points where it is less than the maximum, so the position of maximum excitation determines the output. Again, the sweep of scansion is about ten per second, the so-called alpha rhythm, and again, this is the number of shapes that can be seen per second without blurring or apparent motion.
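Why the logarithmic map secures invariance of size can be shown in a few lines (Python; the radii are illustrative): a dilatation of the centered figure becomes a uniform translation along the log-radius axis, and translations the scanning matrix already removes.

    import math

    def log_map(radii):
        # Radial distance on the cortex ~ logarithm of the visual angle.
        return [round(math.log(r), 3) for r in radii]

    small = [1.0, 2.0, 4.0]          # a figure at one size
    large = [r * 10 for r in small]  # the same figure dilated tenfold

    shifts = [b - a for a, b in zip(log_map(small), log_map(large))]
    print(shifts)  # a constant shift of log 10: the shape is unchanged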

From these well-established anatomic and physiologic facts, it follows that if one were to stimulate a spot on the first visual cortex of a man's brain exposed at operation, he ought to see a blob of light, like a setting sun, somewhere in the visual field, and it ought to move when he turns his eyes. That is exactly what men actually report when they are so stimulated. However, if one were to stimulate the second visual cortex with appropriate electric pulses, he ought to see a form, like a hand, a house, a tree, or the like; it should not seem to be at any one place in the visual field, nor should it move when he turns his eyes, nor should it be of any particular size, but should be merely a shape freed of position and size. That is just what the patients do report.

These last examples show clearly a process of computing invariants, each of which is a sum, over all members of some group of transformations, of the values assigned by an arbitrary functional to the transforms of the excitation at points and times in a given matrix. To define universals without any loss of information, one would need a matrix of these invariants, the matrix being of as many dimensions as the original matrix. In practice, a much smaller number of invariants usually suffices.
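In symbols, after the 1947 paper with Pitts: if G is the group of transformations T and \varphi the pattern of excitation over the matrix, such an invariant has the form

\[
a[\varphi] \;=\; \sum_{T \in G} F\left[T\varphi\right],
\]

and it is unchanged when \varphi is replaced by any transform S\varphi, since S merely permutes the terms of the sum.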

Here it should be pointed out that, in the case of the reflex that centers an apparition, the cerebral circuits actually assign the value zero to all transformations but the last. Thus it, and probably every other device for securing invariants corresponding to universals derived from particulars, is but an example of this procedure. In short, this is a general description of all coding devices. Inasmuch as in relay nets, one can convert any figure of pulses at one time over a given number of relays into a figure of impulses over one relay at as many times as it had required relays at one time - and vice versa - one cannot guess how a given universal will be represented in an unknown net, for all one knows is that it must be invariant. Moreover, there is no reason why the same invariant can be abstracted in only one part of a net. Each half of the visual field is represented in the opposite half of the brain, but a man can recognize a straight vertical line in either half equally well. In short, one would have to know the anatomy and physiology of the brain in detail to guess where to look and what to look for, and he still would not know enough to guess right more than once in a blue moon.

Actually, however, neither does anyone know the entire blueprint of Eniac, and certainly no one knows whether Eniac actually is wired according to the blueprint. We know from the chemistry of our chromosomes that our genes do not contain enough information to specify all the connections of our nerve cells. They can specify only a moderately complicated secondary machine which must build a tertiary machine, and so on until the last builds the final brain. The next to last is never completely superseded or separated from the last but remains to tend the machine; and when any part is busy or out of commission, it shifts the problem to computers that are free to work on it. In this, it resembles the differential analyzer at the Massachusetts Institute of Technology. Consequently, it is very difficult for us to discover even gross defects or to guess the function of a part from what happens when it is destroyed. Brains may seem to be all right when several parts have ceased to function. Any machine like Eniac, which does many things in parallel, is hard to troubleshoot for the same reason. Parts may fail, and answers continue to be computed. But human brains are incomparably worse in this regard. I believe von Neumann thinks that in a minimal nonparallel machine made of a few thousand vacuum tubes, one malfunction per four hours is tolerable, and this requires that the probability of error in the function of a relay be, say, 1 in 10^12 unitary operations (pip or no pip). The best electromagnetic relays probably fail once in 10^9, that is, a thousand times too often. Nerve cells are not that good, and they cannot be replaced, but they are cheap and plentiful. Hence the worst of our difficulties in troubleshooting.
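The order of magnitude is easily checked on illustrative assumptions: a few thousand relays, each deciding, say, 10^5 times a second, perform in four hours about

\[
3\times10^{3} \;\times\; 10^{5}\ \text{s}^{-1} \;\times\; 1.44\times10^{4}\ \text{s} \;\approx\; 4\times10^{12}
\]

unitary operations, so one tolerated malfunction in that span demands an error rate near 1 in 10^12.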

The Unit of Information

A unit of information, as Wiener has suggested, may be described as the decision whether a relay is or is not to fire in one relay-time. An ensemble of N nerve cells can be, equally well by chance, in any one of 2^N possible states. Every unit of information halves the number of possible states; that is, it subtracts one from N. Hence the amount of information is the logarithm (to the base two) of the reciprocal of the probability of the state - or the negative of the entropy of the system. Thus, "entropy never can decrease" means exactly the same as "information never can increase." Actually, entropy increases, and signals become corrupted on passing through any communication system. Corruption may be defined as the ratio of information in the input to information in the output. Now, consider man. Each eye has a hundred million photoreceptors, each capable of one decision per millisecond, so the input information for the two eyes is at least 2 x 10^8 units per millisecond. Man's output in speech can be estimated by noting that a telephonic device that samples speech once per millisecond and emits one pip or none according to the amplitude of the wave conveys almost all the original information. Two pips per millisecond, corresponding to four amplitudes, are certainly more than enough. So, from human vision to human speech, the corruption is 10^8. A part of the loss is due to the clumsy coupling of nerve to muscle, but the rest is used to buy security.
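The figures combine thus:

\[
\text{corruption} \;=\; \frac{2\times10^{8}\ \text{decisions per millisecond in (two eyes)}}{2\ \text{decisions per millisecond out (speech)}} \;=\; 10^{8}.
\]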

The eye contains a hundred million photoreceptors which converge, by way of bipolar cells, on a scant million ganglion cells, which relay the signals to the brain proper. By requiring coincidence at each junction, we ignore signals that do not agree with other signals, and we do it to the tune of a hundred decisions to one. This means that the brain gets the signal only if it corresponds to a statistically significant number of synchronous responses of the photoreceptors. The probability that the signal relayed to the brain is due to chance may be 1 in 2^100, which is about 1 in 10^30 - negligible, to say the least. But the process does not end there. It is repeated at almost every step from input to output. Naturally, we make few gross errors of action. Still, so much calculation in parallel, with the subsequent demand of coincidence, makes it almost impossible to detect trouble in the circuit unless it happens to catch most of the parallel paths for some sequential action. Many a nerve cell can, and does, die, and no one knows it till he sees that brain under the microscope. These scattered losses rarely bring us to the doctor.
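The chance figure follows from \log_{10} 2 \approx 0.301:

\[
2^{-100} \;=\; 10^{-100\log_{10}2} \;\approx\; 10^{-30}.
\]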

The Neurotic Brain

It is far otherwise with neuroses. As was noted previously, the dynamic stability of normal living is maintained by negative feedback. When, for any reason, say environmental stress, the gain increases, these change to positive feedbacks, oscillating regeneratively at their natural frequencies; likewise, when for any reason they are driven at a frequency at which the gain exceeds one, they become regenerative. Kubie was the first to call attention to the repetitive core of every neurosis, and he was right. All evidence now indicates that neuroses begin in some normally negative feedback going regenerative, persisting in activity through every minute, day and night, for months and years. It tends to sweep more and more cells into its orbit and so removes them from their normal function as free-floating computers. This enfeebles us for certain intellectual tasks and fixes our behavior in ill-adapted ways. Cells swept into its orbit and fired often suffer fatigue and a rise in threshold, and so constitute a wall about the process through which other activities cannot communicate with it. Eventually, by repetition, it affects the cells so that the pattern of their activity reappears even after it has been interrupted, in this resembling all our motor skills. In the early period, it is enough to interrupt the reverberation again and again and to sweep the other cells out of its orbit. Still, when it invades the major organs of judgment and becomes obsessive or compulsive, it may require a surgeon's knife to cut the feedbacks of the frontal lobe. Thereby we sacrifice many of the highest traits of character, some kinds of insight into things and men, and, in addition, the ability to create new categories among ideas.
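The turning point can be sketched in modern terms (Python; the gains are illustrative): so long as the loop gain stays below one, a disturbance dies away; once the gain exceeds one, the circuit's own activity mounts and persists without further input.

    def run_loop(gain, disturbance=1.0, steps=10):
        # Follow a disturbance around a feedback loop, circuit by circuit.
        activity = disturbance
        history = []
        for _ in range(steps):
            activity = gain * activity
            history.append(round(activity, 3))
        return history

    print(run_loop(0.8))  # gain below one: the disturbance dies away
    print(run_loop(1.2))  # gain above one: the activity mounts regeneratively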

In the neurotic brain, you may find no general chemical reaction gone astray nor any damaged cells, for when activity ceases, regeneration ceases. The most you might expect to find are some changed thresholds or connections - those little invisible differences which each of us acquires by use - the basis of our characters. The more we build negative feedback into machines, the more surely they will have neuroses. These diseases are demons with ideas and purposes of their own. Physicists have been known to curse them, but they cannot be exorcised. If, instead of our variety of psychodynamic nonsense, you wish to think sensibly of them, I would suggest, in all seriousness, that you start now to prepare a dimensional analysis of gremlins.

REFERENCES:

Influence of Suppressor Areas on Afferent Impulses, S.H. Barker, E. Gellhorn. Journal of Neurophysiology. Vol. 10, No. 2, 1947.

Sensorimotor Cortex, Nucleus Caudatus and Thalamus Opticus. G. Dusser de Barenne, W.S. McCulloch. Journal of Neurophysiology. Vol. 1, 1938.

Das Gedächtnis. H. von Foerster. Deuticke, Publishers, Vienna, Austria, 1948.

Effect of Afferent Impulses on Cortical Suppressor Areas, E. Gellhorn. Journal of Neurophysiology. Vol. 10, No. 2, 1947.

Theoretical Application to Some Neurological Problems of Properties of Excitation Waves Which Move in Closed Circuits, L.S. Kubie. Brain. Vol. 52, July 1930.

Repetitive Core of Neurosis, L.S. Kubie. Psychoanalytic Quarterly. Vol. 10, Jan. 1941.

Electrical Properties of Nerve and Muscle, D.P.C. Lloyd. Chapter in Howell's Textbook of Physiology. 15th edition. Saunders.

Analysis of the Activity of the Chains of Internuncial Neurons, R. Lorente de Nó. Journal of Neurophysiology. Vol. 1, No. 3, 1938.

Sensorimotor Cortex and Thalamus Opticus, W.S. McCulloch. American Journal of Physiology. Vol. 119, 1937.

A Logical Calculus of the Ideas Immanent in Nervous Activity, W.S. McCulloch & W. Pitts. Bulletin of Mathematical Biophysics. Vol. 5, 1943.

Finality and Form, W.S. McCulloch. James Arthur Lecture, American Museum of Natural History, May 1946. Thomas Lecture Series in Psychology.

Machines That Know and Want. W.S. McCulloch. Journal of Comparative Psychology.

How We Know Universals, W.S. McCulloch & W. Pitts. Bulletin of Mathematical Biophysics. Vol. 9, 1947.

The Statistical Organization of Nervous Activity, W.S. McCulloch & W. Pitts. Journal of the American Statistical Association. Vol. 4, No. 2, 1948.

Through the Den of the Metaphysician, W.S. McCulloch. Lecture before the Philosophical Club, University of Virginia, March 1948.

Why the Mind Is in the Head, W.S. McCulloch. Hixon Symposium, California Institute of Technology, September 1948.

J. von Neumann. Hixon Symposium. California Institute of Technology, September 1948.

A Symbolic Analysis of Relay and Switching Circuits, Claude E. Shannon. AIEE Transactions. Vol. 57, 1938. December section, pp. 713-723.

Cybernetics. N. Wiener. John Wiley and Sons, New York City, New York, 1948.


1 Full text of an address presented at the AIEE winter general meeting, New York City, New York, January 31-February 4, 1949.
The author, a medical doctor, employs electrical engineering terminology to show how the brain may be likened to a digital computing machine consisting of ten billion relays called neurons. To carry the analogy further, the performance of the brain is governed by inverse feedback, subsidiary networks secure invariants, or ideas, predictive filters enable us to move toward the place where the object will be when we get there, and complicated servo-mechanisms enable us to act with facility and precision. Disorders of function are explained in terms of damage to the structure, improper voltage of the relays, and parasitic oscillations.