Finality and Form in Nervous Activity

Warren S. McCulloch


Empiric philosophers have always maintained that problems in the theory of knowledge and of value must be stated and resolved as questions concerning the anatomy and the physiology of the nervous system. These are inquiries into the a priori forms and limitations of knowing and willing determined by the structure of the nervous system and by the modes of activity of its elements. An exact statement and a partial resolution of these problems is possible thanks to the precision and inclusiveness of modern observations. The brevity of latent addition, the requirement of spatial summation, the irreciprocity of conduction, the occurrence of direct inhibition and the duration of delay, which characterize synaptic transmission, and the all-or-none response with subsequent refractoriness of the component neurons do all ensure that the simple and discrete elementary signals are so related as to conform to a logical calculus of numerable, coexisting, and sequential propositions. From this fact we can deduce the formal properties of cognition and conation in any nervous system that possesses receptors and controls effectors.

Those trains of nervous impulses that cycle in re-entrant paths and the functionally equivalent alterations in the structure of the system issue in memory and learning. Those trains that return to their place of origin, where they diminish or reverse the processes that gave rise to them, establish some state of the system by causing it to behave so as to return to that state. Because some of the neurons composing the circuit are influenced by impulses from other parts of the nervous system, the state sought by the system is conditioned by its circumstances. Some re-entrant paths lie within the nervous system, others pass through remote parts of the body, and still others, leaving the body by effectors and returning by receptors, traverse the external world. The functions of the first are at present ill-defined, the second constitute the majority of our reflexes, and the last, our appetites and purposes. They account for all actions that have a goal, or an end. Values, which rationalize the domination of one such action by another when their incompatibility necessitates choice, and circularities of preference, which have destroyed casuistical and utilitarian hopes of a common measure of all values, exhibit themselves as consequences of the interconnections of circuits.

The foregoing considerations enable us to construct a hypothetical nervous net that will embody innately or induce any particular universal, or idea. Since all of these problems are alike in that each particular solution requires the construction of a net ad hoc, we are presently confronted by the single fundamental problem: What specific properties will develop in an originally chaotic net? Exact physiological formulation of this problem, construction of the requisite mathematics, and design of appropriate instruments are now in hand.

For two thousand years we have tried to make of words, and other notions, an enduring semblance of the ways of our changing world. We are still not satisfied. We want more and better science, but we cannot always tell at once whether we need more facts or lack the proper notions. Our philosophy began when we asked why we had failed, and when we sought for better notions. We had started with terms that were little more than the names of things about us, of their sensed qualities, and of their relations one to another. To make and use logic and mathematics, we needed terms that stood for things that could not be seen and for qualities and relations that are never apparent. This separation of theory from experience, although it served science, made the nature of knowledge so vague as to permit of much idle speculation. To put a stop to it we must catch the knower in the act and mark what is going on in him, to him, and around him.

What is needed in science is needed in ethics. The wants of particular men for particular things and the ways they get them lack generality and enjoy no quantitative order. To turn preferences into ratios and proportions, we seek some common measure of value. But such words stand for no one thing that any man has ever wanted. Again, the separation of theory from practice left every school free to ignore the conduct of life and the nature of desire in setting up its own notion of the "good" as a common measure of all values, a vain superstition whose only remedy is to watch the desirous in his quest and, in the moment of choice, find out what is going on around him and within him.

Among philosophers the empiricists have always held that this was the only way to treat the problems of the theory of knowledge and of value. Through the centuries we have come to see that these problems can be stated and solved only in terms of the anatomy and the physiology of the nervous system. In those terms, we are inquiring into the a priori forms and limitations of knowing and willing determined by the structure of the nervous system and by the mode of action of its elements. We ask two kinds of questions. Of universals, or ideas, we would know how nervous activity can propose anything concerning the world and how the structure of the system embodies this or that idea. Of values, or purposes, we would know how nervous activity can mediate the quest of ends and how the structure of the system embodies the possibility of choice.

We have long seen dimly what we had to do. But that we can state our problems exactly and in part solve them now we owe in no small measure to the men alive today. It has been my good fortune to live and work among them. They are my witnesses that what follows is the outcome rather of their works jointly than of any one singly. But I alone am answerable for any seeming inference I may foist upon their public statements. I shall not mention their names, for, instead of a rigorous mathematical treatment of their detailed observations, I would give you as elementary an argument as fits the facts.

When I was a child I played with blocks, which I set up as wooden soldiers. I learned several things from those blocks that I have used ever since. I shall call each by its name in physiology.

When a block is struck it falls all the way or it does not fall at all. This is its all-or-none impulse.

It takes time to set it up again. This is its refractory period.

To fell it a blow must be of sufficient strength. This is its threshold.

A blow that fails to fell it tips it, and it will come back to its old position, but it may be felled by a second, equal blow while it is tipping. This is latent addition.

After a block has been struck, it is some time before it falls. This is synaptic delay.

The impulse and the latent addition are so much shorter than the refractory period and the synaptic delay that a block can never fell another one by striking it twice. This is the lack of temporal summation.

One block striking two others may fail to fell them, whereas the two, striking the one together, succeed. This is irreciprocity of conduction.

One block, by falling against the edge of another, may stop a third from felling it. This is inhibition at a synapse.

Two blocks striking a third together may fell it although each alone would have failed. This is spatial summation.

Last but not least, almost all the energy of the falling of a block comes from the energy of position stored in the block when it was set on end. None need come from the blow that fells it. This is irritability.

Blocks never quite live up to this ideal, but other things do, notably neurons. These, like telegraphic repeaters, at every relay signal anew, the signal received being but the occasion for the signal sent, the energy of the signal sent coming from the sender, not from the signal received. Such a signal is, in the full sense of the words, an actual proposition. The energy of proposing comes from the proposer, not from the proposed.

These actual propositions, the impulses of blocks or of neurons, are essentially simple. Each either happens or does not happen, and that is all there is to it. The block falls only if it was felled, and all it can signal to the next block is that it was felled. It may have been struck, but too lightly. It cannot signal that to the next block, and it cannot signal to the next block how hard it must be struck to fall; it falls only if it was struck. In logic, this relation of "only if," here between its fall and its being struck, is called material implication. Among falling blocks and nervous impulses it holds only between events separated in time by synaptic delays.

When blocks are arranged in single file, the falling of any block implies the previous falling of the block that struck it, and this, in turn, implies the previous falling of the one before and so, backward in time, to the blow that felled the first block. Again, in the language of logic, implication is a transitive relation, and among actual propositions its domain extends backward in time over intervening propositions to the first member of the series. Thus, it comes about that the action of the central nervous system at a given time implies the earlier activity originating in its sense organs, its receptors. So, just as pebbles give us arithmetic, these blocks give us a calculus of actual propositions. In its terms we can say exactly anything concerning the world.
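The single file of blocks can be sketched in modern terms; the idiom is mine, not the argument's, and the function names are illustrative.

```python
# A minimal sketch of blocks in single file: each block falls one
# synaptic delay after the block that struck it, so the fall of block i
# at time t implies the fall of block i-1 at time t-1, and so backward
# to the first blow, which here lands at time 0.
def chain_fall_times(n_blocks):
    """Times (in synaptic delays) at which each block in the file falls."""
    return [i + 1 for i in range(n_blocks)]

times = chain_fall_times(4)
# Transitivity: the fall at time 4 implies falls at times 3, 2, and 1.
assert all(times[i] == times[i - 1] + 1 for i in range(1, len(times)))
print(times)  # [1, 2, 3, 4]
```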

Next, we seek the way in which the nervous system embodies this or that idea. The structure of the system can be shown in the grouping of the blocks. Three configurations give us all significant contemporaneous functions.

The first has two blocks, A and B, so placed that either can fell a third, C. Thus, when C falls, it proposes that either A or B or both have fallen one synaptic delay earlier. This is the familiar "and/or" of legal documents, signified by "v." The logical function of the fall of A and/or of B at a given time, say 1, is called their disjunction. It is not either in particular but their disjunction at the time 1 which is implied by the fall of C at time 2.

The second configuration is like the first, except that the blocks E and F are so placed that both must strike G within the period of latent addition if they are to fell it. Thus, G in falling at time 2 implies the conjunction of the falls of E and F at time 1.

The third configuration has one block, H, that may fell another, J, if a second, I, does not fall against the edge of J and so prevent its fall. This is the conjunction of an assertion with a negation, indicated by a dot and a tilde, thus: H.~I. Here the falling of J at the time 2 implies that H fell and I did not fall at the time 1.
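The three configurations can be rendered, in a modern idiom the essay of course does not use, as threshold units; the thresholds and names below are assumptions of the sketch, not of the argument.

```python
# A hedged sketch of the three significant configurations as threshold
# units. Each unit fires at time 2 iff, at time 1, no inhibitory input
# fired and the excitatory inputs reach the threshold.
def mp_unit(excite, inhibit, threshold):
    """One step: fire iff no inhibition and excitation meets threshold."""
    if any(inhibit):
        return 0
    return 1 if sum(excite) >= threshold else 0

# First: either A or B alone can fell C (disjunction, threshold 1).
disjunction = lambda a, b: mp_unit([a, b], [], threshold=1)
# Second: E and F must strike G together (conjunction, threshold 2).
conjunction = lambda e, f: mp_unit([e, f], [], threshold=2)
# Third: H may fell J unless I falls against J's edge (H and not-I).
and_not = lambda h, i: mp_unit([h], [i], threshold=1)

assert disjunction(1, 0) == 1 and disjunction(0, 0) == 0
assert conjunction(1, 0) == 0 and conjunction(1, 1) == 1
assert and_not(1, 0) == 1 and and_not(1, 1) == 0
```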

Clearly, any configuration in which the fall of a block was only prevented could never let it fall. Thus, no conjunction consisting of negations alone could ever be implied by an actual proposition. Finally, it would be difficult with blocks, but easy with neurons, to make a configuration or net in which the only one that could fell another would also prevent its fall, but such a conjunction also would never be implied. That both of these transmit no signals is worth heeding, for their contemporaneous functions would include tautologies and contradictions whose truth or falsity does not rest on that of their component propositions. All other propositional functions are significant, being true or false as their components are true or false. It is only significant functions that can be implied by a sequent action in the net.

Using only the three significant contemporaneous functions and the implicative sequential relation, we can embody in a net the possibility of having some notions concerning the world. Consider any net that has no circular paths. If we can count its receptors and its neurons, we can also count its actual propositions in any time measured in synaptic delays that can be counted. Such a net can be so made as to respond with a singular series of impulses in a single neuron, or a single impulse in a singular neuron, to each possible temporal or spatial figure of reception, or to translate any figure in time into one in space and vice versa. But each of its actual propositions has the same unique reference to one past time that we meet in television, which also translates spatial into temporal and temporal into spatial figures. The reason is the same in both. They lack memory.

Again, the blocks have helped me. When they are in single file in a circle and one is overset, the falling goes around the circle until the last to fall lands on the first. Were they set up as soon as they fell, the falling would go round forever. This is memory of a kind. We have seen it in after-images and felt it in the dizziness that follows spinning. It may last much longer, certainly as long as the specious present in which we have the whole of a tune or of an argument all together for the nonce. It cannot outlast activity, and in deep sleep most of the nervous system comes to rest, yet on waking we remember.
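The circle of blocks, reset as soon as each falls, can be sketched as follows; ring size and duration are illustrative choices of mine.

```python
# A minimal sketch of the circle: a single impulse circulates, moving
# one position per synaptic delay, and would go round forever if each
# block were set up as soon as it fell. This is memory of a kind.
def reverberate(ring_size, steps, start=0):
    """Which position in the ring is falling at each step."""
    history, pos = [], start
    for _ in range(steps):
        history.append(pos)
        pos = (pos + 1) % ring_size  # the impulse moves one delay onward
    return history

# With five blocks the impulse returns to its origin every five delays.
print(reverberate(5, 11))  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0]
```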

Hence, there must be a way in which the net of neurons or the configurations of the blocks are changed by their impulses. We can make it our rule that if one block, C, is falling at the time a second block, R, is felled away from it by anything, and if C could not quite have felled R, then we will reset C so that the next time it falls it can fell R. This change in configuration with use is like the law of growth with use, which Ramón y Cajal thought neurons ought to obey. For this the anatomic evidence is still to seek. It is but the best guess that we can make today. If it is the way neurons grow, then it pictures learning, as Pavlov would have it, thus: Let a neuron, U, at any time, say 2, propose the unconditioned stimulus, and let C, at time 2, propose the conditioned stimulus, and suppose that C cannot quite trip R whereas U can. Then R is being tripped by U just when C would trip it if it could. Our rule is now that thereafter C can, by itself, trip R.
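The resetting rule, and the Pavlovian picture it yields, can be sketched numerically; the weights and threshold below are assumptions for the sketch, not measurements.

```python
# An illustrative sketch of the rule in the text: if C is active just as
# R is tripped by something else, and C alone fell just short, the C->R
# connection is strengthened so that C alone can trip R thereafter.
class Synapse:
    def __init__(self, weight):
        self.weight = weight

def step(r_threshold, u_syn, c_syn, u_fires, c_fires):
    """One delay: does R trip? Then apply the growth-with-use rule."""
    drive = u_syn.weight * u_fires + c_syn.weight * c_fires
    r_trips = drive >= r_threshold
    # Growth rule: C was active, R tripped, but C alone was insufficient.
    if r_trips and c_fires and c_syn.weight < r_threshold:
        c_syn.weight = r_threshold
    return r_trips

u, c = Synapse(1.0), Synapse(0.5)          # U can trip R; C cannot
assert step(1.0, u, c, u_fires=0, c_fires=1) is False  # C alone fails
step(1.0, u, c, u_fires=1, c_fires=1)                  # pairing of U and C
assert step(1.0, u, c, u_fires=0, c_fires=1) is True   # now C alone trips R
```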

However physically unlike, the two kinds of memory are functionally equivalent in this sense: Instead of the supposed change in the form of the synapse, we have but to imagine a circle of neurons in which a train of impulses is started by the conjunction of falls of U and C at time 2 and that, at each round, impulses come from this circuit to R, where each may form a conjunction with some impulse of C so as to trip R. Now any formal property of memory or learning, of the kind due to synaptic growth, is to be found in the circling of this train of impulses, due, that is, to an enduring figure of activity within the net.

We have quite rightly used Pavlov's terms, the conditioned and the unconditioned stimulus, for figures of stimulation to each of which we assigned a singular neuron, namely C and U. The net was so designed that the singular neuron was fired whenever the excitation of receptors was of that figure.

Blocks and their configurations, neurons and their nets, abide. The falling of a block and the impulse in a neuron, any actual proposition, in short, any signal, is an event. It happens only once. No part of it can be repeated. To know that learning has occurred we must give the unconditioned stimulus again. From this it is clear that the so-called stimulus is a new excitation of receptors but of the old figure. This figure, or form, or idea, or universal, is but the relation of component events, or excitations. When there are no events so related, there still remains, embodied in the net in abiding fashion, the possibility of responding in the original manner: that is, with a new impulse in the same old neuron. When we have learned, we have so altered the net that the fall of R at time 3 no longer implies merely the fall of U at time 2 but the disjunction of the falls of U and C at time 2. Moreover, we can make a net in which to alter a single synapse will convert one significant function of U and C into any other at a suitable neuron elsewhere in the net. If this is done at several points, a net that originally embodied certain ideas may come to embody any others that are significant functions of those ideas. The enduring activity in circular paths, formally equivalent to these abiding changes in the net, enables our nervous system to think in new ways within a specious present. Such formal equivalence gives us a theory of learning in terms of actual propositions alone, letting the imaginary net stay exactly as it was. The theorist has but to substitute a figure of activity in his hypothetical circle for his imaginary new synapse, and he can calculate as before. Other things than learning change the thresholds of neurons, notably fatigue and the fluctuations of the power they get from the food we eat and the air we breathe. These also have their formal equivalents in terms of enduring activities in circular paths. 
So the theory can still be formed in a single fashion. We lose neither generality nor rigor by treating the nervous system as an unalterable net of unalterable neurons.

To see what this may mean let us look at two blocks, B and C. If their positions and impacts are fixed so that B in falling always fells C, we have implications in both directions. The fall of C at time 2 implies the fall of B at time 1 as before, but now also that of B at time 1 implies that of C at time 2. This is the familiar "if, and only if," written ≡ and called the "biconditional function," meaning simply that if either is true or false, then the other is the same. If reception were so related to the world, we might know how well we know the world. That would resemble being directly aware of our ignorance, whereas in fact nervous activity implies and only implies our known world. Cognition has just that quality.

Because the nervous system may be held to be unchanged, the activity within it proposes later activity within it, and, step by step, we may follow that activity to the effectors. Each step would be simply an implication but for the possibility of impulses coming by shorter paths from receptors to neurons nearer to effectors. Thus, the proposing is rather an intention than an implication, that is, an implication with the proviso that nothing intervene. Conation has just that flavor. We know what we will do, but not what we shall do.

A nervous system that has effectors may make marks, say, put ink on paper. At any time it may see those marks. It has then made for itself things, which we call signs, that will serve instead of a host of new circles in its net. With plenty of ink and paper it can in time prove or check all provable conclusions from any set of premises. By simple conditioning, marks may become signs for anything of which the nervous system already has an idea. They come to signify the same to other nervous systems by similar conditioning. Thus, by means of signs, the computing and concluding have been shared by many nervous systems at one time and continued into times beyond all measure. This indeed is the story of language, literature, philosophy, logic, mathematics, and physics.

But signs are two-faced things. They are enduring things in our world. We may write or talk or think about them. They are also tools, and we may use them. One has to be forever alert to note whether a sign is being used or only mentioned. Just as a sound must be of some length to have pitch or a thing of some bulk to have shape, so a sign must be of some size to have meaning. Thus, a sign may have discernible parts that have no meaning, like the I in IT. This may be dangerous even to the skillful logician who knows that a phrase, like Russell's Theory of Description, is a description that falls short of being a sign, although it is a complex of signs that can replace a sign without loss of sense but with a change of kind. Moreover, signs usually have no meaning but one learned, and that only to those nervous systems that have learned that meaning. To others they are just things. Finally, signs are not signals. Precisely, they are not actual significant propositions. They are not excitations of a given figure but only things that may shape excitation to their figures. They serve the same function as a figure of activity in a circuit. Thus, they resemble configurations or nets. Among them we may seek tautologies and contradictions.

Finally, let us look but once more at the blocks. Let them stand in single file in a circle, but this time so placed that the last block, instead of falling on top of the first, falls against its edge and so inhibits later felling of that first block. Such paths have been found. Some lie within the central nervous system. Others pass through it. Their function is clearly to stop the process that gives rise to their activities. About a hundred years ago in France, the fathers of physiology coined the term réflexe for a cyclical action which, starting in some part of the body, passed by one way to the central nervous system, whence it was reflected over another to that same structure in which it arose and there inhibited or reversed the process that had given rise to it. The traditional English example is the so-called stretch reflex. When a muscle is lengthened by a load put upon it, impulses arising in certain of its receptors go to the central nervous system and pass, with but a single synaptic delay, to the motor neurons whose impulses excite the muscle to shorten. With a brief delay the muscle is again of nearly its former length. The reflex has thus established a certain length to which the muscle returns, and so helps to keep the body in one position in space.

The traditional American example is the homeostasis of blood pressure, mediated by numerous receptors, which, as pressure rises, send more impulses over several nerves to the central nervous system, whence impulses emerging over other nerves reach the heart, slow its pace, and reduce the blood pressure. Similarly, several reflexes, with receptors sensitive to oxygen and carbon dioxide dissolved in the blood, keep both nearly constant by changing the rate and depth of breathing. For years I have been studying homeostasis of sugar in the blood, but as yet I have not found the receptors. We know more of the reflexes that keep the body at an even temperature. There are doubtless many others of which we have not dreamed, but we know enough now to see that reflexes do severally fix, each in keeping with the kind of its receptors, one pressure, temperature, volume, length, direction, position, velocity, acceleration, concentration of some substance or almost any other kind of quantity by causing the system to move toward that one fixed by the reflex. It is always the difference between the condition of the system at the moment and the state established by a reflex that is answerable for the activity of the reflex at that moment and so for the change toward the established state. Thus the state established by the reflex is the end in and of the operation. In this mechanistic sense we speak of physiological functions as having goals, or ends. So long as we live, we are a bundle of circular systems seeking various ends at every moment.

Every reflex has a path of some length over which it goes at some speed and in which it is delayed at one or more synapses. Thus, there is always a lag between the time it is excited and the time excitation reaches its effector, and there is a second lag in the action of the effector and the return of the quantity measured by the receptor. Any circuit has a natural period of oscillation, which in the simplest case is the sum of its lags. When we change the magnitude of the quantity measured, a reflex may return the system toward, but not quite to, the original state, or it may overshoot that state. The ratio of the return to the change that occasioned it is called the gain around the circuit. When the return is equal to the change that occasioned it, then the gain is one. We can find the gain at many frequencies of excitation. The gain is found to depend upon the frequency, and there are usually frequencies for which the gain is greater than one. If the gain is greater than one at the natural frequency of the reflex, fluctuations at that frequency begin and grow until the limitations of the structures composing the path reduce the gain to one; then, at the level for which the gain has become one, both the measured quantity and the reflex activity go on fluctuating.
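The behavior of gain around such a circuit can be sketched as a delayed negative feedback; the loop gain, lag, and saturation limit below are illustrative parameters, not physiological values.

```python
# A hedged sketch of a reflex as delayed negative feedback: a deviation
# x from the established state is corrected, with loop gain g, after a
# lag of `delay` steps. With g < 1 the deviation dies away; with g > 1
# fluctuations grow until the limitations of the structures (modeled as
# clipping) reduce the effective gain to one, and the fluctuation
# continues steadily at that level.
def run_loop(gain, delay, steps, limit=10.0):
    xs = [1.0] * delay                       # an initial disturbance
    for _ in range(steps):
        nxt = -gain * xs[-delay]             # delayed corrective return
        nxt = max(-limit, min(limit, nxt))   # structures limit the gain
        xs.append(nxt)
    return xs

# Gain below one: the disturbance decays toward the established state.
assert abs(run_loop(0.5, 3, 40)[-1]) < 1e-3
# Gain above one: fluctuation grows, then persists at the clipped level.
assert all(abs(v) == 10.0 for v in run_loop(2.0, 3, 40)[-6:])
```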

By burdening the muscular part of the path, one can increase the gain of the circuit so that, at the natural period, the gain exceeds one. The stretch reflex will then give a series of similar contractions, like the bouncing of a leg when one sits in some positions. Similarly, by altering the lag in the nervous system, blood pressure can be made to fluctuate. Breathing, chewing, walking, and running, even eating and sleeping, are naturally paced in this manner.

Reflex activities and their periodicities give rise to no theoretical problems that are new to the engineer whose business is communication. The mathematician has the tricks for handling them. He can treat reflexes, one and all, as he would the action of an electrical device in which the response to a sum of excitations is not equal to the sum of the responses to those excitations separately. He calls them nonlinear circuits. He has enlarged this theory to cope with those engines in which the magnitude sought is changed at will in almost any manner. Just as we may vary the set of the thermostat that governs the temperature of a room, so neurons in the path of the reflex can be excited or inhibited by impulses from elsewhere. The state to be sought by the system is thus conditioned by its circumstances. The largest of these subservient circuits comes into play when we move a hand to a new place. We contract muscles that set a mass in motion. The circuit shadows the contraction of these muscles by a contraction of their antagonists and brings the mass to rest at the place sought. When this circuit is broken, the hand forever falls short or overshoots.

Electrical filters, first made by cut-and-try to balance distortion of signals in long lines, can be mathematically designed to stop most noise mixed with a signal while passing on the signal. Now, these filters give us the best guess based on knowledge of the past. As yet we know too little about brains to point out or picture our predictive filters. With such filters our guns place their shells so that they burst when a plane arrives there. With such filters the cat jumps not at the mouse but to the place the mouse will be when the cat lands. The filter is designed not on the basis of a single series in time but on a group of them treated statistically. One mouse on one occasion may stop or turn short of the place the cat will land. The cat cannot know the future. She can only jump to the place where most mice on most occasions will then be caught. Cats live because the filter is built to work best when the number of tries is large. Through all the random doings of mice, the filter has come to embody the manner of flight common to most mice. Now, from the wary movements of the single mouse, glimpsed by the cat, her filters forecast their fell conclusion.
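The idea of a filter designed over a group of series rather than one can be sketched in miniature; the least-squares form and the toy "runs" below are my assumptions, not a reconstruction of any actual predictive filter.

```python
# An illustrative sketch: a one-coefficient predictor fitted not to a
# single series in time but to an ensemble of them, as the text
# describes. From many past runs we fit the coefficient that best maps
# the position at one step to the position at the next, on average.
def fit_predictor(runs):
    """Least-squares a minimizing sum over runs of (x[t+1] - a*x[t])^2."""
    num = sum(x[t] * x[t + 1] for x in runs for t in range(len(x) - 1))
    den = sum(x[t] * x[t] for x in runs for t in range(len(x) - 1))
    return num / den

runs = [[1, 2, 4], [2, 4, 8], [3, 6, 12]]  # most runs double each step
a = fit_predictor(runs)
# The filter embodies the manner of flight common to most runs, though
# any single run may depart from it.
assert abs(a - 2.0) < 1e-9
```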

When we ask not how but why a cat pursues a mouse, the answer is again in terms of circuits, but this time part of the path lies outside the cat. We will make the simplest assumption: that trains of impulses from the cat's empty stomach start her in quest of mice, and that, when she catches and eats a mouse, this fills her stomach and stops trains of impulses.

So now the chase will cease; so journeys end in lovers' meetings; even so, when our signs shape our signals, our question finds its answer and is no more a question. We asked how nervous activity mediated the quest of ends. It is by trains of impulses that, returning around a closed path, stop or reverse the process which gave rise to them, thus establishing some state toward which the system is to move. Some paths lie wholly within the nervous system; some, our reflexes, have paths through various parts of the body; and others leave the body by effectors, traverse the external world, with or without aid of signs, and come back through receptors. Those are our appetites and purposes. Together they include all actions that have a goal or end.

How can the structure of the system embody the possibility of choice? Clearly, if each such circuit had a path separate from any other path, each would go its own way to its own end. But many paths share nervous parts, and the use of others would result in contrary acts of some effectors. A few, like swallowing and drawing breath, working at once would destroy us.

Conflicts and mortal collisions are barred by inhibitory links from one circuit to another, so that when both are excited only one works. Thus, the net embodies the possibility of these decisions. We might have been so made that if any one circuit was dominant over a second, and it over others, none of these was ever dominant over the first.

This would have set up a hierarchy of dominance. We were not made that way. It is easy to find circularities in dominance.

Imagine three reflexes such that A dominates B, B dominates C, and C dominates A. For this the dominance need not be absolute. It may depend upon the amount of activity of each, yet, with each at its own particular amount, dominance may be circular. Finally, inhibition need not be used. If each response implies summation of its proper stimulation with that of the next around the three, preference will be circular. I have met this circularity in experimental aesthetics even though the ends were much alike.
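The circularity of dominance among the three reflexes can be put in a few lines; the pairwise table is exactly the relation named in the text, and the rest is my sketch.

```python
# A sketch of circular dominance: A dominates B, B dominates C, and C
# dominates A, by inhibitory links from one circuit to another, so that
# when two are excited together only one works.
DOMINATES = {("A", "B"), ("B", "C"), ("C", "A")}

def choose(x, y):
    """When reflexes x and y are excited together, the dominant works."""
    return x if (x, y) in DOMINATES else y

assert choose("A", "B") == "A"
assert choose("B", "C") == "B"
# The circularity: no hierarchy, hence no common scale, ranks all three.
assert choose("C", "A") == "C"
```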

The notion of value arose from the number of things of one kind bartered for another kind. To rationalize these transactions, traders used some weight of precious metal. The number of the two kinds of things bartered was inversely proportional to the number of unit weights of precious metal for each item. The common measure of value was the unit weight, and their value was the price in terms of this common measure. Eudoxus generalized the notion of ratio, and Plato sought a common measure of other than marketable things. Hence the "good," the "beautiful," and the "true." Every psychologist, and every psychiatrist, who has studied motivation has wanted a general scale of values. He has wanted to know how much sexual pleasure was equal to escaping a given severity of pain, or how much work would be done for these ends. We have tried and failed to make a single scale of value for all motives. Many ends severally necessary to life are dimensionally dissimilar. Their dominance is frequently circular. The dissimilarities of the ends are fixed by the nature of the particular quantities established by the reflexes. The dominance, the choice of preference, is embodied in the interconnection of the reflex paths within the central nervous system, where circularities of their interconnections fix circularities of dominance. Neither the ends nor the dominance can be rationalized by notions like the "strength of desire." Any common measure of all values is a vain superstition.

We have stated our problems exactly and either solved them or seen how they can be solved. But you may have noticed that we have often made nervous nets to fit the facts instead of seeking them in the nervous system. We have shown that a net can be made willfully to embody this or that idea, and to learn to embody this or that idea, but not that these nets are even likely to exist. Artificiality cannot be held against a man-made computing machine, but it is too much to have to suppose that each of ten billion neurons is connected according to any complete set of drawings and specifications.

It is doubtless true, although we do not know how it is brought about, that things seem similar to us which have happened together to all creatures that ever lived on earth, and this is doubtless fixed in us by the coming together of neurons whose precursors have been associated in activity. It may be that chance variants that worked better passed on to us the general form of our organs and some statistical regularities in the arrangement of cells into tissues; but our genes can scarcely order which neuron is to fasten to which throughout the system. It seems, therefore, more reasonable to assume that we all inherit only a few fixed universals like the qualities of sensation, and the common sensibles of position and motion, and those reflexes and appetites without which we and our kind would perish. For all else we must begin with random nets.

The word "random" has about it an unmerited air of simplicity. The word "chaos" exists only in the singular, but the varieties of chaos cannot be counted. No chaos is a single system, but an ensemble of systems whose statistical properties can only be defined by an infinite series of functions of a continually increasing number of variables. These may vary in any manner with place, direction, and distance, and depend upon the relation of things by pairs, by triplets, etc. In our present handling of these random nets, Mr. Walter Pitts has been most active, and what follows is largely his share in our joint undertaking. Choose any neuron in a random net, and imagine about it a nest of surfaces such that the chance of connection with our neuron is the same for all neurons on one and the same surface. If that chance depends upon the distance but not upon the direction or position, these surfaces will be a nest of spheres about our neuron. If directions matter, these may become egg-shaped, and if position matters, these may be displaced in one direction, or the whole nest may be lopsided, "as if it were chopped off with a hatchet." It is easiest to solve the problem for spheres, for other nests are only knottier.
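
The nest of spheres can be made concrete in a few lines. The sketch below is my illustration, not anything in the text: neurons are scattered in a unit cube, and the chance that one connects to another depends on their distance alone, here by an assumed decay law exp(-d/r), so that the surfaces of equal chance form a nest of spheres about each neuron.

```python
import math
import random

def random_net(n, radius=1.0, seed=0):
    """Build a random net of n neurons placed uniformly in a unit cube.

    The chance that neuron j connects to neuron i depends only on the
    distance between them (the nest-of-spheres case); the decay law
    p = exp(-d / radius) is a hypothetical choice for illustration.
    """
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no neuron synapses upon itself here
            d = math.dist(pos[i], pos[j])
            if rng.random() < math.exp(-d / radius):
                edges.add((j, i))  # an impulse may travel from j to i
    return pos, edges
```

If the decay law were made to depend on direction or position as well, the same scheme would yield the egg-shaped or lopsided nests described above.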

Physics is simple if we have to handle only two things. A few more make it unmanageable. It is simple again for very large numbers, for the chance of large deviations from the laws of chance is too small to matter. The problem of three bodies in astronomy is technically intricate. The mechanics of the galaxies is simple. It is easier to think rightly of a million neurons in a random net than of a couple of score in our specified nets. But there is one stumbling block that, so far as I know, is peculiar to relays. The activity of a neuron is zero except when it emits an impulse; then it is one. At time t the activity is one if, at time t-1, the excitatory impulses received outnumber the inhibitory by at least the threshold; otherwise it is zero. What we want is an approximation to the average frequency of impulses in our neuron. The only general way to calculate such things starts from a trial value of frequency such that we can see how the system will behave close to that frequency. This has to be done for temperature and pressure of gases, but there it is easy to start with the gas so rare that we do not need to take into account the size of the particles and the forces between them. This works well for steam until we come close to condensation. In the case of the neuron it is best to begin with the trial value as the one and only frequency in our neuron that would result from the same frequency in each of the neurons afferent to it if their activities were statistically independent of one another.
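
The trial-value method of the last sentences can be sketched for the simplest chaos: one neuron whose afferents are all excitatory and fire independently at a common frequency. The code below is a hypothetical special case of my own; fire_prob and trial_frequency are my names, and the all-excitatory assumption is adopted only for brevity.

```python
from math import comb

def fire_prob(p, n_afferents, threshold):
    """Probability that the neuron fires at time t, given that each of
    its n_afferents fires independently with probability p at time t-1
    (all afferents excitatory, a simplifying assumption)."""
    return sum(comb(n_afferents, k) * p**k * (1 - p)**(n_afferents - k)
               for k in range(threshold, n_afferents + 1))

def trial_frequency(n_afferents, threshold, start=0.5, steps=200):
    """Iterate toward the self-consistent frequency: the one value that,
    fed in as the afferent frequency, reproduces itself in the neuron."""
    p = start
    for _ in range(steps):
        p = fire_prob(p, n_afferents, threshold)
    return p
```

Iterating fire_prob from a trial value finds the self-consistent frequency: with ten afferents, a threshold of five settles the neuron into sustained activity, while a threshold of six lets the activity die away.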

The next approximation, which handles faster changes in frequency, uses the standard methods for treating of linear electrical nets. Some of the most useful statistical properties of a nervous tissue can be found by giving to any point of it a suitable series of electrical shocks while recording the impulses at many other points. The apparatus we need is being designed and will be built. This will help us to put some limit to the varieties of chaos of the cortex, but the varieties remaining will still be infinitely many.

When this has been done, we shall have before us the problem of the origin of universals, or induction. The problem is well exemplified in Klüver's famous study of monkeys without a visual cortex who could distinguish the larger of two equally bright spots, or the brighter of two equally large spots, but not a smaller and brighter from a larger and less bright if the total luminous flux was the same from both. These experiments led to the hypothesis that such a monkey's only visual universal was total luminous flux. Other ingenious experiments confirmed the hypothesis. This is the state of the best hypotheses of science. If I drew all but one corner of a square on the blackboard and asked you to complete the figure, you would make it a square, whereas the line might have gone anywhere else equally well. So the monkey may not have known only total luminous flux, and this may tomorrow appear in some new discrimination. To make the point clear, suppose a man is given the series 1, 2, 3, and asked the next number. He may say 4 or 5 or any other number. If he says 4, we may suppose his idea is the cardinal numbers; if 5, the prime numbers; and if after 5, he answers 7, it would confirm our hypothesis of primes. He might have answered 8, and so refuted the hypothesis of primes. We might now suppose he was giving us the sum of the two preceding numbers. If we think for a moment, it is clear that there are innumerable series which have any finite number of terms in common. No finite number of terms defines the rule of formation. To know the next number does exclude an infinite number of series, but an infinite number remains. Thus, although a single experiment may disprove any hypothesis, no finite number of experiments can prove a single one of the infinite number that remain to fit any set of facts. The strange thing is that we do frame any hypothesis. Each is a new idea, or universal. 
Of all possible universals, how do we induce any, and, of permissible ones, the particular one that we do?
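
The point about 1, 2, 3 can be put in code. The three generators below, an illustration of my own, realize three of the innumerable rules that share that prefix: the cardinals; the primes, counting 1 among them as the example does; and the rule that each term is the sum of the two preceding. They agree through 3, two of them agree again at 5, and they part company thereafter.

```python
from itertools import islice

def cardinals():
    """1, 2, 3, 4, 5, ..."""
    n = 1
    while True:
        yield n
        n += 1

def primes_with_one():
    """1, 2, 3, 5, 7, 11, ... counting 1 as prime, as in the example."""
    yield 1
    n = 2
    while True:
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def sum_of_preceding_two():
    """1, 2, 3, 5, 8, 13, ... each term the sum of the two before it."""
    a, b = 1, 2
    yield a
    yield b
    while True:
        a, b = b, a + b
        yield b

def prefix(rule, k):
    """The first k terms produced by a rule of formation."""
    return list(islice(rule(), k))
```

No finite prefix singles out one of these rules, and infinitely many others could be written that share any prefix we choose.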

Let us start with whatever varieties of random nets the experiments depict, and let us suppose learning can be fully stated in terms of the relations of impulses in one neuron to those in another, in the ways we found would work with the blocks. We want a theory of learning that will work when the number of neurons is large, and the net random. Similar problems arise in the physics of magnetizing a bar of steel. It starts as a large number of little magnets left pointing hither and thither by chance. Each of these little magnets can be turned about by an applied magnetic force. The little magnets near a particular magnet contribute to the forces acting on it. The configuration of all the applied forces serves as a pattern for the configuration of the little magnets, any local figure in the forces giving a local structure. Finally, the little magnets stay in the positions they had at the time the forces cease to be applied. To see how much alike magnetizing and learning may be, we may write the analogies in parallel order: the random net composed of neurons with the initial unmagnetized bar; the formation of synapses with the magnetization; excitation, with the applied field; concomitant activity in nearly connected neurons making for structural changes with the mutual influence of neighboring magnets; enduring things that potentiate figures of stimulation and the consequent local stimulating of the net with the magnetic induction; and finally, the abiding of those links which were last used as we came to our goals and the coming of reflexive or appetitive activity to its end with the final state of permanent magnetism in the bar. It is not too much to hope that with these things in common the mathematics for one may be shaped to fit the other. If it will serve, then we may someday state how random nets may learn by taking on this or that local structure.
As we have already seen how, in a net made to order, we can embody given ideas and the possibility of learning any others that are significant functions of the given ideas, it is now clear how the development of local structures in random nets will complete the theory of the induction of all ideas of the types we have imagined. This may be done in some few years.
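
The parallel can be sketched as a learning rule, though the rule below is a hypothetical stand-in (a simple Hebbian strengthening), not anything McCulloch specifies: links between concomitantly active neurons are strengthened, as neighboring magnets turn one another into alignment, and the weights abide when stimulation ceases, like the residual magnetism of the bar.

```python
def hebbian_step(weights, activity, rate=0.1):
    """Strengthen the link between every pair of concomitantly active
    neurons, as neighboring magnets align one another.

    weights[i][j] is the strength of the link from neuron i to neuron j;
    activity is a list of 0s and 1s, the all-or-none states of the
    neurons at one moment. The matrix is changed in place and returned.
    """
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j and activity[i] and activity[j]:
                weights[i][j] += rate
    return weights
```

After repeated presentation of one figure of stimulation, the links among its co-active neurons stand out as a local structure, and that structure remains after the "applied field" is withdrawn.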

What troubles my dreams is this: that, when it is done, it will not be enough. No finite series defines its law of formation nor any data the hypothesis. I fear that the nervous system that learns as iron is magnetized, while it will give us a next number or a new experiment, may not give us the law of formation of the series (say the idea, the cardinals or the primes) or the hypothesis (say the law of gravitation, or the second law of thermodynamics). To use an idea is a finite act, which begins and ends in time; and when it occurs, it must not merely give the next number or experiment, for an infinite number of ideas or hypotheses give the same next. It must set the law forth in signs of some sort. There is no other way to know that the law exists. In a finite net we seek a kind of finite action that can be repeated as often as desired, and can construct and recognize notions proper to the net. I know of nothing but circular paths that embody the possibility of such actions. They have issued in memory, learning, prediction, and purpose. Each circuit has embodied an idea of a kind not to be had without it. If these circuits can be engendered by learning in a random net, and that learning can be described in much the same manner as the magnetizing of a bar of steel, we may be able to picture to ourselves how a natural nervous system can come to have notions of the cardinals or hypotheses like the laws of mechanics.

It should be clear that we can make a net that will frame hypotheses under any rule which prescribes the induction for every case, but we do not know the rule for one animal, or that there is a single rule for any two. We must not be hoodwinked by any uniformities in the behavior of several animals into the superstition that they frame their hypotheses by one and the same rule; for, just as two hypotheses may agree in prediction of the next experimental datum, or two ideas like the cardinals and primes in the series 1, 2, 3, so in simple cases several rules of induction may concur in the same hypotheses or ideas. What is more, we have reason to believe that the very rules of induction are not prescribed at birth, so that, even if we were born alike, our unlike fortunes would diversify our rules.

All learning, including the process whereby the rules of induction are perfected, orders step by step an ensemble erstwhile chaotic. And whenever this, which is a change of state, happens to an ensemble, the statistical variables that characterize it no longer require merely the first few members of the probability-distributions of monads, dyads, triads, etc., of the elements of its component systems, but instead depend upon the ultimate trends of these distributions of n-ads as n increases without limit. For no task in physics is our mathematics so feeble or so far to seek. Fortunately for us, this change of state, which happens to water in a moment of freezing, goes merrily on in our brains as long as we can learn, and some of us may live to share the fun of concocting the required mathematics.

Yet, even were this done today, and done in our two-valued calculus of actual propositions so that we were sure no other could be needed to describe all human thinking in terms of nervous nets, every philosopher should know that this confers no primacy on any one among all possible logics of signs nor confines them a priori in any way. Our calculus of actual propositions is but formally similar to a part of the two-valued calculi of statements and of classes. Tautologies, which are the very stuff of mathematics and logic, are the ideas of no neuron. The formal similarity is fortuitous. Had our calculus of actual propositions been three-valued, our nervous systems could have made two-valued logics. Their calculus being in fact two-valued, they have made for themselves, three, n, and infinite-valued calculi, as well as others yet stranger. Non-Euclidean geometry breached Kant's synthetic a priori; metalogic has razed the ruin; and, despite Magnus, no appeal to physiology can restore any part of it. It becomes us to be humble to the facts even in that strange case in which they are the work of our own heads.

In any case it is clear that we may go forward safely, using the logical calculus of actual propositions which we have learned from these wooden soldiers. Like good scientists, they have raised and formulated for us the problem of induction when we had only put to them, and they had solved, the problems of knowledge and of value. I am, therefore, sorry to have to inform you that their exemplary conduct was predicated upon a superstition. Because it is not yet quite vain, I share it with them. The name of that superstition is causality.


  1. Fessard, A., and J. Pasternak: Les mécanismes élémentaires de la transmission synaptique. J. Physiol. Path. gén., 2:319-446, 1950.
  2. Lloyd, D. P. C.: Principles of Nervous Activity, Chapters I-IV. In A Textbook of Physiology, J. Fulton, ed. Philadelphia, W. B. Saunders, 1949.
  3. Lorente de Nó, Rafael: A Study of Nerve Physiology. Studies from The Rockefeller Institute for Medical Research, vols. 131 and 132, 1947-
  4. Lorente de Nó, Rafael: Transmission of impulses through cranial motor nuclei. J. Neurophysiol., 2:402-464, 1939.
  5. McCulloch, W. S., and W. Pitts: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5:115-133, 1943.
  6. McCulloch, W. S.: The brain as a computing machine. Elect. Engineering, June 13, 1949.
  7. McCulloch, W. S., and J. Pfeiffer: Of digital computers called brains. Sc. Monthly, 6:368-376, 1949.
  8. McCulloch, W. S.: A heterarchy of values determined by the topology of nervous nets. Bull. Math. Biophys., 7:89-93, 1945.
  9. McCulloch, W. S.: The functional organization of the cerebral cortex. Physiol. Rev., 3:390-407, 1944.
  10. McCulloch, W. S.: Modes of functional organization of the cerebral cortex. Federation Proc., 2:448-452, 1947.
