
BACKGROUND

Moderator

Warren S. McCulloch

Massachusetts Institute of Technology

I intend to discuss a very simple figure. My object in doing it is to diagram a bit of what we know about neurons and to lay bare those kinds of difficulties that may happen in their circuit action. The figure represents merely a neuron, stylized, with afferent impulses coming to it. The afferent impulses may be excitatory or may be inhibitory. The connections may be what they are supposed to be or not what they are supposed to be. The threshold of the cell may fluctuate, making it fire for reasons for which it is not supposed to fire. The strength of the signal may fluctuate, making trouble for us, and finally, although the nerve cell may have done a very good job of computing the right function, there may appear troubles on its axon, peripheral to all these mechanisms. This is known in the laboratory as "trouble with the caretaker's daughter."

Figure 1 is a nerve cell with its branches. Signals come to it from various sources. They may end on and excite the cell body, or they may end on or among the dendrites, in which case they inhibit. This cell has a threshold. It may be that this threshold is what it is supposed to be, or that it fluctuates. It may be that these connections are what they are supposed to be, or not. Unless there is a proper synapsis, the proper connection to this cell may not be realized. Finally, the strength of the signals themselves may fluctuate, and at the last, after all of these are taken into proper consideration, there may appear false signals on the output, or they may fail to appear where they should have, which we call the residual woe, the epsilon.

Figure 1

The men who will follow me will be talking about circuits composed of such neurons and they will go, one after the other, into each of these difficulties and how to design circuits that are correct in spite of the misbehavior of the level of signal strength, connections, threshold, or even spurious signals, or loss of signals in the axon. These are the only conceivable sources of difficulty with real neurons.
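As a minimal sketch of the kind of component just described, the Python fragment below treats each source of trouble as a perturbation of a simple threshold rule; the particular weights, thresholds, jitters, and error probability are illustrative assumptions, not values taken from the figure.

import random

def noisy_neuron(inputs, weights, threshold,
                 threshold_jitter=0.0, weight_jitter=0.0, axon_error=0.0):
    """One firing decision of a stylized neuron with the sources of trouble above.

    inputs           -- 0/1 afferent signals
    weights          -- +1 for an excitatory ending, -1 for an inhibitory one
    threshold        -- nominal threshold of the cell
    threshold_jitter -- how far the threshold may wander from its nominal value
    weight_jitter    -- how far each signal's strength may wander from +1 or -1
    axon_error       -- probability of a spurious or missing signal on the axon
    """
    # Connections and signal strengths may not be what they are supposed to be.
    drive = sum(x * (w + random.uniform(-weight_jitter, weight_jitter))
                for x, w in zip(inputs, weights))
    # The threshold may fluctuate, making the cell fire when it is not supposed to.
    fires = drive >= threshold + random.uniform(-threshold_jitter, threshold_jitter)
    # Trouble on the axon, peripheral to all the rest: the output may be flipped.
    if random.random() < axon_error:
        fires = not fires
    return int(fires)

# Example: three excitatory endings, nominal threshold 2, modest fluctuations.
print(noisy_neuron([1, 1, 0], [+1, +1, +1], threshold=2,
                   threshold_jitter=0.5, weight_jitter=0.1, axon_error=0.01))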

A point that may need clarification is the history of the Venn symbols. Beginning in the days of Ramon Lull, about 1230, 1240, in the teaching of logical affairs and in their investigation, the trick was developed of using a closed curve, usually a circle, to include all of the objects of some class--let's call them of the class A--and another such circle, intersecting it, to include all objects of the class B; then the common area of the two circles is the class of all objects that are both A and B, and everything that is outside is neither A nor B. These symbols came down in a tradition through the days of Leibniz and formed for him the basis of the universal characteristic, and it was he who first tried, in terms of these, to build himself a computing machine. He spent his latter years and most of his money doing it. The symbols then proceeded to fade and were brought back about the middle of the last century. No, before that, by Euler, who was then teaching logic to a little German princess. From that time on they were put on the blackboard of Venn and all of his followers in teaching logic. The difference in their use here from that of Venn is only that we are concerned with objects which are propositions, or rather are statements, impulses of neurons, which propose their proper excitation. Thus, what we have done is to build in these Venn symbols what is in substance a Wittgenstein truth table for the propositions in question. So that when one calculates with these symbols, what he is actually doing is performing a truth-table calculation of the truth of the functions which he is computing. The two are indistinguishable in use. Now, Venn's famous diagram for four affairs looks something like this (figure 2). Here, we have four closed curves, each of which divides the spaces produced by the previous curves into two parts.

Figure 2

This is all that is necessary to make such symbols. They become very difficult, as Venn himself discovered, the minute one goes to more than four. However, Selfridge and Marvin Minsky have worked out a manner of doing this with sinusoidal curves, always doubling the period and amplitude of the curve. This makes perfectly happy figures. In this manner, one can extend Venn symbols to any number of arguments that one cares to, and no ambiguity arises in their use.

Figure 3

This is the history of these symbols. In practice, for two arguments, it does not pay to draw out the whole of the circles. We simply draw the intersection at the lower limit (figure 4), the jot at the bottom being outside of both and the jot at the top being inside of both, so we have merely left off these lines which are unnecessary in practice. The advantage of these symbols is that one can immediately write any logical function he chooses of any number of arguments that he chooses and operate it on any others that he chooses, and one becomes enormously adept with them at the blackboard.

Figure 4
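The correspondence between these symbols and a truth table can be set out mechanically: each of the 2^n regions of the diagram is one row of the table, and a jot in a region asserts that the function is true for that combination of arguments. The following Python fragment is a sketch of one such encoding, offered as an illustration rather than as a notation from the talk.

from itertools import product

def venn_regions(n):
    """Each region of a Venn diagram for n classes is one truth-table row:
    a tuple recording, for each class, whether the region lies inside it."""
    return list(product([False, True], repeat=n))

def jots_for(function, n):
    """The jotted regions of the Venn symbol for a given logical function."""
    return {row for row in venn_regions(n) if function(*row)}

# 'A and B' jots only the common area of the two circles.
print(jots_for(lambda a, b: a and b, 2))        # {(True, True)}
# 'Neither A nor B' jots only the area outside both.
print(jots_for(lambda a, b: not (a or b), 2))   # {(False, False)}
# Nothing stops us at four arguments, just as with the sinusoidal curves.
print(len(venn_regions(4)))                     # 16 regions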

A second question is, "What is a Sheffer stroke function?" A Sheffer stroke function is simply an assertion of the incompatibility of two affairs. It has been shown that with this, in the simple form of "not both" or in the simple form of "neither, nor," this one logical operation and the arguments are sufficient to generate all logical functions of those two arguments. What we have done, principally, has been to generalize this to any number of arguments, and this was the first of what we call polyphecks, after Charles Peirce, who invented them some 30 or more years before Sheffer.
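As a brief sketch of the point, assuming the "not both" form of the stroke, the Python fragment below derives negation, conjunction, and disjunction from the stroke alone and checks them over the whole truth table; the same construction goes through for the "neither, nor" form.

def stroke(a, b):
    """The Sheffer stroke: an assertion of the incompatibility of a and b ('not both')."""
    return not (a and b)

# The usual operations fall out of the stroke alone.
def NOT(a):    return stroke(a, a)
def AND(a, b): return stroke(stroke(a, b), stroke(a, b))
def OR(a, b):  return stroke(stroke(a, a), stroke(b, b))

# Check against the ordinary definitions for every pair of truth values.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
print("The stroke alone generates not, and, or.")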

I have been asked to explain a little bit more about the use of circuits where one considers only the variations of the threshold of the neuron as responsible for the trouble. Under such circumstances, suppose one deals with neurons having three inputs apiece, as in this circuit (figure 5), coming from sources A, B, and C and playing upon this row of neurons, and brings them on with strengths four, two, one, around the cycle. Then, if the threshold of these neurons is set at the value 7, each will fire only when all inputs are fired. On the other hand, if the value of the threshold falls, each of that top rank of neurons, when the threshold reaches the value of 6, will fire for one pair of inputs, but not for any other pair. There are three such pairs. Each neuron is unique in the world in which it makes an error. Consequently, no two can misbehave at the same time. When the threshold has fallen to the value of six, each component being made ready to make a mistake, only one can make such a mistake in any one world. The result is that the output neuron, which should fire only when all three fire, may be ready to fire not only when all three fire but when any pair fires, and the circuit will still make no errors. These are the kinds of circuits which are, in this sense, infallible: though each component has shifted so that it is computing a wrong logical function, the net as a whole still computes the correct logical function. You will notice here that the Venn symbols, instead of having merely jots and blanks, have in them a p, the probability of the misbehavior. This p operates in the Venn symbols the same way that ones and zeros do in all the computations, resulting in a probabilistic logic, in terms of which we can handle these difficulties.

Figure 5
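The claim can be checked by brute force. The sketch below assumes the reading given above of strengths four, two, one rotated around the cycle, first-rank thresholds that may fall from 7 to 6, and an output neuron of unit weights whose threshold may fall from 3 to 2; the output figures are illustrative assumptions rather than values stated in the talk. Every combination of input world and threshold shift is enumerated, and none produces an error.

from itertools import product

# First rank: three neurons, each seeing A, B, C with strengths four, two, one,
# rotated around the cycle.
FIRST_RANK = [(4, 2, 1), (1, 4, 2), (2, 1, 4)]

def fires(weights, inputs, threshold):
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

errors = 0
for world in product([0, 1], repeat=3):               # every state of A, B, C
    for thresholds in product([7, 6], repeat=3):      # each first-rank threshold may fall
        for out_threshold in (3, 2):                  # the output threshold may fall too
            rank = [fires(w, world, t) for w, t in zip(FIRST_RANK, thresholds)]
            output = sum(rank) >= out_threshold       # output neuron with unit weights
            intended = all(world)                     # fire only when A, B, and C all fire
            errors += (output != intended)

print("errors over every world and every shift:", errors)   # 0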

PROPERTIES OF A NEURON WITH MANY INPUTS

Manuel Blum

Massachusetts Institute of Technology

SECTION I: INTRODUCTION

A formal neuron is a logical device with well-defined properties. This paper gives some theorems about the neuron which clarify the interesting properties of circuits using such neurons as components.

In his paper on Probabilistic Logics (7), John von Neumann posed and attempted a solution of the problem of reliability in nets of computer components with 2 or 3 inputs. However, his solution required better components than could be expected in the brain.

The search for a solution was continued by Warren S. McCulloch (1, 2, 3). Dr. McCulloch postulated a formal neuron, which we shall simply call "a neuron," and gave physiological evidence for the choice of this model. He connected the neurons in nets with outputs more reliable than the outputs of the individual neurons. This paper is a mathematical investigation of many-input neurons such as are contained in these nets.

The FORMAL NEURON is a computer component with the following properties:

1. It receives fibers from δ inputs and has one output.

2. Each input and the single output may be either ON or OFF.

3. Fibers from an input may divide, but may not combine with other fibers.

4. A fiber may excite a neuron with a positive unit (+1) of excitation (excitatory fiber) or excite a neuron with a negative unit (-1) of excitation (inhibitory fiber). A fiber may also inhibit a signal which passes through another fiber (fig. 1).

5. Signals may travel in only one direction through the neuron.

6. There is a unit time delay in the transmission of a signal through the connection between input fiber and neuron.

7. If the neuron makes no error, it fires when and only when the arithmetic sum of excitatory and inhibitory signals to it exceeds some specified THRESHOLD (θ). (A sketch of these properties in code follows this list.)
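A minimal sketch of these properties in Python; the particular weights, threshold, and number of inputs are illustrative, the unit delay is represented by computing the output of the next time step from the inputs of the present one, and the fiber-on-fiber inhibition allowed under property 4 is not modeled.

class FormalNeuron:
    """A sketch of the formal neuron defined above: each input fiber carries a
    positive unit (+1, excitatory) or a negative unit (-1, inhibitory), signals
    travel one way only, and, absent error, the neuron fires when the arithmetic
    sum of its excitatory and inhibitory signals exceeds the threshold theta."""

    def __init__(self, weights, theta):
        assert all(w in (+1, -1) for w in weights)    # property 4: unit excitation or inhibition
        self.weights = weights
        self.theta = theta

    def output_at_next_time(self, inputs):
        """Property 6: unit delay -- the inputs at time t fix the output at t + 1."""
        assert all(x in (0, 1) for x in inputs)       # property 2: each input is ON or OFF
        drive = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if drive > self.theta else 0         # property 7: fire when the sum exceeds theta

# Example: three excitatory fibers, one inhibitory fiber, threshold 1.
neuron = FormalNeuron([+1, +1, +1, -1], theta=1)
print(neuron.output_at_next_time([1, 1, 0, 0]))   # sum 2 exceeds 1, so it fires: 1
print(neuron.output_at_next_time([1, 1, 0, 1]))   # sum 1 does not exceed 1: 0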
