AUM Conference Transcript--Session One
Monday Morning, March 19, 1973
JOHN LILLY: G. Spencer Brown--enigmatic figure to say the least. His book preceded him. We know less about him than we know about Carlos Castaneda. His book expresses a good deal that is impersonal, universal; and hence, the man is kind of hidden by the book. I began to find the man when I found his second book, published under a pseudonym, James Keys, called Only Two Can Play This Game, and as soon as I read Footnote One, I suddenly realized what Laws of Form was all about. And with that I will leave the discussion to G. Spencer Brown: James.
G. SPENCER BROWN: Well, I don't know what to say. It is a great pleasure, a great honor, to be here. I don't feel that I deserve the honor in any way. It is also--I think I feel rather nervous, as this audience has so many and so different qualifications.
Mathematics and Logic
I don't hope to do anything but really answer any questions that anybody has to ask about the nature of what I was trying to do when I began to write Laws of Form and the very different answer, what I actually found, that had appeared when I had finished the book, which was not what I had set out to do. I guess that is the only way that I can begin to talk about the work, which is as far as I am concerned entirely impersonal. It has as little to do with me personally as anything I can imagine. I have no particular attachment to it. I wouldn't do it again if anybody asked me to. I was conned into writing it by thinking that it would have an entirely different effect from what it did have; and, in completing it, I unlearned what I learned, the kind of values that present-day civilization inculcates into us soon after we are born. And I learned that it was all the same anyway, whichever state one went into. It is only by assuming that some states, or that a state, one State, may be better than another, that the universe comes into being. The universe, as I then discovered, is simply the result of if it could be that some state had a different value from some other state. But that is to start at the end.
At the beginning, what I was concerned to do was--having left the academic world and gone to living in London, I became an engineer. And I was detailed to make circuits for the use of the new transistor elements that were coming into being for making special purpose computers. I was employed by a firm then known as Mullard Equipment, Ltd., a branch of the Philips organization, and I was employed not because of any engineering qualifications but because I had taught logic at Oxford and it was recognized that, in fact, the study of logic in some form or another was necessary to designing circuits involving on-off switches. So I began with the very specific task of applying what I knew to these circuits, to see if we could devise rules for designing that would save money.
I rapidly found that the logic I had learned at the University and the logic I had taught at Oxford as a member of that lofty faculty wasn't nearly sufficient to provide the answers required. The logic questions in university degree papers were childishly easy compared with the questions I had to answer, and answer rightly, in engineering. We had to devise machinery which not only involved translation into logic sentences with as many as two hundred variables and a thousand logical constants--AND's, OR's, IMPLIES, etc.--not only had to do this, but also had to do it in a way that would be as simple as possible, to make the machines economically possible to construct--and furthermore, since in many cases lives depended upon our getting it right, we had to be sure that we did get it right.
For example, one machine that my brother and I constructed, the first machine I mentioned in Laws of Form, counts by the use of what was then unknown in switching logic; it counts using imaginary values in the switching system. My brother and I didn't know what they were at the time, because they had never been used. We didn't at that time equate them with the imaginary values in numerical algebra. We know now that's what they are. But we were absolutely certain that they worked and were reliable, because we could see how they worked. However, we didn't tell our superiors that we were using something that was not in any theory and had no theoretical justification whatever, because we knew that if we did, it would not be accepted, and we should have to construct something more expensive and less reliable. So we simply said--"Here it is, it works, it's O.K.," and British Railways bought it, we patented it, and the first use for it was for counting wagon wheels. It had to count backwards and forwards, and we had one at each end of every tunnel. When a train goes into a tunnel, the wagon wheels are counted, and when it comes out, they are counted. If the count doesn't match, an alarm goes out, and no one is allowed in that tunnel--at least, not very fast.
This had to be not only a very reliable counter, it had to count forwards and backwards, because--you know what happens when you get on the train: it goes along and then it stops and then it goes backwards for a bit, goes forwards. So, if the train was having its wheels counted, and then, for any reason, ran out of steam and got stuck and then slipped back, then the counter had to go backwards. So all this we had--but we made it in a way which was very much simpler, and very much more reliable because of being so simple, than the counters in use at that time, which amounted to much more equipment, many more parts. This device was patented. The patent agent of the British Railways, who patented it--of course, we never told him what he was writing out. We just told him to write this down. And it worked, it has been used ever since, and though there have been many disasters in British Railways since that time, not a one of them has consisted of any train running into a detached wagon in a tunnel. Fingers crossed, touch wood.
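The logic of the safety check can be sketched in a few lines. This is my own minimal illustration, not the original relay/transistor design: wheels are counted in at one end of the tunnel and out at the other, the counter runs both ways because a train may back up over a sensor, and the tunnel is declared clear only when the counts cancel.

```python
# Sketch of the bidirectional wheel-count check: +1 per wheel entering,
# -1 per wheel leaving; a nonzero count means wheels went in that never
# came out, i.e. a wagon may be detached inside the tunnel.

class TunnelCounter:
    def __init__(self):
        self.count = 0          # wheels currently inside the tunnel

    def wheel_in(self):
        self.count += 1

    def wheel_out(self):
        self.count -= 1

    def tunnel_clear(self):
        # Alarm condition is a mismatch: return False until counts cancel.
        return self.count == 0


c = TunnelCounter()
for _ in range(8):
    c.wheel_in()                # eight wheels enter
c.wheel_out()
c.wheel_in()                    # train slips back over the sensor, then forward
for _ in range(8):
    c.wheel_out()               # all eight leave at the far end
print(c.tunnel_clear())         # True: counts match, tunnel is clear
```

Because reversing a wheel over a sensor simply undoes its count, the same event stream handles a train that stops and backs up mid-tunnel.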
We made many other devices, and during this time I realized that, unfortunately, it would be necessary for somebody to write up the mathematical basis of the new principles that we were using. And I realized that if I didn't do this, it would be very hard to find anybody who would. And so I started writing it up.
After we had been using the new principles for about a year, most of the discoveries had been made. I wasn't quite sure of the theoretical basis of some of them. For example, that what we were using in the tunnel was imaginary Boolean values to get a perfectly safe, reliable answer, which was quite definite--this I didn't realize for another six years. But most of the principles, by that time, I did realize. They were the whole of the mathematical basis of what we were using, which was switching algebra, commonly called Boolean algebra and the algebra of logic.
I must point out for emphasis at this time that the switching use and the use in checking a logical argument are two entirely different applications of the same mathematics. The same mathematics underlies both, but it is not the same as any one of its interpretations. In other words, the mathematics in Laws of Form is not logic; logic is one of many of its interpretations. Just as, when one does electronics, the electrical application is not itself the mathematics but one of the interpretations of the mathematics.
Logic, in other words, is itself not mathematics, it is an interpretation of a particular branch of mathematics, which is the most important non-numerical branch of mathematics. There are other non-numerical branches of mathematics. Mathematics is not exclusively about number. Mathematics is, in fact, about space and relationships. A number comes into mathematics only as a measure of space and/or relationships. And the earliest mathematics is not about number. The most fundamental relationships in mathematics, the most fundamental laws of mathematics, are not numerical. Boolean mathematics is prior to numerical mathematics. Numerical mathematics can be constructed out of Boolean mathematics as a special discipline. Boolean mathematics is more important, using the word in its original sense: what is important is what is imported. The most important is, therefore, the inner, what is most inside. Because that is imported farther. Boolean mathematics is more important than numerical mathematics simply in the technical sense of the word "important." It is inner, prior to, numerical mathematics--it is deeper.
Now at the beginning of 1961, the end of 1960, having set out, first of all, as an exercise in what I thought was logic, I began to write it out. Realized it wouldn't fit. Took it back. Took it back, got it farther and farther back until I got it right back, what we had been working on in engineering and the principles of it, right back to the simplest ground and the simplest obvious statements about the ground one had constructed. And at the end of 1960, I had become conscious that the whole of this mathematical world could be taken to the simplest of all grounds, and the grounds were only that one drew a distinction. The defining of a distinction was a separation of one state from another--that is all that was needed.
* * *
This was all that was needed to make the whole of the construction which is detailed in Laws of Form, and which will suffice for all the switching algebra, train routing, open/shut conditions, decision theory, the feedback arrangements, self-organizing systems, automation--and, amusingly enough, the logic by which we argue, the logic that is the basis of the certainty of mathematical theorems. In other words, the forms of argument which are agreed to be valid in the proof of a theorem in mathematics. To give you a simple one: "If x implies not-x, then not-x." That is a commonly used argument. . .I can be sure that it is valid by the principles of the mathematics itself that underlies it.
The arguments used to validate the theorems in Laws of Form, as we now begin to see, are themselves validated by the calculus dependent upon those theorems. And yet, in no way is the argument a begging of the question. Now this is rather hard to understand, and perhaps it may come up in discussions later. Petitio principii, begging the question, is not a valid argument; it is a common fallacy. In no way is the question begged; but in producing a system, in making its later parts come true, we use them to validate the earlier parts; and so the system actually comes from nothing and pulls itself up by its own bootstraps, and there it all is.
Nowhere does this become more evident than in this first and most primitive system of non-numerical mathematics; and I am quite sure--no, I will not say I am quite sure, when one says "I am quite sure" it means one is not quite sure--and I guess, I guess that the reason this branch of mathematics has been so neglected hitherto is that it is a bit too real. It is a bit too evident what game one is playing when one plays the game of mathematics.
If one starts much further away from the center, then you don't see the connections of what you are doing. You don't see that what comes out depends on what you put in. You can devise an academic system that goes on the assumption that there is objective knowledge, which we are busy finding out. We have come along here with wide-open eyes, and what we see over there--we come along and we give a demonstration, and we write it out, etc., and when somebody says, "But just what is it that gives the formula that shape? Why is it that shape and not some other shape? What is it that makes these things true? What is it that makes it so that when we see this, what makes it so--why isn't it otherwise?" And the stock answer is--"Ah, well, that is how it is, and that is the mystery."
Mystery, after all, doesn't mean that we scratch our heads and look in astonishment and amazement. Mystery means something closed in. A mystic, if there is such a person, is not a person to whom everything is mysterious. He is a person to whom everything is perfectly plain. It's quite obvious. And the person who designates himself a non-mystic, and has nothing to do with that kind of "woolly thinking," is a person, an ordinary academic, who writes down his mathematical formulae, and when people say "Why do they look like that, why don't they look some way else"--"Well, they just are that way--it's perfectly justified by mathematics--if you do mathematics, that's what you have to learn to do." In fact, when one starts from the beginning, there is nothing to learn. There is everything to unlearn, but nothing to learn.
KURT VON MEIER: When you told us about tunnels I saw the great psychocosmic projection of images and tales of the parable of Plato's cave. So I imagine you have provided us with the parable of the tunnel. It is in the shape of the hole of a doughnut, topologically, so we could look for the seven-color rainbow with which to color it. See the map of a torus--it is seven colors.
SPENCER BROWN: Do you know the proof of that?
VON MEIER: I think there have been many attempts--
SPENCER BROWN: It has been proved. I haven't actually followed the proof of that, although the question is interesting topologically.
Coloring a Torus
I believe the principle by which you can prove that you can color the surface of a torus with seven colors is wholly different from the principle by which--if it is true that you can color the surface of a sphere with four colors--by which it would be proved. I have a feeling about the general question . . . have you looked at it like this: in any surface, if you take a small enough part of it, you have again the problem of the plane. Because a small enough part of any surface is, for all intents and purposes, a plane. You take a little bit of a torus and now you have a four-color theorem again. As long as you don't go round . . . and round [Figure 1]. The four-color theorem is contained in every theorem about surfaces . . . And so it is a different kind of theorem.
MAN: Is there a question of which is prior, or that sort of thing?
SPENCER BROWN: Well, I think there is a difference like this: you can prove the color number--like, for a torus it is seven. It needs a minimum of seven colors to be sure of coloring a map on a torus so that no two bordering areas are the same color. I think that these are all decidable using what is currently allowed in mathematics: Boolean equations of the first degree only. I think that the reason we cannot, why we never have decided the four-color theorem and a number of other theorems, like Fermat's last, and Goldbach's, is not that they are undecidable. The questions can be asked, and, Wittgenstein was right about this, if a question can be asked, it can be answered. There is a definite answer to all these questions; these theorems are actually true or false. Why we cannot decide them is that they need, in fact, at least equations of the second degree in the Boolean argument, and possibly use of the imaginary values. The answer would be quite definite, just as the answer to the number of wagon wheels is quite definite, although the actual logic has used imaginary values.
VON MEIER: We are getting into interesting space. I can see a wagon wheel as something of an iron doughnut, if you like. It's been put on the axle of a train. And if you see the tunnel of the British railway system and consider the space that flows through that tunnel as going around the whole earth and inside the tunnel and then around up to the sky, what we have, in fact, is a superdistended doughnut. And what you were explaining about the wagon wheels passing through the doughnuts, then, would seem to me to be something to do with the space that's ruled by the spirit of inside the doughnut, provided by the doughnut hole. What kind of changes can take place inside that domain? It is in a field--there are analogs in physics that define the inside of the doughnut as continuous, and get, nevertheless, in a special way distinguished from the space of the rest, the outside, of the doughnut.
SPENCER BROWN: I pass on that one.
ALAN WATTS: A human being is topologically a doughnut.
LILLY: Have you formulated or recommended an order of unlearning?
SPENCER BROWN: I can't remember having done so. I think that, having considered the question, the order of unlearning is different for each person, because what we unlearn first is what we learned last. I guess that's the order of unlearning. If you dig too deep too soon you will have a catastrophe; because if you unlearn something really important, in the sense of deeply imported in you, without first unlearning the more superficial importation, then you undermine the whole structure of your personality, which will collapse.
Therefore, you proceed by stages, the last learned is the first unlearned, and this way you could proceed safely. Related to what is in the books, we know they say that in order to proceed into the Kingdom, one must first purify oneself. This is the same advice, because the Kingdom is deep. What we talk of in the way of purification is the superficial muck that has been thrown at us. First of all that must be taken off, and the superficial layers of the personality must be purified. If we go to the Kingdom too soon, without having taken off the superficial layers and reconstructed in a simpler way, then there is a collapse. The advice is entirely practical. It is not a prohibition. There is no heavenly law to say that you may not enter the Kingdom of Heaven without first purifying yourself. However, if you do, the consequences may be disastrous for you as a person.
This is why in psychological, in psychotherapeutic treatment, normally the defenses are strong enough. As the psychiatrists will usually tell you, "If I push in this direction, you will be able to withstand me if you really need to." And it is much the same in all medicine. A rule I learned--I guess one learns it here, John, in the treatment of physiotherapy, manipulation of the limbs, etc.--we are allowed to go and pull them around with our little strength, but not to use machinery, because that may break something. The body can normally defend against one other body, and you don't usually break anything as long as you use one physiological equipment against one other. It is usually the same with one mind against one other: the other mind is strong enough to withstand it. Start using other methods, drugs and/or mechanical treatment, and there you may do damage. You may get past defenses which were there in order that the personality should not be broken down too much, too soon.
WATTS: There is a value assumption in here about what is broken down. What is disaster, what does that mean?
SPENCER BROWN: Well, it is a value judgment, true enough. In reality, it is all the same. In reality, it is a matter of indifference, but we are not here in reality. We are here on a system of assumptions, and we are all busy maintaining them. On that system, then we can say, "Well, that will keep the ship afloat, and this will pull the plug out and we will all sink."
Degree of Equations and the Theory of Types
DOUGLAS KELLEY: As we go from second order equations to third order, I imagine you would like to maintain your two, and only two, states, the marked and the unmarked. And if that is the case, in going from second to third order, do you get a more generalized concept of time, or a little different one--I am just wondering what a third order equation would look like.
SPENCER BROWN: Well, I think you mean "degree"--first, second, and third degree equations.
KELLEY: A degree of indeterminacy, yes.
SPENCER BROWN: Now, basically, once we have gone into the second degree, and it applies in numerical mathematics as elsewhere, you have added another dimension to your system. In going to higher degrees, you don't really add; because, when you start with the form, the form is just having drawn a distinction. You now have two states. It's the simplest, widest term I could possibly use here--states on earth, anything, you see, states of mind, anything at all. You have two states, which are distinguished. That is all you need.
Now, the whole of the first degree equations in the Boolean form are in terms of these two states. When you do this peculiar thing of making something self-referential, that is, making the answer go back into the expression out of which the answer comes, you now automatically produce this set of possibilities which are well-known in numerical mathematics and which everyone has been terrified of looking at in Boolean mathematics. And Russell/Whitehead were so frightened of these that they just had a rule, with no justification whatsoever, that we just don't allow it, we don't even allow people to think about this.
Now what nobody saw was that in numerical mathematics we had this going for years. As I showed in the preface to the American edition of Laws of Form, any second degree equation--perhaps, for those of you who don't know it, perhaps I should put it up on the blackboard if nobody has objections to my using chalk.
You see, what Whitehead/Russell didn't allow was a self-referential statement; they didn't allow a statement to refer to itself. ["This statement is false" written on board.] Suppose that this statement is true; then it can't be true, because it says that it is false. O.K., then supposing it is false; then it must be true, because it says that it is false. And this is so awful, so terrifying, that they said, "Right. We will produce a rule. We call it the Theory of Types to give it a grand name." The Theory of Types--it is as much unlike what it says as possible, so that when someone says, "Well, what is the rule by which you can't have this?"--"It's the Theory of Types," so that the people who are learning think that there is a huge theory, you see, and when you understand this theory you will realize why it is that you can't have such a thing. There is no such theory at all. It is just the name given to the rule that anything like this you must do this to. [Erases it.] That is the Theory of Types.
What they hadn't done was scratch out something like this. [Writes x^2 + 1 = 0.] I'll just put it in the mathematical form. You see, Russell, as a senior wrangler, or second wrangler, in mathematics, should have been familiar with this equation. But he never connected it with what he had done.
Now here is an equation which admittedly had a bad name for years. But it was so useful that all of phase theory in electricity depends on it. So let's fiddle with it. Here is our equation. We want to find the roots. We want to find the possible values of x. So let's fiddle with this and have a look for them. Well, here we go. Here we just subtract one from both sides; now we'll divide both sides by x. Well, x-squared divided by x is x, equals minus one over x. Well, now, we see that we have in fact a self-referential equation. Everybody can see that. Let's have a look at this equation x = -1/x, and see whether it is amenable to any form of treatment, psychiatric or something. You have to psychoanalyze it.
The thing that makes the former statement so worrying, so frightening, is that we have the assumption that the statement, if it means anything at all, is either true or false. Here, we have the assumption that the number system runs . . . -3, -2, -1, zero, 1, 2, 3 . . . and it goes on infinitely in an exact mirror both ways. So we assume that the number is not zero--zero is meaningless in the logic form. The statement is not meaningless. It is either positive or negative. We have got to make that analogy here. We equate "positive" with "true," and "negative" with "false"--it doesn't matter which is which. So here is our number system as defined. Here is our equation from which we are supposed to find the possible values that x can take. Now, we know that the equation must balance . . . so first of all we'll seek the absolute numerical value of x, irrespective of the sign, whether it's positive or negative. Now suppose x were greater than one--not bothering about the sign for the moment--then this clearly would be--not bothering about the sign--less than one. If x were less than one, then you have got something bigger over something smaller, and this would be greater than one. So the only point at which it is going to balance numerically is if x is a form of unity. Because you can see perfectly well that if this is greater then that would be smaller, and if that is smaller, this would be greater.
So we have only got two forms of unity--plus one, minus one. So we'll try each in turn. So suppose x equals plus one; now we'll substitute for x in this equation and we have minus one over plus one equals minus one: +1 = -1/+1 = -1. So you've got plus one equals minus one. So try the other one, there is only one more: x equals minus one. Now we have minus one over minus one equals plus one: -1 = -1/-1 = +1. So we have exactly the same paradox this time. Instead of "true" and "false," we have got "plus" and "minus." So using the Theory of Types consistently, the whole of the mathematics of equations of degree greater than one must be thrown out. But we know perfectly well that we can use this mathematics. What we do here is that effectively we have an oscillatory system--just as in the case of Laws of Form, if you put it mathematically, we have x cross equals x, (x) = x, a cross going back into itself.
Supposing it is the marked state, then it puts the marked state back into itself, and the marked state within a cross produces the unmarked state outside: (( )) = . So this rubs itself out, and so you get the unmarked state fed back in, and so out comes the marked state again.
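The oscillation can be made concrete numerically. Here is a small sketch of my own, iterating the self-referential form x = -1/x obtained from x^2 + 1 = 0 and feeding each answer back in, as described above:

```python
# Iterating x -> -1/x: neither +1 nor -1 is stable under the map,
# so the value alternates forever -- the numerical analogue of the
# paradoxical "this statement is false".

def step(x):
    return -1 / x

x, trace = 1.0, []
for _ in range(6):
    trace.append(x)
    x = step(x)

print(trace)  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

Each substitution flips the sign, exactly as the blackboard substitution of +1 and -1 does.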
* * *
Well, you see here the paradox which was overlooked by Russell, who wasn't a mathematician, although he was senior wrangler, and by Whitehead, who was. Although--well, Russell was a mathematician, but he wasn't a man of mathematics. Whitehead was a man of mathematics. Russell knew the forms, but he actually had no instinctual ability in mathematics. Whitehead actually had. But Russell, being a stronger character, was able to program Whitehead, and you will see this if you examine the last mathematical work Whitehead wrote, which is called the Treatise on Universal Algebra with Applications, Vol. 1. I asked Russell about Vol. 2--I said I had never been able to get Vol. 2, and Russell said, "Oh, he never wrote it." So it's all sort of a mystery. But the mathematical principles of algebra, in the usual complicated way, are set out, including the Boolean algebras, in this volume produced in 1898, an only edition. By that time Russell, who was the stronger of the two characters, had got together with Whitehead to do Principia Mathematica, which nobody was ever going to digest . . . It was a very ostentatious title, because they had chosen the title which Newton had used for his greatest work.
Incidentally, it is an extraordinary thing in the academic world--people are very silent about these things--but it was a very, very presumptuous title, I think, to take for this work. [Inaudible comment, to the effect: "Hasn't 'Laws of Form' been used?"] Oh, no, nobody has used that title before--no, sir. If I had called it "Laws of Thought," that was used, many people have used that title, but it was not laws of thought. Oh, no, you are on the wrong track, sir. I am not being presumptuous in taking that title . . . I have called the book what it is. I have not done what Russell/Whitehead did and taken a very great book and called it by the same title. That is totally different.
Now, this is what they overlooked in the formulation of the Theory of Types, which simply says you mustn't do this. However, both Russell and Whitehead had done it to get their wranglerships, get their degrees. But they had not done the simple thing of reducing this equation to this to see exactly what it was.
In fact, if you go to the Boolean forms and use something like this--there's your output--and you take it back in, input there, and these are transistors used in a particular way, you have what is called a memory. And if you put "minus" instead of "plus" there, x^2 - 1 = 0 instead of x^2 + 1 = 0, now what we have here, back in this form here, is our equation x = 1/x. Now we'll put it all in brackets and we'll take out the answer. Now we have exactly the same thing. And just as this is a memory circuit, if this is the marked state here, that must be the unmarked state. And if this is unmarked state, we've got no marked state here, so this will be marked. And we have a marked state feeding itself back into there, and if you rub that out and this goes unmarked, you still have marked here, so it remembers. Equally, if you now put a marked state here, that must be unmarked, and then you can take that off and it doesn't matter, because now, since you have got unmarked and unmarked, this becomes marked, and this, you remember, is unmarked. Similarly here, if you put "plus one" for x in x = 1/x, you get plus one over plus one equals plus one: +1 = 1/+1. There is no paradox. You can also find a different answer for x, and that is minus one: you get one over minus one equals minus one, -1 = 1/-1, so that's all right too. So you have, in effect, a memory circuit, and if you put it this way, you can see that. You have an equation with two roots, and this is similarly an equation with two roots. Whatever root you get out, you put back in, and it remembers itself. If you are getting out "plus one," it feeds plus one in there, and it remembers it's plus one. You have a thing to knock it off and turn it into minus one; it feeds a minus one into there and out comes minus one, here, and it remembers it's minus one. Any equation of the second degree that is not paradoxical--that goes through two stages and not one--is the same, and this is a way of producing a memory circuit electronically.
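The memory behaviour can be sketched in software. The following is a hedged gate-level illustration of my own, not the circuit actually built: with an even number of inversions in the feedback loop (the x = 1/x case) the circuit holds whatever state it was last put into, which is essentially the SR latch of digital electronics.

```python
# Cross-coupled NOR pair: the output is fed back into the input,
# and with inputs removed the loop keeps returning its own last
# value -- it remembers.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q):
    """Settle the cross-coupled NOR pair and return the stored bit."""
    for _ in range(4):          # iterate the feedback loop to a fixed point
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(1, 0, q)   # set ("knock it" into the marked state)
print(q)                # 1
q = sr_latch(0, 0, q)   # inputs removed: the loop remembers
print(q)                # 1
q = sr_latch(0, 1, q)   # reset
print(q)                # 0
```

The two stable states of the loop play the role of the two roots of the non-paradoxical second-degree equation: whichever one you feed back in, it stays.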
It is exactly analogous to this memory circuit numerically. And where, in fact, you put it back, instead of here, you put it back through an odd number of inversions, such as one, now you have a paradoxical circuit. Because whatever it gets out it feeds back in, and it changes. And if you turn that into "minus," x^2 = -1, you now have a paradoxical equation. It can't remember; it just flutters.
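The paradoxical case can be sketched the same way, on my own account: feed the output back through a single inversion and no state is stable.

```python
# One inverter fed back into itself: whatever comes out is negated
# and fed back in, so the output alternates on every pass -- it
# "just flutters", the circuit analogue of x^2 = -1.

def ring(q, steps):
    trace = []
    for _ in range(steps):
        q = int(not q)      # the single inversion in the loop
        trace.append(q)
    return trace

print(ring(0, 6))  # [1, 0, 1, 0, 1, 0]
```

In hardware this is a ring oscillator; the "flutter" is the oscillation that the imaginary value stabilizes into.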
* * *
Suppose it were an electric bell; in fact, here under our very noses all the time we have in mathematics the mathematics of the electric bell. And to show how the human mind works, in all the mathematics textbooks it says there is no mathematics of the electric bell. Here it is, all before our eyes. The simplest and most obvious things are the last and hardest to find, because we have to get so awfully complicated before we get there.
MAN: Are those analogous to positive and negative feedback?
SPENCER BROWN: Yes. It's all straight feedback. A positive feedback remembers itself, a negative feedback oscillates. We have got the mathematics of the oscillator.
* * *
How often do you use this operator, i = sqrt(-1), which is derived from the paradoxical equation? Now, why is i used so much? Because i is the state that flutters, is the oscillation. This has been totally overlooked in mathematics, that i is in an oscillatory state. Because in order to get over this paradox of x-squared equals minus one, we see that we can't use any ordinary form of unity, so we invent in mathematics another form of unity and we call it i, which is the root that satisfies that equation. And the root that satisfies that equation is that you have plus one, minus one, and here's a state between; and the root that satisfies that equation, whatever it is, it isn't. And this is why i is so useful in dealing with that kind of curve--because it is, by its very nature, that kind of curve. i is an oscillation.
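This reading of i can be checked directly with complex numbers. A sketch of my own: under the map x -> -1/x, the real candidates +1 and -1 flip on every application, while i sits exactly still, because it is the value that satisfies the self-referential equation.

```python
# i satisfies x = -1/x exactly (it is a fixed point of the map),
# whereas the real unities oscillate under it.

def step(x):
    return -1 / x

print(step(1j) == 1j)            # True: -1/i = i, so i is self-consistent
print(step(1.0), step(-1.0))     # the real candidates flutter: each maps to the other
```

So i is precisely the "state between" that the oscillation settles on: the one value the feedback loop carries through unchanged.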
Time and Space
It is really an oscillation defining time; but it is the first time, and, therefore, being the first time, the oscillations are without duration, so the wave has no shape at all. Just as the space of the first distinction has no size, no shape, no quality other than being states. This is one of the things that tend to upset people. It is part of the mathematical discipline that what is not allowed is forbidden. That is to say, what you don't introduce, you can't use. And until you have introduced shape, size, duration, distance, whatever, you can't use it.
In the beginning of Laws of Form, we defined states without any concept of distance, size, shape--only of difference. Therefore the states in Laws of Form have no size, shape, anything else. They are neither close together nor far apart, like the heavenly states. There is just no quality of that kind that has been introduced. It's not needed.
The same with the first time. The first time is measured by an oscillation between states. The first state, or space, is measured by a distinction between states. There is no state for a distinction to be made in. If a distinction could be made, then it would create a space. That is why it appears in a distinct world that there is space.
Space is only an appearance. It is what would be if there could be a distinction.
Similarly, when we get eventually to the creation of time, time is what there would be if there could be an oscillation between states. Even in the latest physics, a thing is no more than its measure. A space is how it is measured; similarly, time is how it is measured. The measure of time is change. The only change we can produce--when we have only two states--the only change we can produce is the crossing from one to another. If we produce an expression, like the ordinary expressions in the algebra, we have to make the crossing. We have to do something about it. We have to operate from the outside. If we produce that cross that feeds into itself, now we don't have to do anything. It is a clock, just as an ordinary distinction is a ruler. A ruler makes or defines space, and a clock defines time. In making our first distinction all that we have done is introduce the idea of distinction. We have introduced nothing else. No idea of size, shape, distance, and so on. They do not exist, not here. They can be constructed, and they will be, but not yet. They are what happens when you feed the concept back into itself enough times.
Again, when you first construct time, all that you are defining is a state that, if it is one state, it is another. Just like a clock, if it is tick, therefore it is tock. But this time is the most primitive of all times, because the intervals are neither short nor long; they have no duration, just as these states have no size.
There were some books written about time by a man called J. W. Dunne that I read when I was a schoolboy. I realized that he must be right. I also was sufficiently aware of the social context to go along with the general opinion that he was off his head. He wasn't. He was dead right. Time is a seriality, and he was quite right. In order to get a time such as the one we experience, you have to put it back on itself, because in our time you have duration, which you can measure; and you can only measure the duration with another time. In the first time, you have no time in which to measure how long your duration is, and so, naturally, you can't have any duration. Time is something you have to feed back into itself several times. Like the space of this room, where you can actually measure it--you have to have space to measure space.
LILLY: Is that frequency of oscillation either zero or infinity?
SPENCER BROWN: It is neither. No, it has no duration at all. Just as you can't specify the size of the states of the first distinction.
LILLY: So that it has no determined frequency.
SPENCER BROWN: No. It can't be infinite, it can't be zero. So, the space determined by the first distinction is of no size.
HEINZ VON FOERSTER: It's just "flippety" and not frequency.
SPENCER BROWN: Yes, just "flippety."
MAN: And that's saying it could be any size you want.
SPENCER BROWN: No--you see, all this is a children's guide to the reality, "as if it had some size." It is not right to say it could be any size you want. Because you have to learn to think without size. Anything like that is misleading, just as it's misleading to say this can be any duration you want. It doesn't have duration. It just don't have it. Just like the void don't have quality.
GREGORY BATESON: What about the "then" of logic? "If two triangles have three sides, etc., then" so-and-so. The "then" is devoid of time.
SPENCER BROWN: Yes. There is no time in logic, because there can't be time without a self-referential equation, and by the rule of types, which is now in operation in the defining of current logic, there is no feedback allowed. Therefore all equations in logic are timeless.
BATESON: So we add sequence without adding duration.
SPENCER BROWN: If you make a feedback, which Russell and Whitehead disallowed, you have a thing which if it is, it isn't.
MAN: A paradox circuit.
SPENCER BROWN: A paradox circuit, yes. In putting it this way, this is the mathematics of it. I can put it in numerical mathematics, it's the same paradox. Make something self-referential, it either remembers or it oscillates. It's either what it was before or it's what it wasn't before, which is the difference between memory and oscillation.
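The remark that a self-referential form either remembers or oscillates can be sketched in a few lines (the names MARKED, cross, and feed_back are illustrative, not from the transcript): with an odd number of crosses in the feedback loop the value flips at every step, an oscillation; with an even number it holds whatever value it started with, a memory.

```python
# Sketch of re-entry with two values, assuming the cross simply
# swaps the two states of the first distinction.
MARKED, UNMARKED = True, False

def cross(state):
    """The cross taken as an instruction: step to the other state."""
    return not state

def feed_back(state, n_crosses, steps):
    """Feed the expression back into itself, applying n_crosses
    crossings per step, and record the trajectory of values."""
    trajectory = [state]
    for _ in range(steps):
        for _ in range(n_crosses):
            state = cross(state)
        trajectory.append(state)
    return trajectory

print(feed_back(MARKED, 1, 4))  # odd re-entry: flips each step (oscillation)
print(feed_back(MARKED, 2, 4))  # even re-entry: holds its value (memory)
```

This is the same distinction the closing note to this session draws between re-entry at an odd level and at an even level.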
WATTS: In introducing the word "before," haven't you introduced time? You have a sequence.
SPENCER BROWN: I have to apologize, because you realize that in order to make myself understood in a temporal and even a physical existence, which by convention is what we are in, remember, I have to use words about the construction of the physical existence in order to talk about forms of existence that do not have these qualities. And it is not easy--this is one of the obstacles put in our way. Basically, to do what I am attempting to do is impossible. It is literally impossible, because one is trying to describe, in an existence which has certain qualities, an existence which has no such quality. And in talking about the system, the qualities in the description do not belong to what we are describing. So when I say things like, "To oscillate, it is not what it was before; to remember, it is what it was before," I am describing, in our terms, something that it don't have. But, by looking at them, you can see.
This is why in all mystical literature, people say, "Well, it is absolute nonsense." It has to be absolute nonsense because it is attempting to do this. But it is perfectly recognizable to those who have been there. To those who have not, it's utter nonsense. It will always be utter nonsense to those who have not been to where the speaker is describing from.
The theory of communication is absolute nonsense. There is no reason whatsoever why you should understand what I am saying, or why I should understand what you are saying, if I don't recognize from the blah, blah, noises coming out of your mouth, that mean nothing whatever, where you have been. You make the same noises that I make when I have been there, that is all it is.
For example, Rolt, in his brilliant introduction to the Divine Names by Dionysius the Areopagite, begins by describing the form at first, and then he actually describes what happens when you get the temporal existence. It is all the same thing, but he is describing it in terms of religious talk, theorems become angels, etc. When he comes to the place--and he puts it most beautifully--having described all the heavenly states and all the people therein, etc., he says, "All this went on in absolute harmony until the time came for time to begin." This is quite senseless. But it is perfectly understandable to someone who has seen what happens, who has been there. One cannot describe it except like this. It is perfectly understandable. He had described the form and then he had done that, and this is the time for time to begin.
Mathematics and Its Interpretations:
Nots and Crosses
There is just one question that I have been asked to answer, and I think it is something that you, Gregory, asked, wasn't it? To do with "not." Was the cross--the operator--was it "not"? No, it ain't.
If I can, I'll try to elucidate that. I am reminded of one of the last times I went to see Russell and he told me he had a dream in which at last he met "Not." He was very worried about this dream. He had a dream, and he met "Not," and he couldn't describe it. But by the time we are using logic, we have in logic "not."
We say: a implies b. I am assuming that we know the old logic functions. You can describe this, ~a, as "not a." Now that is not--that is a shorthand for "not" in logic. "Not" in logic means pretty well what it means when we are talking, because after all, logic is only mildly distinguished from grammar. Just as we learn after reading Shakespeare's sonnets that after all they are full of grammar. Some people seem to think that all we have to do is learn grammar to be able to write like that--not so. So, they're full of grammar--they're also full of logic.
Grammar is the analysis of the constructions used in speech, and logic is the analysis and the formulation of the structures and rules used in argument. Now in arguments, there are the variables, "if it hails, it freezes," and the forms; we can say in that case, it means the same thing as "either it doesn't hail, or it freezes," and find this is actually what "implies" means. We can break down "implies" into "not" and "or."
Now when we are interpreting, when we are using the mathematics...we write a for "it hails," and b for "it freezes." If it hails, then it freezes; either it doesn't hail or it freezes. And in the primary algebra we can write "a cross b," (a) b. The primary algebra does not mean that. We have given it that meaning for the purpose of operation, just as we may take a whole system of wires, electric motors, etc., and we can put it into a mathematical formula, or we can take some cars and weights, etc., and put them in one of Newton's formulae for finding acceleration. But the formula is not about cars, and so on and so forth; nor is this formula about statements in logic. Just as here we have used a to represent the truth value of the sentence "it hails," and b to represent the truth value of the statement "it freezes," we are in fact applying, because we recognize the structure is similar, the states of the first distinction to the truth values of these statements. We recognize the form of the thing. And in fact, although "not" is in this case represented by the cross, the cross itself is not the same as "not." Because if it were--well, we can see obviously that it isn't, because in this form we have represented "true" by a cross and "false" by a space...if you represent "true" by a space and "false" by a cross, then wherewith our "not"? We have swapped over and identified the marked state with untrue this time, and the unmarked state with true. And here we have identified it with untrue. Change over the identification, which we may do, and now here is the statement. And if this were "not," this would now have two "nots"--but it is not "not." We have only made it representative of "not" for the purpose of interpretation, just as we can give a color a number and use that in an equation. But the number and the color are not the same thing. This is not "not" except when we want to make it so. But it has a wider meaning than "not" in the book.
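The interpretation step being described can be sketched as follows (a minimal sketch; the function names and the choice of Python booleans for the two states are ours, not the transcript's): under the reading "marked state = true," the form (a) b evaluates as (not a) or b, which is exactly material implication. The cross only represents "not" once that identification is chosen.

```python
# Sketch: interpreting the states of the first distinction as truth values,
# under the identification marked = True, unmarked = False.
def cross_val(v):
    """Crossing swaps the two states."""
    return not v

def implies(a, b):
    """The form (a) b, read with marked = true: (not a) or b."""
    return cross_val(a) or b

# "If it hails, it freezes" means "either it doesn't hail, or it freezes."
for hails in (True, False):
    for freezes in (True, False):
        assert implies(hails, freezes) == ((not hails) or freezes)
print("(a) b matches material implication under marked = true")
```

Swapping the identification (marked = false) is equally permissible, and then the same form no longer reads as "not a or b"--which is the point being made: the cross is not "not"; it only represents "not" under one chosen interpretation.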
WATTS: Well, it means that it is distinct from.
SPENCER BROWN: No, no--it means "cross."
Marked State/Unmarked State
If you go back to the beginning of the book, you see--you remember this is not what really happens, because nothing happens. We represent what doesn't actually happen but might happen if it could. We represent it in the following way: we may draw a closed curve to represent a distinction, say the first distinction. Now we have a form. And we will mark one state, so, in fact. The mark is, in fact, shorthand for something like that, because it is only a bracket we marked it with. If we don't mark it with a bracket, we find that we have to mark it with a bracket, as I show in the notes.
WATTS: Well, you have got it in the frame of the blackboard.
SPENCER BROWN: Never mind about that. Now, let there be a form distinct from the form. Let the mark of the form be copied out of the form into such another form. Let any such copy of the mark be taken as a token of the marked state. Let the name as the token indicate the state. I missed out a sentence. Let the token be taken as the name, and let the name indicate the state--right. Now, here, this indicates the state. We now derive our first equation from Axiom One--if you call a name twice or more, it simply means the state designated by the name by which you call it. So we have the first equation ()() = ().
Then, let a state not marked with a mark be called the unmarked state, and let any space in which there is no token of the mark designate the unmarked state. In other words, we did away with the second name. This is essential. It's the fear of doing away with the second name that has left logic so complicated. If you don't do away with the second name, you can't make the magic reduction.
BATESON: Are you saying that the name of the name is the same as the name?
SPENCER BROWN: No, no, no. I said here, if you call a name twice, it is the same as calling it once. Your name is Gregory. If I call you twice, it is still calling you.
What people have done is that they have given a name always to both states. There is no need to do that; you have got quite enough to recognize where you are, because you do a search, and if you find the mark, you know you are in the marked state. If you do a search and you don't find it, you know you are in the unmarked state. So that, mathematically, is all that is necessary. So you don't do the second thing. There has always been a fear, you see, of having a state unmarked.
MAN: How did the printer feel about this. It must have driven him crazy.
SPENCER BROWN: Oh, he didn't like it--he kept putting things in. The printer and the publisher went absolutely haywire because of equations like this, (()) = .
MAN: The American military documents, because of the number of pages that have to be printed, frequently have a blank page. And to be sure that nobody gets confused about it, there is always a statement on that page that says, "This page is deliberately left blank," which, of course, it is not.
SPENCER BROWN: You see, why it has taken so long for the Laws of Form to be written is that one has to break every law, every rule, that we are taught in our upbringing. And why it is so difficult to break them is that there is no overt rule that you may not do this--why it is so powerful is that the rule is covert.
There is no rule that is overt anywhere in mathematics which says this may not happen, it may not be done. And it is because I found no such rule that I gathered that it could be done, and that it must be done. If you don't do it, you are not doing the mathematics properly, and that is why it is all such a mess. This is only a social rule that you may not do it. And there is no mathematical rule that you may not do it; and in fact, you have to do it. Otherwise the mathematics is a mess and you can't get the answers because you are blocked.
* * *
Now to go on to what I was going to say, which is: next we want to use the mark, which could be a circle. We want to use it. We haven't, in fact, discovered its shape. In the second equation, we discover, really, what the shape is. And we'll see it is inevitable. Having marked one side--if there is no mark, then we know we are on the other side.
WALTER BARNEY: Those m's are outside the circle or inside the circle?
SPENCER BROWN: Well, this one is outside.
BARNEY: I wasn't clear on which is the inside.
SPENCER BROWN: In fact, it depends on where you are. This is already beyond what we have said mathematically, because, in fact, this is only an illustration. Just as, when you play Beethoven's music it is only an illustration of what Beethoven wrote. All mathematics in books is only an illustration of what cannot be said. This illustration is misleading because there is no outside or inside when you have drawn the first distinction. You have just drawn a distinction--we can illustrate it with a circle because it happens to be convenient. Then we mark one side, and we know, in this case, that it is the outside. But remember that in the mathematics there is no outside. There are just two sides. We have marked one of them, and if we find the mark, we know we're in that state. We call it the marked state because it is convenient to call it by something, which, having marked it, we'll say it is a marked state. Simply for something to call it. And having not marked the other side, we call it the unmarked state. That is all that is needed. We now have every concept we need.
First of all, we have taken the mark as a name. And if you call a name twice, you are simply indicating the same state twice, and indicating the same state twice is the same as indicating the same state once. Now, instead of just calling this m, let us give it certain properties. Let it be an instruction to cross the first distinction.
Now here is our illustration of the first distinction. Now this is why we've drawn this line on our blackboard, because here is an illustration of the first distinction. Here is a record of instructions referring to the first distinction--right. Now let m, the mark, be taken as an instruction to cross the boundary of the first distinction. So, if one is here, m says go there. If one is here, m says go there. O.K.? m is now not a name, so we can ring that for the moment, don't confuse yourself with that; m is now an instruction. And all m means is "cross." So whenever you hear or see it, you've got to step over the boundary. That is all it means. Now, we will produce more conventions.
* * *
We will say that we have got a number of crosses considered together, and these we will call "expressions." Now suppose you have this. We'll say--right--we'll represent m like that. And we'll say m means "cross," and we'll make a convention so that whatever is represented in here, you'll have crossed to get to what is represented out there. So if there is nothing represented here, absence of the mark indicates the unmarked state. You cross when you are in the unmarked state, and you find you are in the marked state. So out here, by representation, the value attributed to this mark will be the marked state, and that is the value we attribute to that expression.
Now let us put the marked state in here, and we can do that simply by putting another cross in here. Now the convention is that wherever you see nothing you are in the unmarked state. Wherever you see this, you must cross. So, here we are. We hear nothing, we see nothing, we are in the unmarked state. Our instructions now say "cross," so we cross, and then our second instruction says "cross," so we cross. So here we are, we started here and we have crossed, and we have crossed here, and so we can derive our second equation, (()) = . So all this says in mathematics is "cross." It does not say "not." It says "cross."
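The two equations derived in this session can be sketched together as a tiny reducer (an illustrative sketch, not from the transcript: it writes the mark as the bracket pair "()" and the unmarked state as the empty string, matching the bracket notation used above). Condensation, ()() = (), and cancellation, (()) = , are applied until the expression stops changing.

```python
# Sketch: reducing expressions of the primary arithmetic, written as
# strings of brackets.  "()" is the mark; "" is the unmarked state.
def reduce_expr(expr):
    """Apply (()) =   (cancellation) and ()() = () (condensation)
    repeatedly until the expression reaches a fixed point."""
    while True:
        new = expr.replace("(())", "").replace("()()", "()")
        if new == expr:
            return expr
        expr = new

print(reduce_expr("()()"))    # condenses to the mark
print(reduce_expr("(())"))    # cancels to the unmarked state (prints nothing)
print(reduce_expr("((()))"))  # inner pair cancels, leaving the mark
```

Every finite expression of the arithmetic reduces under these two equations to either the mark or the unmarked state, which is why doing away with the second name loses nothing.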
(End of first session.)
- Transmission of Spencer Brown's marks on the blackboard has been absorbed elsewhere in the system. We invite outside constructions. The general discussion concerns re-entry at an odd level and at an even level. If odd, as in (x), we get marked state in and unmarked out, an oscillation. If even, ((x)), we get marked in, marked out, a memory.
- Chapter 2, Laws of Form.
- See p.4 Laws of Form.
- For illumination of what follows, see pp. 82-83 in the notes to Chapter 2, Laws of Form.