Nolarbeit Theory Journal: Part One of Three (1972-1977)

by Arthur T. Murray

Threshold Parameters of Consciousness                            25 NOV 1972
     It should  be possible to ascertain what minimum requirements there are

for intelligent consciousness.   In dealing  with minima,  however, a strict

definition  or  detectability  of  intelligent  consciousness  may  be  all-

important.  Until we see exactly how strict  a definition  is necessary, let

us have  the use of language as our general criterion of the likely presence

of intelligent consciousness.

     We  can  list  what  in  present  theory  are  the  various  functional

mechanisms of  consciousness.  These include an input-sensorium, memory, and

motor-output.    In  Nolarbeit  theory  the  most  important   mechanism  of

consciousness is  the memory.  One might say that an external world to exist

in is also a necessity for  consciousness.    These  four  items,  then, are


          1.  an environment
          2.  an input-sensorium
          3.  a memory
          4.  a motor-output.

     In each  of these four realms there can be variations as to quality and

quantity.  There can also be qualitative and quantitative relationships from

one realm to another within the total system of consciousness.  For example,

the breadth of the input-sensorium might influence or  determine the breadth

of the memory.

     For  each  of  the  four  realms  we  can  consider characteristics and

variables.  For instance:

          I.  The environment realm.

               A.  How many of the four dimensions does it have?

               B.  How much order is in it?

               C.  How much disorder is in it?

               D.  What degrees of complexity are found in it?

               E.  How stable or predictable is it?

         II.  The input-sensorium.

               A.  How many senses are there?

               B.  How many discrete receptors are there for each sense?

               C.  With what speed and with what frequency do the senses    
                   convey information?

        III.  The memory.

               A.  What physical mechanism retains the memory-traces?

               B.  What percentage or amount of information from sensory   
                   input and from conscious activity is retained in memory?
               C.  Can the memory "hardware" be used more than once?

               D.  What, if any, are the limits in time or capacity of the
                   memory?

               E.  What aspects of unity and order are present in the
                   memory?

               F.  Are there divisions of the memory, such as "short-term
                   memory" and "long-term memory"?

               G.  Can the memory be monitored or read out?


         IV.  The motor-output.

               A.  How substantial must the motor-output be for
                   consciousness to exist?

               B.  What forms of energy can or should the motor-output
                   employ?

               C.  Must the motor-output be attached to or take the form of
                   a single, consolidated physical unit, so as to support   
                   an image of the existence of a unitary, localized being?

               D.  Must the motor output have its own compartment of memory
                   for activation?

               E.  Should the motor memory be only such that its effects
                   are readily perceivable by the input-sensorium in
                   immediate feedback, perhaps to further the "illusion"
                   of consciousness?

               F.  Can conscious thought be considered a part of the
                   motor-output?

               G.  Even if motor output is or were necessary to attain      
                   consciousness, is  continued motor-output necessary for  
                   continued consciousness?

     In discussing  "Threshold  Parameters  of  Consciousness,"  we  want to

investigate various limitations and minima involved with consciousness.

     One question  to consider  is whether  or not  the medium or content of

consciousness is delimited by some few and simple aspects of the universe as

an environment.   Perhaps  we could think of these reality-aspects as having

to do with geometry and dimensions.  When we are conscious, we are conscious

of something.

     Another supporting  consideration is that a unitary system or locus, if

constantly affected, even generated, by streams of input and output, cannot

attend to more than a few concerns at once.

     Any remembered thought can be summoned by a single associative tag, and

even a thought expressed in language  is  a  serial  string  that  goes from

element to element.
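The serial-string picture above can be sketched as a toy program (my own illustration; all names such as `store_thought` and `summon` are hypothetical, not anything from the journal): each remembered element carries a single tag pointing to the next, so a whole thought is recovered by following tags from element to element.

```python
# Toy sketch: a remembered thought as a serial chain of elements,
# each summoned by a single associative tag (hypothetical names).

def store_thought(memory, tag, elements):
    """Store a thought so the whole chain hangs off one starting tag."""
    for i, element in enumerate(elements):
        # Each element is filed under its own tag; the tag of the
        # next element is recorded alongside it.
        next_tag = (tag, i + 1) if i + 1 < len(elements) else None
        memory[(tag, i)] = (element, next_tag)
    return (tag, 0)  # the single tag that summons the whole thought

def summon(memory, tag):
    """Follow tags element to element, yielding the serial string."""
    while tag is not None:
        element, tag = memory[tag]
        yield element

memory = {}
start = store_thought(memory, "greeting", ["hello", "there", "world"])
print(list(summon(memory, start)))  # -> ['hello', 'there', 'world']
```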

     In the  case of intelligent consciousness, it may be that symbolism, in
the  form  of  words  and  language,  permits  the  manipulation  of  large,

complicated  blocks  of  information,  of  thoughts.  Nevertheless, when any

aggregate is dealt with, it will probably be  dealt with  a part  at a time.

(But we  must not  neglect to consider the idea of simultaneous processing.)

An aggregate can be dealt with as a  whole when  its constituent  parts have

been understood.

     When  an   aggregate  is   dealt  with   as  a   whole  in  intelligent

consciousness, it is likely that the  symbol of  both the  aggregate and the

object,  namely,  the  word,  is  used  as  a  platform of manipulation.  In

intelligent reasoning, it is  essential  to  have  bundles  of associational

tags.   A word with its coded structure provides an extremely economical way

of bundling the tags.
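The economy claimed here can be illustrated in code (a minimal sketch under my own assumptions; the lexicon entries are invented examples): a word acts as one handle for a whole bundle of associational tags, so reasoning over two aggregates costs a single set operation rather than one operation per underlying perception.

```python
# Toy sketch (hypothetical data): a word bundles many associational
# tags, so an aggregate is manipulated via one economical handle.

lexicon = {
    "dog": {"animal", "four-legged", "barks", "pet"},
    "cat": {"animal", "four-legged", "meows", "pet"},
}

def shared_tags(word_a, word_b):
    # Comparing two aggregates is one set intersection on the
    # bundled tags, not a traversal of every memory trace.
    return lexicon[word_a] & lexicon[word_b]

print(sorted(shared_tags("dog", "cat")))  # -> ['animal', 'four-legged', 'pet']
```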

     In intelligent reasoning, it is clear that the consciousness leaves the

primitive domain  of geometry  and dimensions and by means of symbolic words

deals with classes, generalities, and the abstract.

     Perhaps intelligence requires a fifth realm, symbolism.

     All five human senses have a certain sameness in that they all transmit

their information  along nerves.   It is obvious that for each sense we have

very many more receptors  than are  required to  get just  basic information

through the  sense.  Yet it is obvious that one receptor does not constitute

a separate sense and that a million are more than enough.   Besides, when we

use a particular sense we concentrate on a rather narrow point.

     In   consciousness,   memory  traces   are   manipulated  by  means  of

associational tags.  Unitary tags are  used to  distinguish aggregates.   If

the tags  are disjunctive,  then there  must be  disjunctive elements in the

very perceptions that form the memory  traces.   If the  sensory perceptions

were all  the same,  then there  could be  no discrimination by means of the

tags.  But the perceptions do vary and are different.   Yet  the perceptions

have to  be classified  in an  ordered manner  if the tag system is to work.

Classification must be according to similarities  and differences.   But for

the mind  to make  a classification,  or a  distinction, or a comparison, it

must first seize upon some small, uncomplicated feature.   Now,  if we dealt

with the  senses of  sight or  touch, we could deal with shapes or patterns,

with straightness, curvedness, angularity, and so on.  If a being dealt with

just a  single point  in touch,  it would not be able to distinguish between

variations.  But sight is  a  more  refined  sense.    With  sight  the most

intricate  distinctions  and  recognitions  can  be made.  Even in sight one

point as a receptor allows no distinguishing.   If we  imagine a sight-sense

having  just  an  array  of  receptors  on  a field or screen, we can easily

calculate the number of different images that can appear on a screen  with a

given number of points:

        # of          # of different

        receptors:    images possible:

                 1       2-1
                 2       4-1
                 3       8-1
                 4      16-1
                 5      32-1
                 6      64-1
                 7     128-1
                 8     256-1
                 9     512-1
                10    1024-1.
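The right-hand column follows a simple rule: a field of n binary receptors admits 2^n on/off configurations, and subtracting the blank field leaves 2^n - 1 distinct images. A quick computation reproduces the table:

```python
# Reproduce the table above: n binary receptors give 2**n
# configurations; excluding the blank field leaves 2**n - 1 images.

def images_possible(receptors: int) -> int:
    return 2 ** receptors - 1

for n in range(1, 11):
    print(f"{n:>10} {images_possible(n):>15}")
```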

However, in  the physical universe we experience or encounter both order and

repetition.  As the  number of  receptors on  the field  of our hypothetical

sight-sense increases  from just  one, the number of possible configurations

increases exponentially.  Once there are  several, but  not many, receptors,

it becomes possible on the field to represent basic geometric things such as

points, lines and curves.  These geometric things can be classed.   They are

utterly simple,  because two points define a line, and three points define a

curve.   More complex  geometric things  are built  up of  points, lines and

curves.    Perhaps,  therefore,  in  looking  for  threshold  parameters  of

consciousness, we could say that an  automaton can  not become  conscious of

curvature without  having at  least three sight receptors, and probably more

for contrast.  There ought to be two delimiters, a  minimum number  for bare

possibility  and  a  larger  number  above  which  more  receptors  would be

unnecessary.  The lower number should be pretty exact and  the larger number

should be rather indefinite, because functional success of classification or

recognition in between the two numbers will probably be statistical.  With a

more or  less certain number of receptors a classification becomes possible,

and then with increasing numbers of  receptors  the  classification becomes

more and more likely, until the likelihood cannot be increased further. 
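The three-receptor threshold for curvature can be made concrete with a toy check (my own sketch, not the journal's mechanism): given coordinates for active sight receptors, two points always lie on a line, while three points admit a collinearity test whose failure is the bare signature of a curve.

```python
def is_curved(points):
    """Toy threshold check: with fewer than 3 receptors, curvature
    is undetectable; with 3, a nonzero cross product signals a bend."""
    if len(points) < 3:
        return None  # below the threshold parameter: undecidable
    (x1, y1), (x2, y2), (x3, y3) = points[:3]
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    return cross != 0

print(is_curved([(0, 0), (1, 1)]))          # None: too few receptors
print(is_curved([(0, 0), (1, 1), (2, 2)]))  # False: collinear
print(is_curved([(0, 0), (1, 1), (2, 0)]))  # True: curvature
```

In between the two delimiters the test would become statistical, as the text says: with noisy receptors one would demand that the cross product exceed some tolerance rather than differ exactly from zero.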

     We can  use these  basic geometric  things to  examine the mechanism of

consciousness.    We  postulate  that  memory   traces  of   perception  are

continually being  deposited.   The question  now is  how an associative tag

connects to a memory trace.   (The  process  will  be  different  for simple

shapes and for symbols.)

     It may  be that an interplay of two senses is required to lay the first

tags.  Or it may  be  that  the  first  tags  are  laid  automatically  by a

mechanism incorporated "genetically" into the organism.

     The sense-interplay  idea runs  as follows.  Visually similar items are

likely to exhibit similarities also for touch or hearing.   But the organism

"mentally"  concentrates  on  one  area  of  one  sense.  Suppose there is a

standard tag connecting temporally  successive memory  traces of perception,

from one  moment to the next in time.  There would be a tag from each object

of mental concentration to the next, in a sort of chain.   Suppose that when

a visual point is sighted there is also a tactile sharpness felt.  Therefore

on sighting  visual points  a chain  of tags  that was  going through visual

memory would  move also  into tactile  memory.   A sort  of discontinuity or

differentiation would arise.  Suppose that there were  a sort  of harmony or

oscillation established  in the  mind in  question, such  that, in a certain

ratio, memory traces entered the  mind  (from  within  itself) interspersed

with the  actual and  present-time incoming sense perceptions.  That is, the

mind would automatically and continually be experiencing two phenomena:  the

present and  the past.   From  the past  would be summoned whatever was most

readily available  given the  action of  the associative  tags.   Thus in an

incipient mind  the activated  memory traces would be very recent ones.  How

would the mind reach any older memory traces,  not the  ones just deposited?

It looks  as though there would have to be some mechanism which would notice

change from one image to the next.  Suppose the imagery in  the visual field

changes only  occasionally.  Suppose that a number of changes have occurred,

and one more change occurs.   Now,  that  mechanism  of  the  mind  which is

feeding in  old memory traces interspersed with the new perceptions does not

have to be limited to going back just one memory-trace or one frame.  It may

be that  the internal  memory mechanisms function much quicker or many times

quicker than the input-sensorium.  Thus each newly made (newly sensed) frame

could  quickly  be  compared  with  several  or many older frames.  When the

above-mentioned  additional  change  occurs,  its  present-time  frame could

automatically be  tag-linked to  another change  frame seven or eight frames

back in the merely temporal, merely successive chain of frames.  Therefore a

new tag  is attached  which keeps  the significant older frame from receding

into oblivion.  The mechanism doesn't have to work on change, either; it can

work on similarity, which still implies change.
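The frame-comparison mechanism just described can be sketched as a toy program (all names hypothetical, and "change" reduced to simple inequality for illustration): each newly sensed frame is checked against up to seven or eight older frames, and an extra associative tag is laid across any detected change, keeping the significant older frame from receding into oblivion.

```python
# Toy model of the comparison mechanism: each newly sensed frame is
# compared with up to LOOKBACK older frames, and a non-temporal tag
# is attached across the first detected change.

LOOKBACK = 8  # internal memory runs many times quicker than input

def perceive(frames, tags, new_frame):
    index = len(frames)
    # Compare the new frame with up to LOOKBACK older frames.
    for back in range(1, min(LOOKBACK, index) + 1):
        old_index = index - back
        if frames[old_index] != new_frame:
            # Attach an extra associative tag across the change,
            # keeping the older frame retrievable.
            tags.append((old_index, index))
            break
    frames.append(new_frame)

frames, tags = [], []
for frame in ["A", "A", "A", "B", "B", "A"]:
    perceive(frames, tags, frame)
print(tags)  # -> [(2, 3), (2, 4), (4, 5)]
```

Swapping the inequality for a similarity measure would give the variant mentioned at the end of the paragraph, since any similarity criterion still implies a notion of change.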

     The consciousness model which has been developed so far today works, if

at all, because each newly incoming frame of  sensory data  is compared with

several or many older frames from memory storage.

     This  line  of  thought  touches  on  the idea of a force towards order

operating in the universe, and it  suggests that  a tabula-rasa  mind can be

self-organizing.   Of course,  the initial order in the mind in question is,

after a fashion, transferred from without.  In ordering itself,  the mind of

the automaton reflects the order which it encounters on the outside.

     In such  a model,  the important  mechanism is  that which compares and

differentiates.  There are a lot  of  intriguing  questions  involved.   For

instance,  does   the  self-organizing   or  self-ordering   mind  need  any

rudimentary order to start out with?  That is to  say, is  the self-ordering

process  self-starting,  or  does  it  have  to be primed?  In a biochemical

organism, it should be easy for a certain  rudimentary order  to be provided

genetically in the brain.

     In machine  hardware, it should be easy to set up an input channel that

compares new and old frames according to various simple criteria.  The fewer

criteria there are, the more we can say that the machine is non-programmed.

     There can  be various  pre-designed, automatic  mechanisms in the mind,

but still the content of the  mind will  be free  and non-programmed.   For

instance,  there   could  be  a  pre-designed  mechanism  of  attention  and

concentration, which normally might oscillate back and forth between present

perception  and  recall  of  memory-traces.   However, dynamic factors could

cause the attention to  swing  in  favor  of  the  external  world  over the

internal,  or  in  favor  of  a  fascinating  internal  line of thought over

external perception, or in favor of one external sense over another.
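The oscillating attention mechanism could be caricatured as follows (a hypothetical sketch; the "fascination" weight stands in for the dynamic factors named above): by default attention alternates evenly between present perception and recall, but a biased weight swings it toward one side.

```python
# Hypothetical sketch of the pre-designed attention mechanism: it
# oscillates between present perception and recall of memory-traces,
# with a dynamic weight that can swing the balance.

import random

def attend(fascination_internal=0.5, steps=10, seed=1):
    """Return a sequence of attention targets; 0.5 gives an even
    oscillation, higher values favor the internal line of thought."""
    rng = random.Random(seed)
    return ["recall" if rng.random() < fascination_internal
            else "perception"
            for _ in range(steps)]

balanced = attend(0.5)
absorbed = attend(0.9)   # a fascinating internal line of thought
print(balanced.count("recall"), absorbed.count("recall"))
```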

     The more functional, mechanistic differentiation there is in the mental

automaton,  the  more  capable  it  will  be  of processing and manipulating

complex data.  If there are several senses  at the  machine's disposal, then

one sense,  such as hearing, can be used extensively for processing symbols,

such as words.

     A basic idea for the mechanism that compares and distinguishes between

old and new  data arrays  is that it should have something to do with elementary

phenomena of order in the universe.  For instance, in the case  of sight the

elementary  geometric   patterns  should   be  important.     Perhaps  "pre-

programming" or "quasi-genetic  endowment"  will  give  the  machine initial

capabilities with  regard to elementary phenomena of order.  In hearing, the

elementary phenomena  might  involve  frequencies.    In  touch,  they might

involve texture, geometric patterns or pain.

     Of  course,  there  is  a  certain order already present because of the

physical layout of the receptors of each sense.   An  optical retina  has an

array of  receptors, and  so does the human skin with its tactile receptors.

Obviously the order of the retina of the eye has to remain stable for stable

vision.   It is  highly unlikely  that such a high degree of order as in the

retina or the visual  system can  have been  provided genetically.   No, the

order of  vision must have been developed in the course of experience.  This

ordering may be, however, a function of growth  and development  rather than

of memory.

                                                                 15 JAN 1973

Developments on "Threshold Parameters"

     In evolution,  the probable order of appearance of the four realms was:

1. environment, 2. input-sensorium, 3. motor-output  (perhaps simultaneously

with input-sensorium),  and 4.  memory.   We may  be able to build our model

according to evolutionary lines, maybe not.

     Memory is the essential ingredient for intelligent consciousness.  This

condition  probably  also  applies  to  the  motor  system of an intelligent

consciousness, that is, if  we do  not supply  a memory-bound  mechanism for

motor control, perhaps we then cannot achieve any freedom of action and only

brute-like stimulus-response phenomena will occur.

     Transitory stages.  In setting up our model,  among our  componentry we

may have  to include transitory and initiatory stages.  The item in mind now

is a random dynamics mechanism which would initiate activity of  the "motor-

output"  so  as  to  start  a  circular  chain of information flowing.  (The

process should be like  that of  an infant  lying in  its crib  and randomly

waving its arms.)
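The random dynamics mechanism can be sketched in code (a toy illustration under my own assumptions; the action names are invented): random motor twitches start the circular chain of information, with each act and its sensed consequence deposited as a memory trace.

```python
# Sketch of the transitory "random dynamics mechanism": random motor
# acts (the infant waving its arms) start a circular flow of
# information -- motor act, sensed consequence, memory trace.

import random

def babble(steps=5, seed=42):
    rng = random.Random(seed)
    memory = []
    for _ in range(steps):
        act = rng.choice(["wave-left", "wave-right", "kick"])
        sensed = f"felt:{act}"          # immediate sensory feedback
        memory.append((act, sensed))    # trace links act to consequence
    return memory

for act, sensed in babble():
    print(act, "->", sensed)
```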

                                                            24 JAN 1973


Minimal Thinking Systems

     To devise a minimal automaton that functions like a brain, if we
progressively reduce the number of elements that we would deem necessary,
starting with the totality of elements in a conventional brain, we might
arrive at simple submechanisms beyond which we could reduce no further
without losing the nature of a brain.
     An idea:  We can give the machine the capability of changing every
aspect of its own structure and organization.  This idea can be carried out
to varying degrees.  The idea would allow certain advantageous phenomena.
For instance, consider the notion of self-organizing.  We might let the
machine develop new ways to process its incoming information.  No old
process would have to be dismantled; it would just fall into disuse.
     So far we have enumerated such features of the automaton as:
          - input sensorium
          - memory
          - bouleumatic accumulator
          - random dynamics mechanism
          - motor output.

                                                                 23 JUL 1973

Psychological Insubstantiality

     The question is, how much being must there be for a volitional
intelligence to exist?  The amazing thing is that this is not a question
of mass or substance but of particularistic numerosity.  A rudimentary
intellect will consist of a minimum number of active switching elements.
It will not especially matter of what substance these switching elements
are constructed, but rather it will matter only that they do perform their
switching function properly and reliably.  Ontologically speaking,
therefore, their nature will be only such as to hold, transmit, or change
information.  Substantially they may be quite chimerical and intangible.
     Assuming that the processes of intellect are rather simple, it
should take rather few switching elements to perform the basic processes.
Importantly, the minimum required size of an aggregate of switching
elements involved in intellectual processes will probably be quite
independent of the numerosity-size of the various sensory channels also
involved in the intellectual processes.  That is, the size of the sensory
channels will probably happen to be much, much larger.  It is theorized,
anyway, that the intellectual processes function by reducing to simplicity
large, complex, or disorderly phenomena.  Large-sized sensory channels may
be necessary to initially grasp the phenomena, but their simple "handles,"
their intellectual "distillates," should be simply manipulable.
     Granted or assumed then that there is a small core of switching
elements necessary for the existence of volitional intelligence, we can
elaborate its description without either violating its lack of
substantiality or contradicting the supposition of its numerically small
core.  We must elaborate its description to allow it a real-time
historical role in the universe.  Certain mechanisms, either of the
intellect or attached to the intellect, must be capable of great extension
with regard to numerosity.  Among these mechanisms would be such things as
memory, volitional motor mechanisms, and perhaps bouleumatic accumulators.
     We can conceive of memory in artificial intelligence as an item
which can be expanded or even contracted to almost any desired extent.
Memory can be expanded to increase the tempo of life or the longevity of
the organism, or perhaps to widen the various sensory channels.  At any
rate, memory is that of which the interior cosmos of the organism is
built.  An intelligent organism with full and particular knowledge of its
own make-up could examine its own memory stores and consciously liquidate
any portions which seemed unnecessary or undesirable, perhaps just to
save space.
     In the above description of a volitional intellect, the central idea
is that the intellect can probably be quite minimal, that is, both simple
in design and small in the numerosity of its intellect-essential switching
elements.
     Note:  The above described intellect may seem to be too static, in
that all it seems to do is both to process whatever information is
available to it and to volitionally effect actions necessary for its
well-being and survival.  However, such an intellect can never become
static until its whole universe becomes static.

                                                                 27 JUN 1974

The Book of the Nolarbeit

     The freedom to build a language-computer is now greater than ever.
However, the project involves effort in several directions.  This notebook
is to be the central log of the project (Nolarbeit) this summer, although
I feel free to stray from this notebook at any time.
     The state of the project is that theory and knowledge have been
accumulated, plus some money in the sum of two to three thousand dollars,
and this summer is more or less free until September, and so now it is
hoped to begin working on some hardware.  Since I will be working in
several varying directions, I want to follow a documentary regimen in
order to be able on the one hand to record progress on all the subprojects
and on the other hand to leave off a subproject for a while and then
return to it at its furthest point of progress.  Budding ideas should be
recorded here, too.
     I feel that my first step will probably be to collect and read
through my accumulated theory.  Then I will probably rough out a general
model of what I want to build or construct with hardware.  One problem
here is that the theoretical concerns are right down close to the
engineering concerns.  However, the philosophy and theory can be put
through a process of stricture, and the practical engineering can be
governed by a policy of keeping things as general, as standard, and as
expandable as possible.
     (Later, around 11 p.m.)  I've gotten an idea from what I am doing as
almost the first step in the active pursuit of this project.  That step is
that I am preparing a list of what I call "Suggested Items for Nolarbeit
Folder File."  Already I have made a second, enlarged version of today's
first list.  My aim has been just to set up a file box to sort out the
various items of information collected or generated.  I discovered that
doing so is just like setting up my file box for teaching languages this
past year, except that the subjects included in this Nolarbeit file really
range far and wide.
     But now I see here sort of a general tool of inquiry in this process
of establishing the informational categories for my research.  The tool or
technique is to take a problem, state it in general terms (implicitly or
explicitly), and then divide the problem up into specific subdivisions to
be worked on.  After all, a problem is like a positive but incomplete
complex.  It may be incomplete in one of at least the following three
ways:  something is missing, something is damaged, or the infrastructure
is not understood.
     Somehow I get the feeling that this line of thought is connected with
what is in the book on abductive logic which I bought today on Norm's
advice at the U.W. Bookstore.  However, I have only looked briefly at a
few things in that book; I haven't read it yet.  At any rate, there is a
certain intellectual process at work here.  The general problem
"language-computer" does not automatically yield a list of fifty
subdivisions of the problem.  No, I had to start writing down one by one
things that came to mind as probably applicable to the general problem.

                                                                 29 JUN 1974

Suggested Index for Nolarbeit Folder File

Applications (Future)              Instinct
Archive                            Intellect
Attention                          Intelligence
Automata                           Learning
Bibliography                       Linguistics
Biology                            Logic Circuitry
Brain, Human                       Logic Philosophy
Brain, Nonhuman                    Logic Symbols
Clippings                          Mathematics
Coding                             Mechanics
Components                         Memory Technology
Consciousness                      Memory Theory
Control over Machines              Neurology
Correspondence                     Pain and Pleasure
Cost Analysis                      Parapsychology
Dreaming                           People of Note
Ego                                Perception
Electronics                        Philosophy
Embryology                         Pictures
Emotion                            Plans
Engram                             Problems
Entropy                            Problem-Solving
Environment                        Psychology
Evolution                          Randomness
Experiments                        Recursion Theory
Feedback                           Redundancy
Flowcharting                       Robotics
Freedom                            Security
Game Theory                        Semantics 
Genetics                           Serial and Parallel Processes
Geometry                           Servomechanism
Hardware                           Supply Sources
Heuristics                         Switching Theory
Holography                         Terminology
Hypnotism                          Time
Index                              Tools
Input/Output                       Volition

Evolution of Central Nervous Systems

     Reading again the paper "Threshold Parameters of Consciousness" from
25 NOV 1972, I can see the possibility of a CNS developing by evolution.
The paper says that four things are necessary for consciousness:  1. an
environment;  2. an input-sensorium;  3. a memory;  4. a motor-output.
The inputs and outputs seem to be a buffer between environment and memory.
     Well, we can think of memory developing first in evolution.  If any
cell developed which gave a consistent response to a certain stimulus,
then the ability to give that response constitutes a kind of
quasi-mechanical memory.  Of course, probably any cell that developed
also responded to certain stimuli.  However, probably cells became
differentiated in their responses.  Maybe cells developed with the main
purpose of just transmitting information.  For instance, it is easy to
imagine a simple animal where one (muscle) kind of cell serves mainly to
cause locomotion and another (neuronal) kind of cell serves mainly to
react to an external stimulus by in turn stimulating one or more
muscle-cells for locomotion.  I can even imagine a neuron-type cell that
both stimulates nearby muscle cells and also inhibits muscle cells on the
other side of the body lest they oppose motion.
     Where the motion of a simple animal has to be performed rhythmically,
I can imagine a network of neuronal cells developing to control the
rhythmic motion.  The control cells would exhibit the same rhythm within
their own network.  I can imagine networks of cells developing, where the
network of each specimen has to be trained by experience, for example, as
in the learning of a bird to fly.
     In trying to simplify the problem of designing the language computer,
we notice concern in the literature about the processes and the
organization of perception.  Visual perception presents us with enormous
complexity.
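The imagined neuron that excites nearby muscle cells while inhibiting those on the opposite side can be caricatured in a few lines (a toy sketch; the side names and signal values are illustrative only):

```python
# Toy sketch of the imagined neuron-type cell: it excites same-side
# muscle cells and inhibits those on the other side of the body lest
# they oppose the motion (hypothetical names and values).

def stimulate(side):
    """React to a stimulus on one side by driving locomotion."""
    muscles = {"left": 0, "right": 0}
    muscles[side] += 1                   # excite the same-side muscle
    other = "right" if side == "left" else "left"
    muscles[other] -= 1                  # inhibit the opposing side
    return muscles

print(stimulate("left"))   # -> {'left': 1, 'right': -1}
```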
     I am concerned here with at least two questions:  how much complexity
in perception can be done away with, and how much basic complexity is
necessary to produce an intelligent mind?  Perhaps relying too much on gut
feeling, I suspect that geometry is involved here.  By geometry I mean
those simplest patterns and connections, such as point, line, circle,
angle and so forth.  The numbers three and seven figure here, because
three is so elemental, and yet with three you can distinguish between
seven different units.
     You get the feeling that you can do a lot of slashing and paring of
the problem when you reflect that all the rigmarole involved with sight is
dispensable.  A human being can be blind from birth and still be highly
intelligent and just as positively conscious as a person with sight.  So
I'm not scared when I encounter these complexities with
pattern-recognition and special processing involving the retina and the
optic nerve.  I think that the major facet in a language computer is going
to correspond to hearing and speaking.  I almost get the feeling now that
I would be providing enough non-auditory perception if I just made a
simple robot consisting of two touch-perceptive hands mounted by arms to a
nondescript body on a small platform moving by electrically driven wheels
or pushing/feeling feet over a floor.
     Whatever the situation with modalities of perception is, I get
another feeling, this time to the effect that maybe there is an
ultra-basic, simple core to our conscious or intelligent systems.  This
notion has something to do with the idea of comparison.  We already assume
about a hypothetical system that we can grant it unlimited memory
capacity.  We can endeavor to grant it all of the five perception
modalities that humans have, and with the unlimited memory storage the
whole panorama of the system's perception over time can be stored up.
[An idea that crops up here is:  how about also granting unlimited
associative capability?
The intriguing idea here is that the unlimited associative capability is
conceptually very simple.  We don't at all have to be content with the few
tail-end bits of a byte of information.  Why, in an artificial mind you
could even have a machine that searched around and connected pieces of
information that hadn't been connected originally but look like they ought
to be connected.  (For all we know, maybe the human brain does have a
function that does just that, perhaps during dreams in sleep.)]
     Yet total storage and total association do not yet do the trick.  It
means nothing if pieces of information just shift around in meandering
streams within a system.  This rationale leads, I suppose, to an idea that
a measure of internal operation is going to have to be the production of
some kind of motor happening.  [The outward communication of internal
abstract thought might qualify as a measure of highly advanced internal
operation.]  When we consider the production of motor happenings, it seems
that there is going to have to be an elaborate, practiced, well-known,
associated internal motor system, so that inside the machine there will be
not only a picture of the outside world but also a sort of trigger-finger
picture of all the motor things that can be done to the outside world.
Both are learned pictures.  The perception picture encompasses both the
outside and the inner world, and maybe so does the motor picture, in that
we can do things like play music to ourselves in our minds.
     I get a feeling that the only reason why a human brain can function
intelligently at all is because the physical (and maybe the logical)
universe seems to come together somehow into that small but manageable
group of three (or seven) elements.  I get a feeling as though a brain
(mind) can really deal with only three to seven elements, but by means of
substitution or capitulation a brain deals with fantastically complex
things by a sort of proxy.
[An image suggests itself here of ten or more men all on stilts and all the men one above the other up into the sky, so that the top man moves around only as all the other men perform the same movement beneath him. The man at the top might then sort of represent a broad amalgam.] Imagine a mind-cathedra of seven elements (or three, however it turns out to be needed). No, let's say three elements. These three elements really represent utter simplicity. Supporting these three elements there could be analytical pyramids which lend structure and significance to anything occupying one of the three elemental positions. For example, linguistic pyramids could be built up to handle subject, verb and direct object. This triad is about as complex a view of the external world as we perceive anyway. We generally perceive the external world in terms of one thing moving or causing another thing. The complexity beneath the "tip of the iceberg" doesn't matter. You might say that all the underlying phenomena or attributes just sort of "go along" subconsciously, meaning that we are somehow quasi-conscious of things beyond the three elements, below the surface of our consciousness. To achieve some completeness now by also dealing with spatial things, we can see that the three elements cover a lot of geometry. If the machine is supposed to be perceiving shapes, with three main elements it can adapt to a line or a curve. Suppose it is presented with, say, a pentagon. Well, then maybe it would just use the three elements to perceive one of the angles. Two further ideas to discuss are comparison and extension from three unto seven. In dealing with a system of from three to seven elements, there are several ways we can look at it. Suppose the system were constituted as follows. There are seven elements. Three are operational, and the other four are like reserve. We might say that the three elements are points of attention.
From each triadic element association can dart down and up within the pyramid, but there could be a function whereby the three elemental positions obligatorily had to be in either constant or oscillating mutual association. Thus the contents by single associative tag could vary within a triadic elemental position from one moment to the next, but the association of one position to the other two would be unbroken. The whole concept of ego could occupy one triadic pyramid, a semantic verb another pyramid, and a semantic object the third pyramid. Obviously, some verbs (e.g. "anticipate"?, "congratulate"?) have such complicated (yet unitary) meaning that the whole meaning can't possibly all at once be at the unitary focal point of consciousness. If we expand the conscious focal point to encompass some complicated semantic meaning, then the focal point can no longer be incisive or directed or unitarily manipulable. So it would seem that each Gestalt has to be present as a pre-processed unit. The idea here is that maybe intellectual comprehension can only take place at a few cardinal hinge-points. If you don't hinge a Gestalt on a maximum of three points, then maybe it just can't be processed. But what does processing amount to? It would seem that production of any old motor happening is not enough. Plain unintelligent instinct suffices to link up a stimulus with a motor happening. No, I get the feeling that there is some sort of comparison process of a logical nature lying as a basic fundament to the operation of intellect. A comparison mechanism could work with seven elements. The procedure could be such that if you get three and three positions filled, then the seventh position also gets filled and the machine acts upon whatever associative tag fills the seventh position. It could be or become built in to the nature of the seventh pyramidal position that its filling carries along with it a sense that a positive comparison has been made. 
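The seven-position comparison scheme just described might be sketched as follows. The slot layout, the tag values, and the rule for veering along a discrepant position are assumptions for illustration, not a settled design.

```python
# Hypothetical sketch: slots 0-2 hold one triad, slots 3-5 the other.
# When both triads are filled and match, the seventh slot (index 6) is
# filled with a "match" tag, carrying the sense that a positive
# comparison has been made; on a discrepancy the associative course
# veers along the first discrepant position.

def compare_triads(positions):
    """positions: a list of 7 slots; None marks an unfilled position."""
    first, second = positions[0:3], positions[3:6]
    if None in first or None in second:
        return positions            # not all six filled: no comparison yet
    if first == second:
        positions[6] = ("match", tuple(first))   # seventh position fills
    else:
        for i, (a, b) in enumerate(zip(first, second)):
            if a != b:
                positions[6] = ("veer", i)       # veer along slot i
                break
    return positions

slots = ["point", "line", "circle", "point", "line", "circle", None]
print(compare_triads(slots)[6])   # ('match', ('point', 'line', 'circle'))
```

As the journal notes, the filling of the seventh position itself can serve as the machine's signal that a comparison action, rather than random filling of the six positions, has taken place.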
For comparisons of Gestalten involving fewer than three pyramids, there could be comparison in sequence, so that the first pyramid is compared with the fourth pyramid, and so on. If an action of comparison does not indicate sameness, then maybe the associative course of the machine would veer off along a tag filling one of the positions where there was a discrepancy with a corresponding position. Indeed, maybe it stands to reason that the course would have to veer along that one of the two positions which did have a tag filling it, because discrepancy would mean that only one out of a pair of positions would be occupied. There may arise a question as to how the machine is supposed to distinguish between an action of comparison and just random filling of the six elemental positions. It could be that the seventh position would play a role in the establishing of a comparison action. Remember, various processes (evolution, trial-and-error, conscious design) permit there to be almost any sort of complicated neuronal system backing up any one of the seven elemental pyramidal positions. We are conceptually dealing here with the building-blocks of logic, nature, and language. A system that can compare groups of zero to three units is a pretty high-powered system. If a system of seven would permit conscious intelligence, then that system could consciously compare groups of more than three units. We seem to encounter here a shade of recursion theory. If we can produce conscious intelligence by dealing with a small number of [30 JUN 1974] elements, then by a bootstrap operation our product takes over for larger numbers of elements. By evolution and embryology, a brain could grow to a point where there were myriad aggregates ready to perform the basic function of intelligence. Then, once one aggregate did perform the function, a rapid organizing process could make the rest of the brain subservient to the original aggregate.
Let us assume that the machine mind conceptually starts out with seven elemental positions. Perhaps these positions can float over neurons, but the main things about them are that they are inter-associated and they can compare one group of three units with another group of three units.

1 JUL 1974

Logic or logical operations may perhaps be accomplished by a brain in two ways. One way could be the elemental, fundamental way of comparison and the other way could be a way simply of reference to logical material already learned.

20 JUL 1974

Small-Scale Function

It would be quite easy to build a device which recognized circles or other tripunctual things. All recognition in a visual field hinges on a smattering of cardinal points, in that small area where we always look directly. Our very wide peripheral area is all just extra help; it does not comprise the essentials, the sine qua non of sight. I think that we have such a wide peripheral area because we as organisms exist at the center of an omnidirectional ball in space. Once we have rudimentary sight, which is also at the same time very advanced sight, it is good to have as much additional visual area for peripheral attention-getting as can fit on an orb like the eye or on a retina. Motion-detection and attention-getting can be done quite mechanically and over probably an unlimited area of visual perceptors. So we see here maybe three essential things:

          1.  Only a small central area is necessary for intellect-coupled
              sight.
          2.  Sub-intellectual functions such as triggers and reflexes can
              be accomplished over almost any desired breadth or area.
          3.  Spatial concerns such as "the omnidirectional ball" will
              cause a sensory modality to fill up any available space.

This foregoing discussion points up the probability of existence of two tendencies in brains, which tendencies probably make for very much larger brains than are necessary for intellect. These two tendencies could be for intellect-coupled perception and for reflex-coupled perception. Under reflex-coupled perception, as opposed to intellect-coupled perception, we can include reflex mechanisms, involuntary mechanisms such as breathing or heartbeat, and any mechanisms which work because of instinct or genetic design.
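The division of labor above can be given a toy sketch: a tiny central window for intellect-coupled sight, and a purely mechanical motion-alarm sweeping the whole wide field. The frame sizes, the window radius, and the alarm rule are all invented for illustration.

```python
# Hypothetical sketch of intellect-coupled vs. reflex-coupled sight.

def central_window(frame, cx, cy, r=1):
    """Only this small foveal patch around (cx, cy) feeds the intellect."""
    return [row[max(0, cx - r):cx + r + 1]
            for row in frame[max(0, cy - r):cy + r + 1]]

def motion_alarm(prev_frame, frame):
    """Reflex periphery: mechanically report every changed cell,
    over an (in principle) unlimited area of visual perceptors."""
    return [(x, y)
            for y, (pr, cr) in enumerate(zip(prev_frame, frame))
            for x, (p, c) in enumerate(zip(pr, cr)) if p != c]

prev = [[0] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[4][0] = 1                       # movement far out in the periphery
print(motion_alarm(prev, curr))      # [(0, 4)] -> attention drawn there
print(central_window(curr, 0, 4))    # the intellect then examines that spot
```

The point of the sketch is that the alarm scales to any field size, while the material actually handed to the intellect stays small-scale.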

Self-Organizing Systems

In simulating a brain, we will probably have to make trade-offs between experiential organization and genetically determined organization. A basic caveat to keep in mind is that an organized system of probably considerable complexity can be brought about by genetics and evolution. By this caveat I mean that we had better not just assume that some complex or far-reaching neuronal mechanism is too much to demand of a genetic origin. I would say that any gross function, however complicated, can probably be achieved genetically. However, there are certain functions which involve the fidelity of processed perceptions, which functions can probably not be managed genetically. Since gross functions are so possible genetically, we therefore have a form of game-rule authorization to employ pre-programmed gross-function mechanisms, however highly contrived, to facilitate self-organization in our language-computer. The reason that the mechanisms might be enormously complex and contrived is that we might call upon them to do some fantastic things. When you deal with neuronal mechanisms and switching circuits, you are really no longer dealing with material objects, but with pure logic. As recall permits, I would gradually like to enumerate some of these fantastic things which we could probably do with (to coin a word) autotactic systems, with autotaxis. One often-thought-of application involves memory. Suppose that visual perception went into a channel of visual memory. Suppose the visual image frames were pulsed into memory at the rate of ten per second. Suppose each frame consisted of a million yes-or-no dots. Suppose that each dot were memory-retained by a single neuronal unit, meaning either a neuron or a connective point between neurons. By now we have a choice in our design or simulation. 
Will the visual channel, with its cross-sectional area of a million bits, be laid out in advance genetically so that the image-frames just fall in a long series of prepositioned neuronal units, or will the visual channel actually be formed continually to receive data and in such a way that it grows in length along with the flow of data?
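The second of these design choices, a visual channel that actually grows in length along with the flow of data, might look like this in miniature, with the frame scaled down from a million dots to nine bits and all names invented:

```python
# Hypothetical sketch of a self-organizing visual memory channel that
# grows as image-frames are pulsed into it, rather than being laid out
# in advance genetically.

class GrowingChannel:
    def __init__(self, frame_bits):
        self.frame_bits = frame_bits
        self.path = []                    # memory path grows with the data

    def pulse(self, frame):
        """Retain one yes-or-no image-frame, extending the channel."""
        assert len(frame) == self.frame_bits
        self.path.append(tuple(frame))    # new neuronal units 'grown' here

channel = GrowingChannel(frame_bits=9)
for t in range(3):                        # e.g. at ten frames per second
    channel.pulse([t % 2] * 9)
print(len(channel.path))                  # 3 -> the channel is 3 frames long
```

A genetically prepositioned channel would instead preallocate the whole series of neuronal units and merely fill them in order; the sketch shows only the growing alternative.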

21 JUL 1974 Now it would be good to effect a joining of these two topics from yesterday, "Small-Scale Function" and "Self-Organizing Systems." The first of two main ideas is that it seems quite likely that all the intellectual processes, at the height and core of their functioning, can deal only with small-scale, simple material. Accordingly, if we encounter large aggregates (such as the visual field or a whole-body tactile field) it is likely that either most of the sense is peripheral, or highly complex aggregates are dealt with perforce by simplification (an idea which will force us to research parallel processing). The second of the two main ideas is that very highly complex genetic-type mechanisms can be used to further the above-mentioned simple intellectual processes. An example might be an attention-getting mechanism that makes the conscious intellect attend to some specific tactile area of the body, one hand or the other, for example.

23 JUL 1974

Possible Features of Language-Computer

          I.  Narrow Input Sensorium.
               A.  Sight.
               B.  Touch.
               C.  Hearing.
         II.  Comparison Ability.
        III.  Memory Tending Away from Restriction.
               A.  Limited time span (but as though unlimited).
               B.  Perhaps self-organizing ability.
                    1.  Ability to organize memory space for whatever
                        data are perceived or generated.
                    2.  Ability to construct and change associative tags.
                         a.  Tags through comparison mechanism.
                         b.  Tags through frequency of reference.
         IV.  Motor-Output.
               A.  A random stimulator of initial motor output (in
                   pseudo-infancy) so that the machine can become aware
                   of its motor capabilities.
               B.  Some gross pseudo-muscles.
                    1.  For interaction with environment.
                    2.  To engender sufficient variety of action that
                        simple language can be developed to describe the
                        action.
               C.  Locomotion.
                    1.  To impart a concept of identity or apartness from
                        the environment.
                    2.  To enhance language.
               D.  Communicative modality.
                    1.  Perhaps pseudo-speech.
                    2.  Perhaps direct transmission of internal code.
                    3.  Probably communication in a form that feeds right
                        back into the machine so that it can monitor its
                        own communication.
               E.  Bouleumatic accumulators for conscious control of
                   action.

23 JUL 1974
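The feature list might be restated as a configuration sketch. Every field name here is an invention for illustration, not a settled design.

```python
# Hypothetical configuration record for the possible features of the
# language-computer; all field names are illustrative inventions.

from dataclasses import dataclass

@dataclass
class LanguageComputerConfig:
    senses: tuple = ("sight", "touch", "hearing")   # narrow input sensorium
    comparison_ability: bool = True                 # II
    memory_time_span_limited: bool = True           # but as though unlimited
    self_organizing_memory: bool = True             # space and tags
    random_motor_stimulator: bool = True            # pseudo-infancy babble
    locomotion: bool = True                         # identity, apartness
    communication_feeds_back: bool = True           # monitors its own output

cfg = LanguageComputerConfig()
print(cfg.senses)        # ('sight', 'touch', 'hearing')
```

Even a sketch like this makes one design decision visible: the communicative modality is modeled as a feedback loop, so the machine hears what it says.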

Parallel Processing

Parallel processing might possibly be a key element in the construction of a language computer. Unlike willy-nilly reflex activity, parallel processing can be a process with that sort of freedom which we require in a conscious, intellectual mind. Parallel processing would mean that similar or dissimilar activities are going on doubly or multiply within a system. Parproc might pose difficulties with control and with system unity. We generally think of consciousness as a unitary activity within a brain or mind. Under parallel processing, there might be multiple activities going on, and yet only one of them would be the conscious activity. Problems of control and synchronization might arise if multiple processes are coursing through the mind and some go faster, others slower. Anyway, there is a kernel of a problem here. We are trying to get away from unfree reflex or instinctive action and develop free intellect. At present we are trying to reduce both the initial and the basic processes of intellect to processes of "small-scale function" as envisioned in an elementary-logic comparison system. Should there be just one such comparison system, or should there be "beliebig" (arbitrarily) many, so as to constitute parallel processing?
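The closing question can at least be posed in code: several comparison systems running side by side, with only one designated as the conscious activity at each moment. The two comparators and the selection-by-strongest-result rule are purely illustrative assumptions.

```python
# Hypothetical sketch: multiple comparison systems operate on the same
# six-bit input, but only one is marked as the conscious activity.

def run_parallel(comparators, bits):
    """Step every comparator; the strongest result is 'conscious'."""
    results = [c(bits) for c in comparators]
    conscious = max(range(len(results)), key=lambda i: results[i])
    return results, conscious

comparators = [
    lambda xs: sum(1 for a, b in zip(xs[:3], xs[3:]) if a == b),  # sameness
    lambda xs: sum(1 for a, b in zip(xs[:3], xs[3:]) if a != b),  # difference
]
results, conscious = run_parallel(comparators, [1, 0, 1, 1, 0, 0])
print(results, conscious)        # [2, 1] 0 -> the sameness-comparator wins
```

The synchronization worry raised above does not arise in this toy because the comparators step in lockstep; with genuinely independent speeds, some arbitration mechanism would have to be added.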

Comparison Mechanisms

More and more it seems as though the basis of any system of recognizing and understanding will have to be some sort of comparison mechanism. Our contention here is that comparison has to be done on utterly simple levels. When any one broad mass of data is compared with another broad mass of data, any judgment of similarity will have to be based on an analysis of each broad mass into simpler parts which can be compared with other simple parts. (See Arbib, "The Metaphorical Brain," 1972, pp. 75-78.) If we want to attack this problem from one extreme, that of the utterly simple, we will deal with information in switching theory. If our two comparand data masses could only have one bit of information, then, on the one hand, comparison would be simple, because either both masses would contain one bit and be identical, or not. On the other hand, with masses of only unitary size in bits, there would be no possibility of less-than-total differences. If we go further now and let each comparand mass have room for two bits, then there can be identity or less-than-total differences. For example, each mass might contain one out of two possible bits, and yet the order of their line-up might be experientially significant. (Idea: We may have two eyes for the purpose of initial comparisons while we are babies.) If we let each comparand mass have room for three bits, then we are still within the realm of the absolutely comparable under simple logic, but we have greatly increased the possibilities for less-than-total differences. Our amplified contention here is that with small numbers of bits we have the capability of comparison that is reliable, simple, accurate, non-exponential, and so on. Small groups of bits can be compared both abstractly (unto themselves) and with experientially fixed reference to order, spatial orientation, and probably time-frequency also.
This immediately previous sentence is to say that as long as the main intellectual process is free, then any number of unfree reflex or genetically determined processes can be attached in support of the free process. While it is obvious that comparison can be done surely and accurately with small groups of bits, it is also obvious that groups with thousands of bits can be compared only by analyzing them into small groups capable of comparison. (These considerations probably apply to visual and tactile data, but not to language data which are serially encoded.) Two questions arise here. First, how large in terms of groups of bits should the standard comparison mechanism be? (Second, how will large groups be broken down?) To answer the first question, we now get into an intriguing area of the self-organization theme. It is quite possible that there could be a standard minimal comparison mechanism with self-expandable capacity. Assuming that we are dealing with the utterly basic geometric things, then the standard minimal comparator could be based on a certain minimum level. If it then encountered a somewhat simple aggregate which could not be fitted into the comparison framework, then it might automatically be possible to telescope or enlarge the comparison framework up into a higher capacity. For example, this enlargement might be done to go from simple lines and curves to angles or ovals. Part of the main idea is, though, that you only get at a higher level by going through the standard minimum level. 28 JUN 1975
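The self-expandable "standard minimal comparator" might be sketched as follows, assuming a minimum capacity of three bits and an invented doubling rule for telescoping upward; the point that one reaches a higher level only by going through the standard minimum level is preserved in that every comparator starts at capacity three.

```python
# Hypothetical sketch of a standard minimal comparison mechanism with
# self-expandable (telescoping) capacity.

class TelescopingComparator:
    def __init__(self, capacity=3):
        self.capacity = capacity           # the standard minimum level

    def compare(self, mass_a, mass_b):
        """Count bitwise differences, enlarging the framework if needed."""
        while len(mass_a) > self.capacity:
            self.capacity *= 2             # telescope the framework upward
        # small groups compare reliably and non-exponentially:
        return sum(1 for a, b in zip(mass_a, mass_b) if a != b)

comp = TelescopingComparator()
print(comp.compare([1, 0, 1], [1, 1, 1]))                    # 1 difference
print(comp.compare([1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 1]))  # telescoped: 1
print(comp.capacity)                                         # now 6
```

The enlargement here is triggered by an aggregate that does not fit, as in the example of going from simple lines and curves up to angles or ovals.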

Language and World Logic

This evening I have been tabulating the vocabulary in a textbook for German One. I check each word to see if it is in Pfeffer's computerized wordlist of the 737 most frequent German words. It is amazing how unnecessary each single one of these 737 words seems to be for the general ability to speak language. I look at each frequent word wondering just how much this word will help a student to develop fluency. In each case I have to realize that the word will probably promote fluency only to the extent that the student needs that word a lot and finds it there in his mind when he needs it. A word promotes fluency if it helps to express ordinary experience and ordinary thought. Linguistic mentation seems to hinge not upon any single word of vocabulary, but just upon the presence of some few words and some few rules of structure. This makes it seem as though the psychic aggregate is reducible to its parts and each part could perhaps stand alone. The number of words available doesn't really matter, because they are just names to deal linguistically with things and actions. The number of rules need only correspond to the number of relationships to be expressed. The amazing thing about language is that within a mind it is so fluid. Objects (or their representations) which are quite inert in the outer world can be manipulated effortlessly in the internal world. Pairs of physical actions which actions could not themselves join to produce an effect can by the mediation of language join to produce a countless variety of effects. For example, a distant person can experience by language both that a train is coming and that there is a cow stuck on the track. He can then by language and radio cause the engineer to stop the train or go to a siding. At any rate, physical states which cannot themselves interact, can, if idealized in language or logic, interact first ideally and then physically as an outcome of the ideation. 
Seen that way, language becomes a sort of lysis of the physical world into ideational particles. The names of things are an abstraction from them. The rules of grammar are an abstraction from the relationships between things. If things are named and relationships are perceived, then ACTION is potentiated either in the ideational world alone or in both the ideational world and the physical world. A mind could be thought of as the vehicle of potentiation. In a mind, naming and relationship-perception automatically give rise to a flux of thought. The thought does not come from nowhere, but from the inherent logic of the perceived situation. We have here then not deduction or induction, but an interplay of logical quasi-forces. A mind automatically mingles and synthesizes logical inputs in a sort of release of logical tension. Thus it is seen apparently that language generates speech or thought only in a dynamic continuum of constant assertion or readjustment or operation of values held by the mind. Language then is a means of mediating a dynamic equilibrium among the propositions contained in logical inputs. The logic that language functions by becomes a logic of discovery because the logic synergizes the possible or probable relationships which the input elements could mutually enter into. Memory is readily seen to play a role here because the "possible relationships" are suggested forcibly to the logic machinery by the associative memory. When the "possible relationships" throw themselves up to the mechanisms which embody the rules of structure, those mechanisms then automatically generate statements of concern to the value-holding mind. All such new statements are linked to the whole mosaic of previous mentation. Statement generation is a process in which things that are physically inert become logically fluid. What actual activity is there in such mentation? 
The actual activity is the re-assembling of the various associations in new relationships, or in statements, which are expressions of relationships. Example: A. As the Greyhound bus stops, I see the name of the town. B. I remember that So-and-so lives in that town. C. The notion is generated that I can visit or phone So-and-so. 2 JULY 1975
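The Greyhound example can be compressed into a sketch of statement generation, in which an inert perception plus an associative memory gives rise to a new notion. The memory contents and the wording of the generated statement are invented for illustration.

```python
# Hypothetical sketch: physically inert facts become logically fluid
# when perception (A) meets associative memory (B) and a statement of
# concern to the value-holding mind is generated (C).

memory = {"that town": "So-and-so lives in that town"}   # B: prior memory

def mentate(percepts):
    """Perception plus forced-up associations yields new statements."""
    notions = []
    for p in percepts:                     # A: the perceived elements
        if p in memory:                    # the memory forces up a relation
            notions.append("I can visit or phone So-and-so, because "
                           + memory[p])    # C: the generated notion
    return notions

print(mentate(["the bus stops", "that town"]))
```

The generated statement does not come from nowhere: it is the re-assembly of stored associations into a new relationship, exactly the activity named above.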

Geometric Logic

On 28 JUN 75 a treatise on "Language and World Logic" was expounded which meant to show that structure rules in language are more important than any particular vocabulary items. The structure rules, however, are an orderly reflection of the same order (sequence, causation, relationship, etc.) which exists in the external world. The elements that fit into the language structure rules make for very simple sets with quite few members. For instance, a very typical sentence would consist of a subject, a verb, and a direct object - just three elements. It will probably have to be a central feature of any archetypal language computer that all the quasi-mental processes are based on and have at their heart the strictly defined manipulation of aggregates no more complex than the simple geometric items such as point, line, and circle. We might say that a neuronal or switching-circuit mind can "primary-process" only simple aggregates, although we do not decide yet what is the maximum number of possible elements per set - three, or maybe seven, or beyond? We can speculate quite a bit as to how many elements a mind could primary-process; for instance, maybe the prime numbers are involved in some special way. That is, maybe the process can handle three elements or four elements, but not five or seven, because they are prime numbers. But to handle four might require a non-primary division. Of course, once there is a basic process such as geometric logic, it is then easy for a mind to operate recursively or exponentially. That is, a mind can operate with such speed and such pervasiveness that it may generate the deceptive appearance of large monolithic operations. The pseudo-monolithic operation could really be either a great number of very rapid operations or a great number of parallel operations. Let's suppose that there were indeed a mind operating on a basis of three-point logic.
This mind can perceive a set of three yes-or-no bits and it "writes down" each perceived set as an engram in permanent memory. (The present discussion "vel" rumination is going to be extremely simplistic, and full of digressions.) The mind can perceive such tri-sets sequentially at some rate, maybe around ten per second, but it doesn't matter. So far we have a mind repeatedly testing three reception-points and then passing the data into a permanent memory. In a previous year we have already done some work on how such a mind might set up associations among data coming in and data already stored in the permanent memory. For the purposes of this present discussion we will probably have to rework or regenerate a theory of how association would work. We can now introduce to our model a tri-set of motor functions. Let's say that by activating any of three motor points respectively it can move forwards, or move backwards, or revolve clockwise. We may or may not elaborate now on the motor output, because in past theory it involved such complicated features as "bouleumatic accumulators," but we should be mindful of its likely existence. We can impute to our mind-model the ability to perform any strictly defined, automatic function or operation upon the data with which it deals, incoming or stored. This notion fits in quite well with the geometric logic theory - in fact, it is the reason for the theory, because we want to reduce elaborate mental operations to a fundament of utterly simple operations. It would be nice if we could devise a way for the machine to build up inside itself mechanisms more complicated than those with which it starts out. For instance, we are minimally going to have to postulate some sort of association system, with which the machine is "born" as it were. 
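The model built up so far, a mind perceiving tri-sets of yes-or-no bits into a sequential permanent memory, plus a tri-set of motor points, might be sketched as follows. The perception-to-motor coupling rule at the end is pure invention; the journal deliberately leaves the motor side open.

```python
# Hypothetical sketch of the three-point-logic mind-model: tri-set
# perception written down as engrams, and a tri-set of motor points.

FORWARD, BACKWARD, REVOLVE = 0, 1, 2     # the three motor points

class TriSetMind:
    def __init__(self):
        self.memory = []                  # sequential permanent memory path

    def perceive(self, tri_set):
        """Test the three reception-points and write down the engram."""
        assert len(tri_set) == 3
        self.memory.append(tuple(tri_set))

    def act(self, tri_set):
        """Illustrative only: fire the motor point of the first 'yes' bit."""
        for point, bit in enumerate(tri_set):
            if bit:
                return point
        return None

mind = TriSetMind()
for frame in [(1, 0, 0), (0, 0, 1)]:      # perceived at some rate, e.g. 10/sec
    mind.perceive(frame)
print(mind.memory)                        # [(1, 0, 0), (0, 0, 1)]
print(mind.act(mind.memory[-1]))          # 2 -> REVOLVE
```

What the sketch omits, as the text goes on to say, is the association system with which the machine would have to be "born."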
But that means there are three or four sets involved with or in the machine: input sensorium; association network; memory; and possibly motor output; plus possibly a fifth "set" which would be the whole external environment, which could teach or "inform" the machine. Our present "for-instance" is the idea that these three or four or five sets could perhaps yield constructive inputs of such a nature as to "transcomplicate" the association network, the "Bearbeitungsnetz." Suppose that the permanent memory is a sequential path going away from the "Netz," the association network. We might by "heredity" give the machine the ability to lay down a secondary or tertiary memory path, or any number of them. Thus the machine might have a secondary memory path which it would use for the products of its internal functioning, its internal "reflection." It might have a memory path associated with the use of its motor output, even in such a way as to constitute a true but versatile "bouleumatic accumulator." The main point right here is, however, that any additional memory path, co-bonded to the mind by means of an associative network, constitutes a device for building up additional processing structures. We digress here now to discuss the methodology of how theory is built up and enlarged from a pre-existing structure. Knowledge can generally be organized. When we learn new knowledge in a field, it fits into a relationship of new knowledge to prior knowledge like foliage to the branches of a tree. When we are just theorizing, not experimenting, how do we develop new knowledge out of a seeming void? Perhaps elements of the prior knowledge suggest further things, extensions of the prior tendencies. A logical structure is extended by a branching-out process, but the selection of the valid branches is dependent upon their valid re-alignment with the universe at-large. A prior structure may suggest all sorts of extensions, but the valid ones are the ones which work. 
The valid extensions can be found by testing the set of possibilities for valid re-alignment with the universe at-large. Thus even in this discussion it is proper to state many digressions in order to go back through them later for either acceptance or discarding. A system operating under geometric logic, which is absolutely well-defined, should be especially capable of establishing valid extensions to any structure which it holds. We may now digress to discuss the topic of how the human mind handles such wide sensory input channels as sight, hearing, and touch. These vast channels can probably fit under the notion of geometric logic, that is, the perceptions can probably be ultimately based upon simple aggregates of the order of geometric logic. Synthesis and analysis both play roles here. We might say that any synthesis is "superfurcated" over several analyses, or that the analyses are "subfurcated" under a synthesis. When our mind beholds a visual scene, we are conscious of the whole scene before us at once. A skeptic to our theory might ask how we can see the whole scene at once if a neuronal mind is based upon small geometric aggregates. There are several distinctions to be made. Though we are conscious of the whole scene, our attention always is focused on some one point or spot in the scene. Our attention can dart about, but it is always unitary, just as the human mind is considered to be unitary. Yet even while we attend to one spot, our mind is conscious of the whole scene. It seems that here is a case for parallel or simultaneous processing. Every punctual sight-receptor is probably associated in many ways with constellations of past experience which permit it to engage constantly in a flurry of associative activity while the mind beholds the visual scene. Therefore, while the mind is watching the scene, the whole receptive "plate" is seething with neuronal activity which goes ad libitum deep into subfurcated structures.
Probably there is even a sort of competition raging as to which spot of perception will successfully agitate for control of the process of attention and for the focus of consciousness. So even though there is a focus of consciousness, the total process can be a vastly broad phenomenon. Still it seems hard to imagine that one can be conscious of so much material all at once. There is a certain deceptive process, though. We attend fortuitously to whatever we want, and our attention darts all about, creating perhaps the impression of simultaneous rather than serial perception. If we define consciousness (elsewhere) as a certain activity, then we should consider the idea that consciousness actually "lives" in the great sea of perception. That is, consciousness creates itself partly out of its own perceptions. We might say that incoming visual perceptions do a sort of flattening-out of our consciousness. We should remember that operations flash many times per second in our mind, so that our wide-channeled consciousness can very well be just as much an illusion as the illusion of motion created by a motion-picture film. It is further important to remember that the very wide channel of sight is altogether unnecessary for the existence of consciousness. Since we can be conscious without any wide perception channels at all, we can be encouraged in this work concerning the "oligomeric" geometric level.

3 JULY 1975

Now we can digress upon "superfurcation." If a mind can deal with three points absolutely at once, that is, absolutely simultaneously and not merely consecutively, then there can be any number ad libitum of mechanisms for doing the tripunctual operations. That is to say, the limitations of mind must not be thought of as hardware limitations, but as logic limitations. A basic set of three points can have subfurcations under one or more of the three points. By association, an elemental point can really represent a large subfurcated aggregate.
We might consider the idea that neuronal, or switching, machinery can work both horizontally and vertically. We might consider that normal neuronoid operation upon a tripunctual set is horizontal. Then any operation involving separate levels of furcation would be called "vertical." An associative network would probably be free to disregard levels of furcation. We might consider that logical manipulation of superfurcated aggregates hinges upon salient features, made to seem salient by associative operation. When three different things made salient rise to the tops of their furcation pyramids, then perhaps a sentence of language in deep structure is generated, with the three different things finding expression as subject, verb, and direct object. Each thing finds a name, and although a name can be long, a name is still a unit. Perhaps in an advanced (non-infantile) mind the distinctions fade between levels of furcation, and there may be such boisterous neuronoid activity that large aggregates such as long sentences seem to be treated all at once and as a whole. But still the operations described by geometric logic theory might be seen as precedent and preconditional to such massive activity. Unhampered by biologic genetics, and returning to our model, we might construct a machine with a capability of expansion (or contraction) of any logic channel involved with it. We might start out with a sensory channel of three points, and then enlarge it by any number of points, as we see how the machine handles the various channels of various sizes. Of course, by our furcation theory, any input can be analyzed down to an oligomeric set governed by geometric logic. But with wider channels there can be a lot of automatic processing (or pre- processing) and we can inaugurate a sort of attention-mechanism within our model. An attention-mechanism must probably be backed up by the whole gamut of horizontal and vertical association networks, if not actually consist of them. 
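The rising of three salient things to the tops of their furcation pyramids, to find expression as subject, verb and direct object, might be sketched with invented trees and salience values; the scoring and the vocabulary are assumptions for illustration only.

```python
# Hypothetical sketch: each elemental position tops a furcation pyramid;
# a vertical associative operation makes one node per pyramid salient,
# and the three salient names surface as a deep-structure sentence.

def most_salient(pyramid):
    """Climb the subfurcated aggregate for its most salient node."""
    return max(pyramid, key=pyramid.get)

pyramids = {
    "subject": {"the ego": 0.9, "the body": 0.4},
    "verb":    {"sees": 0.8, "hears": 0.3},
    "object":  {"the train": 0.7, "the track": 0.6},
}

sentence = [most_salient(pyramids[slot])
            for slot in ("subject", "verb", "object")]
print(" ".join(sentence))                  # the ego sees the train
```

Each name here is a unit, however long; the complexity beneath each salient node simply "goes along" subfurcated, as the theory requires.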
Visualizing a mind now as an oligomeric logic machine, we are ready to make a comparison between wide input channels and extremely narrow channels. To start with, regardless of the size of the input channels, a mind will try to deal with large logical aggregates which it encounters in the external world. A vast external aggregate is a given. Whenever the channel is not large enough to cover the external aggregate, then the mind must attack the aggregate in a piecemeal fashion. This might mean reconstructing the aggregate in the mind after studying its parts. The process might be similar to the way we can get to know a whole city without seeing it all at once. If the input channel is wide enough to take in the whole external aggregate, then the mind has a certain advantage. This advantage might be illuminated by a theory of "telescoping" of the furcation levels. That is, with a wide channel of input, it may be possible by organization to select out purely logical features regardless of their physical dimensions. Every flashing application of processing machinery can generate one or more associations. The association network can climb up and down furcation levels until the most salient features of the aggregate are distinguished. The oligomeric geometric logic can still have operated here, because it is built into all the mechanisms at work. The amphidromic association network allows an aggregate to be understood at whatever levels it is analyzed. The same network allows abstraction by soaring up to the most generalized levels. We may now digress upon those common optical illusions where our perception of a drawing seems to fluctuate back and forth between two interpretations of the same drawing. The fluctuation could easily be due to oscillation within an associational network. It is the job of such a network to produce a "most-salient" interpretation. However, for the illusory drawings there are two highly "salient" interpretations.
The oscillation could take place because when one "salient" result-association is formed, it tends to become unitary with respect to its relative power of commanding associations, and so the other competing result-association, with its multiple "threads," becomes statistically dominant, and so back and forth. If the associative "valence" of an achieved result-association did not tend to sink towards unity or whatever, then we might find it difficult ever to remove our attention from a stimulus.

Scratch-Leaf 3 JUL 1975

     - superfurcation, subfurcation?
     - All sense avenues just specializations of tripartitism.
     - Geometric logic favors "unity of mind."
     - input channel-width of automatic expansion coupled with memory
       channels of a width determined solely by the association network.

4 JULY 1975

Self-Organizing Mechanisms

At this stage in the preparational research, it is necessary to consider three systems:

          1.  A system to analyze sensory input and do two things with it:
               A.  Associate it with information stored in memory.
               B.  Put it into memory in such a way that to it, too,
                   associations can be made from future input.
          2.  A way for the organism to build up internally analytic
              systems more complex than those it starts out with.
          3.  A way for the organism to use trial-and-error or the process
              of elimination to accomplish number two above.

5 JULY 1975

The design order or request for #2 or #3 above (from yesterday) might be just that the system has to build up an internal recognition system capable of detecting the second (or higher) reception of a certain input set of a certain complexity. In other words, the organism is self-organizing in an ever increasing way. It is only spurred on to develop increased recognitional capability when it encounters a perception for which it does not yet have the ability. Obviously, such a system might have to obey the law of "Natura non facit saltum." That is, it might have to build up its recognition capability step-by-step, in such a way that certain complexities can be recognized (or compared or analyzed) only if the organism has by chance been led up to them by means of intermediate complexities.
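The step-by-step self-organization described above can be sketched in code. This is a hedged illustration of the "Natura non facit saltum" rule, with complexity crudely measured as pattern length; the class and its behavior are my own assumptions, not the journal's design.

```python
# A sketch of the step-by-step self-organizing idea: the organism adds
# a recognizer for a pattern of complexity n only if it already knows
# some pattern of complexity n-1. All names are illustrative.

class StepwiseRecognizer:
    def __init__(self):
        self.known = {()}   # starts out recognizing only the empty pattern

    def encounter(self, pattern):
        """Try to recognize; if unknown, learn it only if led up to it."""
        pattern = tuple(pattern)
        if pattern in self.known:
            return "recognized"
        # "Natura non facit saltum": learn only one step beyond the known.
        if any(len(k) == len(pattern) - 1 for k in self.known):
            self.known.add(pattern)
            return "learned"
        return "too complex"

r = StepwiseRecognizer()
print(r.encounter("ab"))   # a jump of two steps: refused
print(r.encounter("a"))    # one step beyond the known: learned
print(r.encounter("ab"))   # now reachable: learned
print(r.encounter("ab"))   # second reception: recognized
```

Note how the "second (or higher) reception" of an input is exactly what the system can detect, and how it is spurred to grow only by encountering something just beyond its current ability.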

6 JULY 1975

Scale of Input to Self-Enlarging Processor

Information that comes into the mechanical organism will be processed in a way which tries to link up the new information with past experience. Obviously, that link-up must be a two-ended channeling, in the sense that the organism must be able both to direct the whither-goings of new information and to gain specific access to any required stored information. Retrieval or re-use of information is possible only if there is a discriminatory channel of access to that specific information. I see two ways of access that will get straight to specific information. One way would be to find information by the code of what the information actually is. That is, information would be accessed by the logic of what the information actually is. Geometric shapes could be accessed that way, or words of a human language. The information itself would be the code to its "address." This method might compare with what in electronics is called "Content Addressable Memory." The second way of access would be access by means of an arbitrary associative "tag." It could be assigned because two things happened at the same time, or because one closely followed another. With associative tags it might be good to imagine a device which can automatically lay down tags anywhere. One might say that "code-access" is more refined or more manipulable than "logic-access." For example, words are more manipulable than pictures. However, it may be true that raw input data are never perceived directly as code but must always be sorted out first according to logic. That is, light or sound is basically a physical phenomenon, not a code phenomenon. It is cumbersome and unwieldy before it gets changed into code, but afterwards it is very handy. So we might say, "Logic comes first, and code comes second." The frontiers of perception are logical, and in the heartland of the mind we use code. I think that the major sense we must tackle is sight.
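The two access routes just distinguished can be sketched side by side. Below, "code-access" is modeled as retrieval by the item's own content (as in content-addressable memory), and tag-access as retrieval by an arbitrary tag laid down when two things co-occur. The class and its method names are illustrative assumptions.

```python
# Sketch of the two access routes: "code-access," where the information
# itself is its address, and access by arbitrary associative "tag"
# assigned because two things happened at the same time.

class Memory:
    def __init__(self):
        self.store = set()   # code-access: the item is its own address
        self.tags = {}       # tag-access: arbitrary tag -> list of items

    def engram(self, item, tag=None):
        self.store.add(item)
        if tag is not None:
            self.tags.setdefault(tag, []).append(item)

    def by_code(self, item):
        """Retrieve by what the information actually is."""
        return item if item in self.store else None

    def by_tag(self, tag):
        """Retrieve whatever shares the tag, e.g. by simultaneity."""
        return self.tags.get(tag, [])

m = Memory()
m.engram("triangle", tag="t=1")   # a shape and a word arrive together
m.engram("DOG", tag="t=1")
print(m.by_code("DOG"))           # found by its own content
print(m.by_tag("t=1"))            # found by the shared associative tag
```

The design choice mirrors the journal's remark that code is handy once established, while tags can be "imposed capriciously or arbitrarily from without."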
Sight can be two- or three-dimensional, but the other senses are rather lineal. Through sight we can garner a lot of information to work with using code. Through sight we can know objects more directly and we can observe myriad relationships. The great think-tanks are working on pattern-recognition, so obviously it is a field of much research. If we work on sight as a major sense-channel, we can keep everything simple and work with minimal parameters. We can get to the topic in the above title, scale. For sight we basically need a two-dimensional field of punctual receptors. The main question of scale is how many points we need. According to our theory of "telescoping" from 3 JULY 1975, above a certain unknown size a visual field just permits the treatment of greater physical dimensions, not greater complexity.

          XXXXX     OOOOO     x x x x x
          XXXXX     OOOOO     x x x x x
          XXXXX     OOOOO     x x x x x
          XXXXX     OOOOO     x x x x x
          XXXXX     OOOOO     x x x x x

          x x x x x     . . . . .
          x x x x x     . . . . .
          x x x x x     . . . . .
          x x x x x     . . . . .
          x x x x x     . . . . .

Suppose our model from 3 JULY 1975 had a visual perception field of twenty-five points as type-written above. That field would be its window upon the external world, and it could see shapes or figures that passed into that field. According to our theorizing from 4 & 5 JULY 1975, we might be able to add on peripheral points to that first visual field as we went along. Having a 25-field like this entails having a memory channel consisting of many slices, each with at least twenty-five points. Or does it? What can we say about this visual field? First of all, its activation is pulsed. It is not just steadily transmitting the state of all its receptors, but rather it sends individual pictures at the rate of a certain interval, perhaps every tenth of a second, or whatever the experimenter wants. It is pulsed so that the pictorial logic of one discrete point in time does not merge with the logic of another time.
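The pulsed activation of the 25-point field can be sketched as follows. The function name and the tick-based timing are assumptions for illustration; the essential point from the text is that the field emits discrete frames rather than a continuous stream, so the logic of one instant never merges with another.

```python
# Sketch of the pulsed 25-point visual field: instead of steadily
# transmitting receptor states, the field emits a frozen frame at
# fixed intervals. Names and intervals are illustrative.

def pulsed_frames(receptor_states, interval_ticks=1):
    """Yield snapshots (frames) of a changing field at fixed intervals."""
    for t, state in enumerate(receptor_states):
        if t % interval_ticks == 0:
            yield tuple(state)   # freeze the 25 receptor values at this pulse

# Two instants of a 25-point field (1 = stimulated receptor):
field_t0 = [0] * 25
field_t1 = [0] * 25
field_t1[12] = 1                 # point thirteen, the central receptor

frames = list(pulsed_frames([field_t0, field_t1]))
print(len(frames))               # two discrete, non-merging pictures
print(frames[1][12])             # the central receptor lit at the second pulse
```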
Of course, it doesn't matter where the pulsing takes place, either at the sender or the receiver. Let's say that the utterly middle receptor were the main point of visual perception. Numbering from left to right and from top to bottom, we can call it point thirteen. From point thirteen we might begin our process of building up the internal perception or logic mechanisms. Of course, we want to supply a minimum organization by "heredity," and then let the machine carry on from there. If we like, we may let the laying-down of memory engrams occur only when a significant internal logical function transpires. That is, if the machine achieves something in sorting out an input frame, then we could have the machine record the result. Of course, if we had a constantly erasing memory loop, then we might let just anything get engrammed. If we can devise our sought way for the machine to construct its own internal order, then we would not need to pre-order the perception of, say, three points, such as, say, numbers 12, 13, and 14. We have to have a field, a comparator, and a memory. It is up to the comparator to decide whether any image, total-frame or part-frame, is the same as any image in memory. Only then can other associations enter into play. A rather potent comparator could consist of only five points, say, numbers 8, 12, 13, 14, and 18, and it might look like the following typewritten image:

                              x
                            x x x
                              x

Such a five-point comparator would allow the detection of lines or breaks in lines along four directions: horizontally, vertically, and two criss-cross directions (or orientations). Examples are:

            x         o         o         x         x
          x x o     x x o     o x x     x o x     x o x
            x         x         x         x         o

7 AUG 1976

     - The seven points of geometric computation are the bottleneck conduit
       through which all ratiocination must pass at all levels.
     - One facet of ratiocination is the analysis of new data and then
       comparison with reference data.
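The five-point comparator can be given a runnable sketch. Shown here detecting only the horizontal and vertical strokes through point thirteen; the classification labels and the function itself are my own illustrative assumptions, not the author's exact mechanism.

```python
# Sketch of the five-point comparator: receptors 8, 12, 13, 14, 18 of
# the 25-point field form a cross around central point thirteen.

CROSS = (8, 12, 13, 14, 18)      # 1-indexed receptor numbers

def compare(frame25):
    """Classify the cross-shaped sample of a 25-element frame."""
    top, left, center, right, bottom = (frame25[i - 1] for i in CROSS)
    if left and center and right:
        return "horizontal line"
    if top and center and bottom:
        return "vertical line"
    if center and not (top or left or right or bottom):
        return "isolated point"
    return "no simple line"

frame = [0] * 25
for p in (12, 13, 14):           # light up a horizontal stroke through 13
    frame[p - 1] = 1
print(compare(frame))            # horizontal line
```

A break in a line would show up as one of the remaining on/off combinations of the five sampled points, as in the typewritten examples above.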
The ability to compare universally requires the breakdown or analysis of data to that point where they are not yet identical but where utterly simple features can be said to be or not be held in common. Things and classes of things must lend themselves to "isologic" comparison by a process of bringing them within the scope or framework of the simple geometry. It is assumed that, since things in the universe and also ideas have parts, they are to be analyzed in terms of their most basic distinguishing features. Perhaps first they must be analyzed into their most basic natures, such as whether they are a number, an idea, a shape, a quality (such as greenness), or a physical thing. I guess we are trying to say here that things have differences, but that by virtue of similarities things fit into classes, and that the ultimately basic classifications of the universe are not great enough in number to overload or swamp the simple geometric logic of ratiocination. If classes become large at non-basic levels, then a separate process in ratiocination can count and keep track of their elements. Even if there were or are more than seven basic facets of reality, either the extra ones could be ignored or else a bootstrapping technique of virtuality could accommodate them all. If we construe ratiocination as meeting all things through a septet, then we can start designing a minimal universe, because where all things are reducible to septets they might as well be septets.

8 AUG 1976

Does a fixed-position eye mechanism add extra data? In a comparator there must be a play-off, somewhere, among capacity, speed, and specificity.

     - It must function all in one step, not in a series of steps, so that
       its comparison results are instantaneous.
     - The greater its capacity in numbers of elements, the more logical
       classes it can compare and process instantaneously.
     - When the size of an array reaches a large enough number of elements,
       then classifications cease to hold up with respect to exact
       configuration of elements.

A comparator of a million elements is logically possible, we can imagine it, but it would be totally useless for classifying patterns of up to a million elements, because it could not cope with slight variations. So somewhere a ratiocinator extracts pattern from a massive array. My main thesis here is that such extraction has got to occur by means of an unmistakable comparator mechanism. Although I say "unmistakable," the search for pattern can certainly flit about testing various alternatives, but the final adoption would be a logical structure exact in every element. (Of course, an added mechanism could allow variations within a pattern.) My thesis is furthermore that with a very few elements a comparator can handle all basic logical structures and patterns, so a ratiocinator might as well apply a simple comparator to perform basic encounters, to classify or "recognize" basic patterns, and then apply the same simple comparator to cope with the finer details and distinguishing features which lurk behind the representative features treated as primal elements in the most basic comparison made at first encounter. The above discussion seems to indicate that the simple comparator must indeed be able to cope with structures or "patterns" in which certain elements may be variable. On the one hand, variation must show up distinctions, but on the other hand variation must not nullify a valid comparison ("recognition"). When we set up a fundamental comparator, we must remember that it is just like a window on the world, and that it does not and must not limit the degree of possible logical complexity within the interior processing structures of the ratiocinator. That complexity will be limited by other factors, such as genetics. The comparator must be viewed as the absolute conduit of information.
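The requirement that variation "must not nullify a valid comparison" can be sketched as exact element-by-element matching with declared variable positions. The wildcard marker and function are assumptions introduced here for illustration.

```python
# Sketch of recognition that is exact in every element except those
# declared variable: variation shows up distinctions without nullifying
# a valid comparison ("recognition").

VAR = None   # a variable element: matches anything

def recognize(pattern, candidate):
    """Exact comparison, element by element, with variable positions free."""
    return len(pattern) == len(candidate) and all(
        p is VAR or p == c for p, c in zip(pattern, candidate))

stored = ("x", VAR, "x")        # a pattern whose middle element may vary
print(recognize(stored, ("x", "o", "x")))   # True: variation tolerated
print(recognize(stored, ("o", "o", "x")))   # False: a fixed element differs
```

The final adoption is still "a logical structure exact in every element"; the variable positions are simply part of that exact structure.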
Even if there are wide perception channels peripheral to the organism, to the ratiocinator, nevertheless an inescapable bridging of pure logic must occur between the outside world and the ratiocinator, and this bridging, since it is the source of all outside information, really lies at the heart of the ratiocinative process. One could almost say that ratiocination, understanding, is an instantaneous and immediate process, and that extended ratiocination is just an extended series of ratiocinative instants, none of which can exceed the "Umfang" or scope of the basic window process. Thus ratiocination is like an acid which is simple in itself but which can eat away even the largest structures. Of course, a ratiocinator can have such great interior complexity and such speed of operation that it becomes able to deal with massive internal aggregates (such as an abstract noun) in a process of virtuality, where an aggregate, after meeting certain tests (by associative tag), is quickly subsumed under a single element or two so that it can participate in a very fundamental logical operation. Of course, certain logical tags will keep an elementalized aggregate from losing its identity during operation, but I want to remind the reader that logical processing can occur only within the simple comparator mechanism. In contrast, there are no restrictions on associative tags, which may be imposed capriciously or arbitrarily from without and are not in themselves logical, although they may convey a logical content ("Inhalt") belonging to a "Gesamtbild." So far today, we have:

     - simple comparator
     - memory engram channel
     - tagging mechanism.

Assuming that all input data are pulsed and laid down in memory, how do the comparator and the tagging mechanism differentiate among the engrams and make classifications of engrams, giving them quasi-addresses?
It is probably safe to say that only what goes through the comparator becomes an engram, because, remember, the comparator is the absolute conduit of information. When an apparently broad slice of channel gets laid down, really it is an aggregate of comparator-processed structures laid down in hierarchy. It could very well be that associative tags are what intrude to set up the hierarchy. It then follows that associative tags parallel the inherent relationships between the primal elements of the fundamental simple comparator. The effect is like when you blow through a bubble ring and create a series of bubbles wafting through the air. The preceding paragraph is a breakthrough of sorts, because it suggests that tags are created when a slice of data goes through the comparator. Now, the actual physical operation of a tag-system ("tagsyst"?) is a function of the dumb, brute internal hardware and can be completely automatic, even if cumbersome. In the work of previous years, we almost tried to force a vision of the tagsystem as arising spontaneously out of the memory system. Even if the tagsyst does arise from the memsyst, in maturation it has to be a quite discrete entity, an absolute mechanism which cannot have its nature or organization changed by the data going through it. (Such a change might constitute hypnotism.) It is doubtful that the same tagsyst hardware unit can be used reiteratively to process all the distinguishing subsections of an incoming aggregate, and so, bingo! you have "parallel processing"; that is, differentiation into any one exceptor screen of a tagsystem must lead on into a whole series of tagsysts. Immediately here a wide variety of possibilities emerges for consideration:

     - Is there clocking during parallel processing?
     - Can a tagsyst handle a string of engrams, or must every engram
       contain a tagsyst?
     - Do tagsysts form automatically whenever novel data are experienced?
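The "bubble ring" idea, that tags are created as each slice passes through the comparator, can be sketched minimally. The function below is an assumption for illustration: each slice becomes an engram, and a temporal-succession tag is emitted as a by-product of the passage.

```python
# Sketch of tags created in passing: as each slice of data goes through
# the comparator, an engram is laid down and a temporal-succession tag
# links it to the previous engram. Names are illustrative.

def lay_down(slices):
    """Return engrams plus the succession tags generated in passing."""
    engrams, tags = [], []
    previous = None
    for s in slices:
        engrams.append(s)              # only comparator output becomes engram
        if previous is not None:
            tags.append((previous, s)) # tag: s closely followed previous
        previous = s
    return engrams, tags

engrams, tags = lay_down(["slice-A", "slice-B", "slice-C"])
print(tags)   # one tag per successive pair, like bubbles from a ring
```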
     - By a process of virtuality, does the frontier of awareness grow
       broader and broader, so that a ratiocinator can feel consciously
       aware of a whole panorama at once?

9 AUG 1976

As of yesterday, the Nommulta work is going especially well, now that we have moved into an area (basic ratiocination) where everything that we have been theorizing has remained of a nature easy to produce in actual hardware. That is, as we move along in theory, we keep conscious of how we could build each physical device. In previous years our work was obstructed or came to a halt when we would come to a point in theory where we could no longer imagine how to build an item. Of course, there's an underlying idea that whatever we can dream up, we can build. Or perhaps we would reach points where we could build the stuff, but we couldn't see where it led to next. For instance, we dreamed up the "bouleumatic accumulator" with all its attendant theory, as a physical device for achieving conscious volition. The "boulsyst" was a nice, intriguing concept all in itself, because it solved the tantalizing problem of how you could think of an action without willy-nilly performing that action with your muscles. Hopefully, the boulsyst solved the problem by subvectoring each volition into two separate memory approaches. If you just think about an action, it won't occur. But if you simultaneously think about the action and about willing the action, it does occur. The boulsyst idea is that, in one of the two approaches, probably the volition track, there is a sort of integrating or accumulating function, so that you can begin to think volition while sensing a trigger level. There is not a separate volition observing the trigger level, but rather the whole consciousness stays busily attuned to the approaching act of volition, and so you could say that the entire conscious mind causes an act of will. There is choice because the possibility of an action can be considered at length in advance.
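The bouleumatic accumulator described above lends itself to a small sketch. The trigger level, the class, and the return strings are all illustrative assumptions; the point carried over from the text is that contemplating an act does nothing, while pulses of willing accumulate toward a trigger.

```python
# A hedged sketch of the bouleumatic accumulator ("boulsyst"): merely
# activating an action's memory track moves nothing; the motor act fires
# only when the parallel volition track accumulates to a trigger level.

class Boulsyst:
    TRIGGER = 3   # arbitrary illustrative trigger level

    def __init__(self):
        self.volition = 0

    def think_action(self):
        """Contemplating the act alone never moves the muscles."""
        return "no movement"

    def will_action(self):
        """Each pulse of willing accumulates toward the trigger level."""
        self.volition += 1
        if self.volition >= self.TRIGGER:
            self.volition = 0
            return "motor act initiated"
        return "accumulating"

b = Boulsyst()
print(b.think_action())   # no movement
print(b.will_action())    # accumulating
print(b.will_action())    # accumulating
print(b.will_action())    # motor act initiated
```

The two-method split mirrors the subvectoring of each volition into two separate memory approaches: the action track and the volition track.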
The nitty-gritty of the boulsyst theory is that some memory tracks are actually causative of action, and that to summon or activate such a track in conjunction with an attained boulsyst trigger level is to initiate an act of the motor system. Once we had the boulsyst, we couldn't put it to work yet because we were still working on how to take data from a perception channel and process them into a memory track. Storage of data was not important if we could not figure out how to organize data and how to let them interact with other data. We began to think a lot about associative tags and about a minimal ratiocinative entity. For months and months under our bourgeois cover we contemplated the possible make-up of an absolutely simple, yet intelligent, mind. If minds and universes have parts, then let's make the simplest mind in the simplest universe, was the idea. Now, perhaps temporarily, we have in the area of ratiocination a concept which leads in many exciting directions at once. I go rather personally into such tangential narrations because this Nolarbeit is really a personal, unshared project (I can find no collaborators) and all this writing serves the personal purpose of capturing ideas in order somewhat laboriously to build a structure of ideas. I really rush to get certain things down. Ideally I might write while a tape recorder runs so that I can blurt things onto the tape before I forget the wording that came to me. This is a personal, hobby project, but I would like to achieve some results that would get someone else interested in what I'm doing. Then they might like to read my notebooks, but even so I am continuing to write what to myself is clearest and feels most comfortable, such as my slangy vocabulary and my concealing use of "we." Then let no one complain, because right now this material isn't written for a second party. 
I don't even have to separate the theoretical from the personal content, because later on I can index the content by date plus or minus paragraph. To indulge further, the beauty of it is that I am generating a mass of ideas in an utterly free process, with no constraints of deadline, money, or practical purpose, and it is not a sand-castle situation, but rather a real- life endeavor because everything is tied in with the idea that the proper study of man is man. Furthermore, it amazes me that I generate all this glib verbiage with slang, termini technici, and neologisms year after year. At any rate, basic ratiocination is leading in certain directions of theory, and it is beginning to look as though soon we might be able to bring together and combine a lot of separately developed subsystems. For instance, the bouleumatic system would be a major part of any complete, integrated model of a mind. (It bothers me to put such words as "model of a mind" into print, because they state so explicitly what this project is so hubristically all about.) After all, one of our ongoing research techniques was to try to list all the major necessary components of a mind. If we devise the components in detail one by one, eventually we will reach a finally problematic component, the solution to which becomes the solution to the whole. I notice that I am delaying a bit before plunging on to pursue the idea of basic ratiocination. On the one hand, I suspect that the theory is going to get pretty complicated now, and on the other hand I want to sort of stand and look around where I am. I suspect that I may be putting forth a theory for parallel processing in a mind. Such an idea warns of getting into extreme complexity either in theory or in hardware, but it also gives a feeling of success, because I was not really aiming at parallel processing, although the idea always floats in my mind as one of the problems of intellect. 
Rather, I was aiming at a single process, and by its own development it took me into the erstwhile scary area of parallel processing. I had been afraid of parallel processing because it never yielded to any dwelling of thought on it. I think I felt that I would be able to figure out the individual ratiocinative process but not the immensely broad process that humans have in things such as vision. Now my theorizing from yesterday suggests that there is one single way and that there is no distinction between individual and parallel ratiocination. After all, didn't we always make a point in our theorizing of the idea that consciousness always focuses on one thing or attends to one thing, but can be distracted by many things?

10 AUG 1976

To see where I am, I want to list what I consider to be my main developments in the whole project since 1965 or 1958:

     - pulsed memory track
     - associative tag system      } consciousness itself
     - slip-scale virtuality
                                   { use of language
     - the pull-string theory of transformational grammar
                                   { random-stimulus motor learning
     - bouleumatic system          { bouleumatic accumulator
     - basic ratiocination by simple comparator.

Unless there are other main developments back in my notes, the above list delineates the main areas in which I have both theorized and come up with promising results.

11 AUG 1976

When a slice of pulsed perception comes in, it must go through a comparator system and into a memory engram channel, although perhaps not in dependent sequence. There is some question as to whether the engram-channel is like slices of the raw perception or contains only the analytic results of the compsyst. How does an Eidetiker recall a whole scene? Of course, either or both methods could be tried empirically. The purpose of running the perception data through a compsyst is to analyze the slices and find similarities among slices.
At this point we are returned to the first sentence of the work from three days ago, asking whether a fixed-position eye mechanism adds extra data. Apparently, the orientation of an image can be fully specified just by reference to its particular analytic configurations, since, after all, every single possibility of configuration is represented. However, there might be advantages in a seven-point compsyst over a three-point compsyst, because the central element of the seven-point compsyst could be used as a reference point for orientation. Of course, the "upper" point of a three-point compsyst could also be used for a reference point, and further elements from an array could be associated with it at finer compsyst levels. If an "upper" element of three has an outside significance, then it can assume the same significance on the inside. Now, by what ways will the compsyst establish tags going to specific memory engrams? Offhand, I can think of three causes for tags to be assigned:

     - temporal succession of two slices
     - analytic similarity by compsyst
     - accompanying signal from other perception or ratiocination channel.

It must be remembered, during this theorizing about a single, initial compsyst, that the perception channel probably is a large array with one central focus area. Being large, the array will initially have many unitary transmission elements which start out not being associated into any pattern (of orientation), but which probably would gradually be added on to whatever order was created and radiated from the central focus. And so, the nearby existence of additional "wires" has to be kept in mind. Of the above causes for associative tags ("soctags" perhaps), the temporal succession one is probably the easiest to imagine, and it is also easy to theorize that temporal succession supplies a good causation for letting a system self-organize. However, temporal succession is not at all an instance of content-analysis.
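The three tag causations just listed can be combined in one sketch. Analytic similarity is crudely modeled as two slices yielding the same compsyst result; every name and data shape here is a hypothetical illustration, not the journal's design.

```python
# Sketch combining the three causes of "soctag" assignment: temporal
# succession, analytic similarity by compsyst, and an accompanying
# signal from another channel. All names are illustrative.

def assign_soctags(slices, analyses, other_channel):
    """Each slices[i] is paired with its compsyst analysis and any side signal."""
    tags = []
    for i, (s, a) in enumerate(zip(slices, analyses)):
        if i > 0:
            tags.append(("succession", slices[i - 1], s))
        for j in range(i):
            if analyses[j] == a:       # same compsyst result: similarity
                tags.append(("similarity", slices[j], s))
        if other_channel[i] is not None:
            tags.append(("signal", other_channel[i], s))
    return tags

tags = assign_soctags(
    slices=["frame1", "frame2", "frame3"],
    analyses=["vertical", "horizontal", "vertical"],
    other_channel=[None, "sound-A", None])
print(tags)
```

Note that the succession tags arise with no content-analysis at all, while the similarity tags require the compsyst's analytic result, matching the distinction drawn above.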
The second of the above causations, "analytic similarity by compsyst," promises to be more interesting. We may have to re-establish the 1967 notion of a "consciousness area" so as to have a putative mechanism which is at all interested in receiving notice of the formation of mental associations.

Scratch-Leaf from 11 AUG 1976

     sight-system
     sound-system
     comparator system
     tag system
     memory engram channel
     motor memory system
     motor muscle system
     bouleumatic system
     random dynamics system (for random stimulus motor learning)
     motor/sensor exterior representation of self
     an environment

Scratch-Leaf from "AUG 1976"

     tagsyst
     parsyst
     associative tag system
     slip-scale virtuality
     - temporal succession
     - analytic similarity by compsyst
     - accompanying signal from other perception channel

8 MARCH 1977

Although we can assume limitless features in our designing of an artificially intelligent computer, there are certain minima to be considered. First we can state that our units are switching-units. Now, suppose we want to establish what is the absolute minimum of hardware necessary for AI. Well, we don't have to think in terms of transistors, or integrated circuits, or ferrite cores, or even material weight. The nature of our components is more logical than physical. But the environmental universe can also be considered to be logical. Therefore the question can be stated: Out of a universe consisting of countless units with logical properties, minimally how many units must join together to form an AI capable of dealing with the universe? Intelligence does such things as analyzing generalities, recognizing identities, and solving problems. Basically it manipulates and processes logical structures. The things in the universe exist not as unique entities but rather as structured aggregates of simpler entities.
Idea: I bet you could have side-by-side scales, where on one side you would list progressively the numbers of units in universes, and on the other side the minimal numbers of units necessary to constitute an intelligence capable of doing all possible intellectual operations with the contents of those universes. I don't mean exotic operations, just the typical ones. We would have to establish just what operations the intellect must be capable of.

12 JUL 1977

Organizing the Nolarbeit

It should be possible now to start making the research work accrue along specific lines. For instance, there are the following topics:

     - basic ratiocination
     - volition systems
     - robotics
     - specific psycho-organism projects.

If the theoretical work accrues along specific lines, then it won't be so hard to get back into the spirit and niveau of the work after periods of absence. Record-keeping can become more organized. The following system for keeping track of generated documents is pretty clear. There should be a master file in which are kept dated copies of all documents in the time-order of their genesis. Thus it should be possible to go through the master file and read the serial history of all work on the Nolarbeit, even though the serial entries of documents might present a jumble of work pertaining to many different departments of the total program. This master file is meant to be as handy and useful as possible for the kind of reading which stimulates new ideas, so therefore as much as possible of its verbal content should be type-written. It will be proper to substitute in the master file typewritten copies of what were originally hand-written documents. Handwritten papers which become represented in the Nolarbeit master file by typewritten copies should be collected in a Nolarbeit manuscript storage file. The manuscript storage file can serve several purposes. Basically it is available for a check if any question arises as to the accuracy of a typewritten copy. It also serves as one more copy for insurance against loss of the informational content of the work-documents. Of course, the manuscript storage file will not be a complete history of the Nolarbeit, because some documents are originally typewritten and they therefore go straight into the master file. The master file serves as a source of photocopies for other files. There should be at least one master file insurance copy.
That is to say, the master-file should exist in duplicate for protection against irretrievable loss of information through such things as fire or inadvertent destruction. Any additional number of insurance copies as desired can be made by photocopying the main master file. There should be Nolarbeit topical files made up of photocopies or carbon copies of items kept in the master file. It would be in the topical files that the research work would accrue along specific lines. An item from the master file might be photocopied for insertion into more than one place among the topical files. A topical index of the master file should be maintained, with informational documents referred to by their dates of genesis and/or titles.

13 JUL 1977

Simplest-Artificial-Mind Project

Introduction

     This evening I've been reflecting on how we might construct the
simplest possible mental organism to show that we have produced artificial
intelligence.  I want to record some of the ideas so that later they can
serve as a springboard for further thought.

     We would go heavily into providing a language capability for this
machine.  I propose that we set up a semi-English language, so that the
human experimenter could readily understand and even think in the language
used by the machine.  We could call the language "Sax," from Anglo-Saxon.
Words would consist of one, two, or three letters.  We could set up
requirements for the use of vowels somewhat as in English.  For instance,
we could say that the middle letter of any three-letter word had to be a
vowel, or we could just say that each syllable had to contain a vowel.
The main idea is to preserve a semblance of natural language.  For
instance, "I TOK SAX AND YU TOK SAX."  The language Sax would be a strict
code of unambiguous letters.  The processes of using Sax would be strictly
analogous to the use of human speech.

     Utterances would be introduced to the machine electronically or
electromechanically.  The SAM (Simplest-Artificial-Mind) machine would
have the tabula rasa capability to perceive and remember and associate
utterances in Sax.  In other words, there would be a quasi-genetic memory
channel serving quasi-auditory purposes.  The SAM machine would also have
the motor capacity to generate utterances in Sax.  Whenever it made
utterances in Sax, it would hear its own utterances.

     The above establishments now allow us to state some possibilities.
We could use a random-activator process for the SAM gradually to become
aware of its own ability to make "sounds."  It would accidentally make
sounds and simultaneously perceive them.
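     The word-formation rules and the random-activator stage described
above can be illustrated in modern terms.  The following Python sketch is
purely illustrative: the five-word vocabulary is a hypothetical stand-in,
and the association table is only a toy model of the "bouleumatic
accumulators" the journal goes on to describe.  It checks the looser of
the two proposed vowel rules (every word of one to three letters must
contain a vowel), since the sample sentence "I TOK SAX AND YU TOK SAX"
includes "AND," whose middle letter is not a vowel.

```python
import random

VOWELS = set("AEIOU")

def is_sax_word(word):
    """Looser Sax rule: one to three capital letters, at least one a vowel."""
    if not (1 <= len(word) <= 3) or not word.isalpha() or not word.isupper():
        return False
    return any(letter in VOWELS for letter in word)

def is_sax_utterance(utterance):
    """An utterance is valid Sax if every blank-separated word is valid."""
    words = utterance.split()
    return bool(words) and all(is_sax_word(w) for w in words)

# A tiny hypothetical vocabulary for the babbling experiment.
LEXICON = ["I", "YU", "TOK", "SAX", "AND"]

def babble(trials=500, seed=1):
    """Random-activator stage: the machine 'accidentally' triggers motor
    acts, perceives its own output, and accumulates associations between
    each perceived sound and the motor act that produced it."""
    rng = random.Random(seed)
    accumulator = {}  # perceived sound -> tally of motor acts
    for _ in range(trials):
        motor_act = rng.randrange(len(LEXICON))  # random activation
        perceived = LEXICON[motor_act]           # machine hears itself
        tally = accumulator.setdefault(perceived, [0] * len(LEXICON))
        tally[motor_act] += 1                    # strengthen association
    return accumulator

def utter(accumulator, desired_sound):
    """Deliberate utterance: choose the motor act most strongly
    associated with the desired sound."""
    tally = accumulator[desired_sound]
    best_act = max(range(len(tally)), key=tally.__getitem__)
    return LEXICON[best_act]
```

     After the babbling phase, a call such as utter(babble(), "SAX")
returns "SAX": having repeatedly heard its own accidental output, the
toy machine can deliberately produce a desired word, which is the essence
of the transition the journal envisions.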
     By learning to make its own sounds, the SAM machine would be able to
set up bouleumatic accumulators for the purpose of consciously making any
desired utterances in Sax.  By virtuality, the machine would be able to
think in the language Sax, and it would not matter that the machine was
not thinking in internal sounds.  It would think in perfectly liquid units
of code.  The human agent might be able to pronounce Sax aloud, if he
wanted to.

     So far we have given a description of how language might work in the
mental organism.  The language system as so envisioned is simple in that
it does not require actual acoustic equipment to deal with actual sounds,
although if we keep strictly to our analogies it should remain possible as
an option for future development to go into acoustic devices.  However,
the language system is only a part of the mental organism.  For the
language system to function at all, there must be experiential interaction
between the SAM machine and its environment, and meaningful information
content must find expression in the Sax language.  Here is where it will
be important to deploy the Nommultic concept of the basic ratiocinative
device.

     It may be valuable to construct two SAM machines, so that there can
be the two machines, the human agent, and the background environment.  If
the human agent taught the language Sax to the two machines, then perhaps
they could learn together in a machine society.

                                                                 23 AUG 1977

     It is obvious that the mature existence and functioning of a highly
intelligent mind does not depend on the variety or breadth of its
perception channels.  Certainly a human being can think and use language
even if he becomes blind, deaf, and so on.  Obviously, then, the conscious
intelligence must be operating on the basis of memory and of ratiocinative
processing equipment working along with memory.
     Wow, what if the ratiocinative processing were of a homogeneous
nature, so that the "depth" or "level" of intelligence would depend just
upon a kind of pyramidal quantity-question as to how much processing
equipment is available, either to function at once upon large aggregates
or to operate in a parallel fashion?

End of Part One of the Nolarbeit Theory Journal.