1956-1974
Is It Deja Vu All Over Again? Part IV
Summary
“Irrational Exuberance” Leads to a Loss of Funding and the Advent of the “First AI Winter”
The New Computing Machine and its Capabilities
We have now reached a point in our narrative at which the digital programmable computer has become a reality. Artificial Intelligence has been singled out (1956) as perhaps a logical next step in realizing the largely unexplored potential of this new computer.
The new (at that time) computing machine could carry out computations at high speed, but with necessarily limited accuracy, owing both to limits on storage capacity and to the fact that it is, after all, a “finite state” machine. It could just as easily carry out the basic operations of classical symbolic logic. The symbols 0 and 1 can be treated as numeric data, as logical data (false or true, for example), as switches in the off or on position, or as whatever else you may wish them to mean, provided that the meaning is consistent with the rules of Boolean Algebra.
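To make this dual interpretation concrete, here is a minimal sketch in Python (purely illustrative, and of course nothing the machine’s designers ever wrote): the same pattern of four bits can be read as an integer, or as a set of truth values manipulated under the rules of Boolean Algebra.

```python
# A hypothetical illustration: the same four bits read two different ways.
bits = [1, 0, 1, 1]

# Read as a binary integer: 1*8 + 0*4 + 1*2 + 1*1 = 11
as_number = sum(bit << position for position, bit in enumerate(reversed(bits)))

# Read as logical values (1 = true, 0 = false) under Boolean Algebra.
as_logic = [bool(bit) for bit in bits]
all_true = all(as_logic)        # logical AND across the bits -> False
any_true = any(as_logic)        # logical OR across the bits  -> True

print(as_number)                # 11
print(all_true, any_true)       # False True
```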
Recall now the basic guiding principle for the Dartmouth conference: “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.
We interpret the word “machine” to mean the new computer, if not some future version of it. The important issue, already, is the implication that the machine is going to simulate anything that we refer to as “intelligence”. The use of the word “intelligence” here was probably unfortunate.
In the mind of the author, a machine is a machine, is a machine. A computer, for example, is after all just another kind of machine. It has been endowed with enough capability to allow it to carry out complex numerical computations, for example. In fact, the machine itself has almost no capabilities in its basic design. It was specifically designed to carry out several simple types of operations electronically, using a 2-symbol Boolean Algebra. By properly concatenating (joining together) the two symbols into strings, it could store integers accurately up to a certain maximum allowable size (determined by the number of bits or bytes concatenated into a “machine word”). It could carry out the standard operations of arithmetic on those words (think “numbers”), including addition, subtraction, and multiplication, and to some extent division (to do this well requires an extension to numbers which are not integers). This latter feat is accomplished by adding “floating point” notation and arithmetic, thereby allowing the equivalent of decimal fractions to be used.
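A small sketch of what a fixed word size implies, again in Python purely for illustration (the 16-bit word size is an assumption chosen for the example): an n-bit word can hold only integers up to a fixed maximum, fixed-width arithmetic wraps around when that maximum is exceeded, and fractions require a separate floating-point representation that is itself only approximate.

```python
# Hypothetical sketch: limits implied by a fixed-size "machine word".
WORD_BITS = 16                        # assume a 16-bit word for illustration

max_unsigned = 2 ** WORD_BITS - 1     # largest unsigned integer: 65535
print(max_unsigned)

# Adding 1 to the largest value "wraps around" in fixed-width arithmetic.
overflowed = (max_unsigned + 1) % (2 ** WORD_BITS)
print(overflowed)                     # 0

# Fractions need floating point, which is an approximation, not exact:
print(0.1 + 0.2)                      # 0.30000000000000004, not exactly 0.3
```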
The machine can store words in assigned memory locations and can retrieve them on command. It can load them into “registers” which can then perform simple (Boolean) arithmetic operations on them and store the results back into assigned memory locations. The machine can compare the contents of two registers and logically determine whether or not they are the same. The machine communicates with us via some electronic connection that allows us to feed our data into specific memory locations within the machine, and lets us look at the contents of machine memory through some suitable electronic device such as a printer or other display mechanism. These are some of the basic, lowest-level operations which the computer can perform. There are only two symbols, 0 and 1, and the operational arithmetic is provided by Boolean Algebra. The first computers were realized largely as banks of switches (a form of memory), each of which could take the state 0 or 1 by being either off (0) or on (1), respectively. Groups (machine words) of these switches could be “read” to obtain the appropriate sequence of zeros and ones which represented numbers, and then the registers (in the Central Processing Unit, CPU) could operate on those numbers, using the rules of Boolean Algebra. The results would then be stored back into memory for future use, as appropriate.
The designers of the first machines realized that, by giving the machine a sequential list of these instructions to execute (a program), the basic operations could be expanded into long sequences of steps, with one basic operation carried out at each step (called a machine cycle). By carrying out long sequences of instructions, the machine could perform a complex sequence of computations, as might be needed to solve complex problems. Now we recall that the symbols don’t have to represent numbers, and so the same comments apply to the basic operations of symbolic logic, for example, as well.
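To make the last two paragraphs concrete, here is a deliberately tiny simulation in Python (a hypothetical sketch; the instruction names LOAD, ADD, STORE, and HALT are invented for illustration and do not correspond to any real machine’s instruction set): memory cells hold words, a single register holds intermediate results, and a “program” is simply a list of instructions executed one per machine cycle.

```python
# Hypothetical sketch of the fetch-and-execute idea described above.
memory = {0: 7, 1: 5, 2: 0}           # addressed storage locations
register = 0                          # a single accumulator register

program = [
    ("LOAD", 0),    # copy memory[0] into the register
    ("ADD", 1),     # add memory[1] to the register
    ("STORE", 2),   # store the register back into memory[2]
    ("HALT", None),
]

for opcode, address in program:       # one instruction per "machine cycle"
    if opcode == "LOAD":
        register = memory[address]
    elif opcode == "ADD":
        register = register + memory[address]
    elif opcode == "STORE":
        memory[address] = register
    elif opcode == "HALT":
        break

print(memory[2])                      # 12: the result of 7 + 5
```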
Not to belabor the point or to go into greater detail, we summarize by saying that it is pretty amazing how many wonderful things can actually be done by such a machine, and today we are familiar with many of them, which certainly go beyond numerical computations or symbolic logic. These include word processing, spreadsheet accounting, databases, and graphics of all kinds, including Computer Aided Design (CAD). Color can be stored as an added numerical part of a word (which might reference the color of the output when printed or displayed), and so on. In fact, I have just listed some of the earliest creations of computer programmers which go beyond just number crunching or logical operations. A world of useful applications has been realized on this machine, and programming has progressed upward to higher and higher level languages. The “level” refers to how far removed the language in which a program is written is from the raw machine operations. At the higher levels, these languages rely, to a great extent, on “calling” useful stored programming sequences or “modules” of the basic machine operations, to ease the burden and necessity of programming every single step of a complex program in machine language.
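As one small illustration of what “higher level” buys the programmer (a Python sketch, not one of the historical languages discussed here): a single call to a prewritten module replaces the explicit step-by-step sequence that would otherwise have to be spelled out.

```python
# Hypothetical sketch: the same computation written at two "levels".
values = [3, 1, 4, 1, 5, 9]

# Lower level: spell out every step of the accumulation yourself.
total = 0
for value in values:
    total = total + value

# Higher level: call a prewritten "module" that hides those steps.
print(total, sum(values))   # 23 23 -- identical results
```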
The Computer at the Time of the Dartmouth Conference and Hopes for the “Intelligent Machine” – Should It Be “AI” or “CAI”?
Machines had not progressed very far beyond these basic capabilities, and only the first programming languages existed, at the time of the Dartmouth conference. “Assembler” language came first, operating at the lowest level (essentially the machine or “binary” level). The first widely used higher-level language, called “Fortran” (FORmula TRANslation), grew out of work begun in 1954 by John Backus and colleagues at IBM; the first Fortran compiler was delivered to customers in 1957, the year after the Dartmouth conference. Fortran instructions (written in a notation more closely resembling English and algebra) were translated into machine-level instructions by a “compiler”, which is a programming-language translator that itself runs on the computer. The machine itself, of course, responds only to those lowest-level commands.
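Here is a rough sketch of the idea behind a compiler, again in Python purely for illustration (the miniature “source statement” and the instruction names are invented, and bear no relation to the real Fortran compiler): a statement written in something close to algebraic notation is translated into a sequence of primitive machine-style instructions like those in the earlier sketch.

```python
# Hypothetical sketch of compilation: translate "C = A + B" into
# low-level instructions like those in the earlier machine sketch.

def compile_addition(statement):
    """Translate a statement of the form 'C = A + B' into instructions."""
    target, expression = [part.strip() for part in statement.split("=")]
    left, right = [part.strip() for part in expression.split("+")]
    return [("LOAD", left), ("ADD", right), ("STORE", target)]

print(compile_addition("C = A + B"))
# [('LOAD', 'A'), ('ADD', 'B'), ('STORE', 'C')]
```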
In some sense, the 10 persons who participated in the Dartmouth conference must have had a vision of the possibilities of things to come. Perhaps they cannot be faulted for thinking that if the brain operated by manipulating sequences of logical operations, then, since the machine could do the same, it surely might do whatever the brain could do.
I’m sure there are many versions of what we might think intelligence means, but to the author, true intelligence really requires what essentially all human brains possess: a sense of self-awareness, and a conscious mind which is capable of “understanding” or evaluating its current dynamic environment (perception). Given this capability, the mind can make predictions of what might happen next, based on understanding and on related past experiences (memories). Once the mind understands the possible consequences revealed by those predictions, it can quickly respond to any situation by using that knowledge to make decisions. In many cases, these decisions determine the survival or well-being of the individual or the community. And needless to say, in some cases, the faster those decisions can be made, the better. Going beyond this level, the mind is capable of creating its own interests and plans of action, and even of imagining visions of things that it has never seen or encountered. Is the mind simply following some formal rules of logic? Probably not, and this has been a point of contention since the beginning.
The operations described above are what the mind/brain union tends to do pretty well, and this is what I would expect a truly “intelligent” machine to be able to do as well. In my opinion, no machine can even come close to doing all that I have described in my simplified view above. Further, I believe that it is not likely that a machine can ever do what I have described. To do so would require, in my opinion, a self-aware, conscious mind and brain carrying out the functions that these entities carry out in human beings. Even today, we do not understand much at all about how the brain works, or even how what it does relates to what we think we see in the visible structure of the brain. We will have much more to say about this later.
Further, this description only starts to address the full capabilities of the mind and its major resource, the brain. Unless a machine can exhibit the same capabilities as I perceive the mind/brain union can exhibit, I personally would have trouble ever associating the word “intelligence” with the machine. I know that I am not alone in this reasoning. However, I am much more comfortable with a phrase like “computer aided intelligence” (CAI), where the mind possesses the intelligence, but the computer can supply important information or data to the intelligent thinking process of the mind. I believe we have many examples today of useful CAI.
In any event, given that the Dartmouth group may have had an overly simplistic view of the capabilities and operation of the mind/brain union, I believe that they actually expected to produce a competitive intelligent machine which would be essentially human in its own skill set.
The Dartmouth Ten – in Their Own Words
To carry this thought further, here are some of the words that were written or spoken by some members of the Dartmouth group. These words are taken by the author to represent what they actually believed they were going to achieve (names have been omitted out of respect for those who made these predictions):
(1965) “machines will be capable, within twenty years, of doing any work a man can do.”
(1967) “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
(1970) “In from three to eight years we will have a machine with the general intelligence of an average human being.”
AI – Strong and Weak – Skepticism and Attacks – The First “AI Winter”
It is important to note that, in later terminology, the above comments would correspond to what has been called a “strong” view of AI. There must be a weaker view, then, but at the time of these pronouncements, those responsible for them were attacked on the basis of this “strong” view of AI.
At this time (prior to the cutback in funding in 1974), many were becoming skeptical of the claims and dreams of the artificial intelligence movement, noting that few really meaningful advances had been made. To summarize a few of the damaging pronouncements leading to the first AI winter:
In 1969, Minsky and Papert (MIT) published the book “Perceptrons”. The book showed that a perceptron (think neural net) with no hidden layers cannot realize even so simple a logical operation as the exclusive-or (XOR) of two inputs. This publication pretty well killed all neural network research for about a decade afterward.
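A brief demonstration of the underlying point, as a brute-force Python sketch (illustrative only; this is an exhaustive search over a coarse grid of weights, not Minsky and Papert’s mathematical argument): no single-layer perceptron, meaning no choice of two input weights and a threshold, reproduces XOR on all four input pairs.

```python
# Brute-force sketch: search for weights (w1, w2) and a bias b such that a
# single-layer perceptron step(w1*x1 + w2*x2 + b) matches XOR everywhere.
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(x1, x2, w1, w2, b):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

candidates = [i / 2 for i in range(-8, 9)]   # -4.0 to 4.0 in steps of 0.5
solutions = [
    (w1, w2, b)
    for w1 in candidates
    for w2 in candidates
    for b in candidates
    if all(perceptron(x1, x2, w1, w2, b) == y
           for (x1, x2), y in xor_table.items())
]

print(solutions)   # [] -- no single-layer perceptron reproduces XOR
```

A perceptron with a hidden layer can represent XOR; practical methods for training such multi-layer networks only became widespread much later, which is part of why research stalled in the meantime.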
In 1972, Hubert Dreyfus published a book entitled “What Computers Can’t Do: A Critique of Artificial Reason”. In it, he emphasized the differences between what the mind/brain duality is capable of and what a machine is capable of. He took clear issue with the idea that all intelligence can be described by logical rules (thereby taking issue with the corresponding premise enunciated by the Dartmouth group as the basis for their efforts). Although he was roundly attacked for these views, those attacks were not necessarily the last word on his critique.
As attacks on AI were being made in the US, there were similar episodes in the UK. In a report commissioned by the British Science Research Council, James Lighthill set out his negative assessment of the field in “Artificial Intelligence: A General Survey” (written in 1972 and published in 1973).
The Japan “Fifth Generation Project”
It is interesting to mention that, more recently, in 1982, Japan launched its extremely ambitious “Fifth Generation Project”, whose ultimate goal was to produce a machine that could, in just about every way imaginable, duplicate the intelligence and actions of a human being (strong AI). Here is a quote from the Feigenbaum and McCorduck book, written after the authors visited Japan to learn more about the project: “Their goal is to develop computers for the 1990s and beyond – “intelligent” computers that will be able to converse with humans in natural language and understand speech and pictures. These will be computers that can learn, associate, make inferences, make decisions, and otherwise behave in ways we have always considered the exclusive province of human reason”.
The intent was to produce a new “fifth generation” of non-von Neumann-architecture supercomputers, using a new kind of inferential operating system and a Prolog-related (PROgrammation en LOGique) language, to produce a truly “thinking” machine. The objective was that this would lead to Japanese dominance of AI, with important economic consequences. The project was quietly ended in 1992, after the expenditure of an extremely large amount of money with little, if anything, to show for it.
What’s Next?
It seems that there is so much that is important to write as we pursue our quest to understand AI, or, as I have referred to it, “the monster that ate human beings”. It cannot all be said in one sitting, and so we continue this saga in our next exciting installment, to appear shortly, as AI continues to evolve in our collective history…
Please stay tuned as we look further and deeper into the questions of what AI has evolved into at the present time, how it does what it is reported to do, and so much more as well. And then there is the question of the mind/brain duality and its possible relation to the machine, which we have not yet touched upon. Will the monster really eat human beings? Stay tuned!
I hope you won’t miss a single episode as we continue to delve more deeply into the mystery of “the monster that ate human beings”…