Is It Deja Vu All Over Again? Part III
The Advent of Artificial Intelligence – Phase 1: 1956–1974
SUMMARY
Some prior history leading up to the advent of Artificial Intelligence and Cognitive Science in 1956. The Dartmouth Conference and the founding principle of Artificial Intelligence. Subsequent work in the 1960s leading up to the first withdrawal of funding and the start of the first “AI winter”.
More Early History of AI – the 1940s and 1950s
We have now reached the point in our story where researchers began to take the possibility of an intelligent machine seriously enough to investigate the issue in its own right.
As early as the late 1940s and early 1950s, Edmund Berkeley had designed “Squee”, a robotic squirrel, and had written a book entitled “Giant Brains, or Machines That Think”. What followed seems to have been inevitable.
In the 1950s, we began to see important advancements in computing power and the introduction of machines that made this power available to a larger audience. In 1951, the first Univac machine was delivered to the US Census Bureau. On election night, November 4, 1952, although public opinion polls had suggested that Adlai Stevenson would win the US presidency, an examination of early returns, carried out on a Univac computer, led to the prediction that Eisenhower would be elected instead. Univac executives feared that the prediction would prove wrong, and so held back the announcement by news correspondents until late on election night, when it began to appear that the computer’s prediction was, in fact, correct. A year later, work began in the UK to build the first prototype of a computer based on transistors rather than vacuum tubes, a major advancement in the early evolution of the computer.
In 1950, four years before his untimely death, Turing published the paper “Computing Machinery and Intelligence”, addressing what a universal computer might become capable of. In it, he introduced what has come to be known as the “Turing Test” for machine intelligence. The test consists of a “blind” interrogation: a human poses questions to two hidden respondents, one a machine and one a human being, each of whom answers. The machine is said to have passed the test if, at the end of the interrogation, the questioner cannot reliably determine which answers came from the machine.
In 1955, exploiting the ability of the computer to carry out the operations of symbolic logic, Newell, Simon, and Shaw (RAND Corporation) wrote a program named Logic Theorist, which proved 38 theorems from the book “Principia Mathematica”, authored by Whitehead and Russell. In the same period, Arthur Samuel at IBM was developing a program that allowed the computer to be the opponent in a game of checkers.
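Logic Theorist’s actual heuristic search through the axioms of “Principia Mathematica” was far more elaborate, but the core idea, mechanically deriving new formulas from known ones by applying an inference rule, can be suggested with a minimal sketch. In the Python fragment below, the facts, implications, and goal are invented for illustration; it simply applies modus ponens until the goal is derived or nothing new appears.

```python
# Minimal sketch of mechanical derivation, in the spirit of (but far
# simpler than) Logic Theorist. Facts and implications are illustrative.

def forward_chain(facts, implications, goal):
    """Repeatedly apply modus ponens: from P and P -> Q, derive Q."""
    derived = set(facts)
    changed = True
    while changed and goal not in derived:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # a newly derived "theorem"
                changed = True
    return goal in derived

# Illustrative only: from P, P -> Q, and Q -> R, we can derive R.
facts = {"P"}
implications = [("P", "Q"), ("Q", "R")]
print(forward_chain(facts, implications, "R"))  # True
```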
The Dartmouth Conference and the Advent of Artificial Intelligence
In 1956, two important meetings took place. The first, involving a group of young logicians and mathematicians, was held at Dartmouth College in New Hampshire. The stated objective of the meeting, sponsored by the Rockefeller Foundation, was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.
Though no formal plan for future work came from the meeting, it was generally agreed that groups at Stanford, MIT, and Carnegie Mellon would work toward the common goal of realizing its objective. John McCarthy, one of the organizers of the meeting, chose the phrase “artificial intelligence” (AI) to describe their collaboration. This meeting is generally regarded as the time and place of the founding of the new field of artificial intelligence, an effort that would explore the potential of the computer to duplicate or mimic the processes of the human brain.
The MIT Meeting on Information Theory – the Advent of Cognitive Science
A few months later, in September of 1956, a Symposium on Information Theory took place at MIT, involving an interdisciplinary group of scientists, including many from the Cybernetics movement. The purpose of the meeting was to consider progress in feedback control and communication, and a number of seminal papers were presented on issues surrounding the use of computers in multi-disciplinary cognitive studies. This second meeting has come to be recognized as a convenient starting reference point for the multi-disciplinary field of “cognitive science”, which would come to include the efforts in artificial intelligence as one of its branches.
Following these meetings, a number of developments helped AI find a foothold as a discipline. In 1958, John McCarthy, at MIT, invented the LISP programming language. Its purpose was to make it easier to program applications that attempt to model human thought. LISP and its successors are still in use in AI research today.
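LISP’s central convenience, that programs and data share the same nested-list structure, which makes symbolic manipulation natural, can be hinted at with a toy example. The following Python sketch (a rough illustration, not McCarthy’s implementation) evaluates small LISP-like expressions written as nested lists.

```python
# Toy evaluator for LISP-like expressions written as nested Python
# lists, hinting at the symbolic style LISP made convenient.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate expressions like ["+", 1, ["*", 2, 3]]."""
    if isinstance(expr, (int, float)):
        return expr  # a number evaluates to itself
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

print(evaluate(["+", 1, ["*", 2, 3]]))  # 7, like (+ 1 (* 2 3)) in LISP
```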
The 1960s and 1970s
AI development took several different directions in the 1960s. In 1965, at Stanford, work began on the first expert system, a program intended to encapsulate the knowledge and procedures of experts in a specific field. The resulting system, named “DENDRAL” by Feigenbaum, Buchanan, and Lederberg (1971), was designed to encode knowledge from chemistry and physics. It inferred the organic chemical composition of materials from their mass spectrometry data, and was reported to compare favorably with the corresponding analysis by expert chemists. In 1974, another expert system, “MYCIN”, made its debut, also at Stanford; it was designed to aid in the medical diagnosis of infectious diseases.
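Systems like DENDRAL and MYCIN encoded expert knowledge as collections of if-then rules consulted by an inference engine. As a minimal sketch of that rule-based style, the Python fragment below fires rules whose conditions are all satisfied until no new conclusions appear; the rules and findings are invented for illustration and bear no relation to MYCIN’s actual knowledge base.

```python
# Minimal sketch of an if-then rule system in the style of early
# expert systems. Rules and findings are invented for illustration.

RULES = [
    # (set of required findings, conclusion)
    ({"gram_negative", "rod_shaped"}, "suspect_organism_x"),
    ({"suspect_organism_x", "urinary_site"}, "recommend_culture"),
]

def infer(findings):
    """Fire every rule whose conditions are met, until stable."""
    known = set(findings)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                fired = True
    return known

print(infer({"gram_negative", "rod_shaped", "urinary_site"}))
```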
In 1966, Joseph Weizenbaum (MIT) experimented with a computer program named “ELIZA”, which could fool users into believing they were carrying on a conversation with a real person. ELIZA replied, in print, to comments or questions typed by the user, and was modeled along the lines a psychotherapist might follow in a therapy session with a patient. It used “canned responses” triggered by keywords recognized in the user’s input; for example (for purposes of illustration only), ELIZA might answer: “You say you are unhappy; can you tell me more about that?”.
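The mechanism behind such exchanges can be suggested in a few lines: scan the input for a keyword and emit the associated canned reply. This is a deliberately crude sketch; Weizenbaum’s actual program used ranked keywords and transformation rules applied to the user’s sentence, and the keywords and replies below are illustrative only.

```python
# Crude sketch of ELIZA-style keyword matching with canned replies.
# Keywords and replies are illustrative, not Weizenbaum's actual script.

RESPONSES = [
    ("unhappy", "You say you are unhappy; can you tell me more about that?"),
    ("mother", "Tell me more about your family."),
]
DEFAULT = "Please go on."

def reply(user_input):
    text = user_input.lower()
    for keyword, canned in RESPONSES:
        if keyword in text:
            return canned
    return DEFAULT

print(reply("I have been feeling unhappy lately."))
```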
There was also interest in directly modeling the action of networks of neurons in the brains and nervous systems of animate beings. One of the first efforts in this direction was Frank Rosenblatt’s “Perceptron”. This device used an array of photocells connected to a primitive neural-network model, realized on specially built hardware, for image recognition. It was hoped that it would provide a form of machine learning that would generalize in many ways; coming so early in the evolution of AI, however, the effort was surely premature.
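Rosenblatt’s Perceptron was custom hardware, but the learning rule it embodied is simple enough to sketch in software: nudge the weights toward correcting each misclassified example. The Python fragment below is a caricature under that assumption, trained on a made-up, linearly separable toy problem (logical OR) rather than photocell images.

```python
# Minimal perceptron learning rule on a toy two-input problem
# (software sketch; Rosenblatt's Perceptron was custom hardware).

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred        # nonzero only when wrong
            w[0] += lr * err * x1      # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy linearly separable data: logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train(data))
```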
There were also developments in music production and computer animation at Bell Labs during this era. Robotics received attention as well: in 1969, the “Stanford Arm” became the first commercially successful result of this work, leading to the “PUMA” line of industrial robots, adopted on automobile assembly lines and familiar to us today. In 1974, the “Silver Arm” was invented to do small-parts assembly, using touch- and pressure-sensitive sensors under computer control to duplicate the actions of human fingers.
These and many other developments during the 1960s and early 1970s seemed, or were at least hoped, to herald the dawn of a new age of machine expertise, in which computers would duplicate and, in some cases, even replace human skills at the same tasks. It is not surprising that the level of interest in the potential of these new developments was very high. As a result, research funding for some of this work was made available from government sources, including DARPA (the Defense Advanced Research Projects Agency) in the US and the British government in the UK.
The First “AI Winter”
Sadly, however, the excitement, anticipation, and speculation of AI researchers concerning their work turned out to be excessively optimistic. Expectations and predictions of what AI would deliver greatly exceeded the actual results. Skepticism and open attacks on the work began to appear, and in 1974 funding started to be withdrawn from the fledgling AI research efforts. This withdrawal of funding has been called the onset of the first “AI winter”.
The first of what would become multiple phases in the evolution of the AI movement has now been launched in our narrative, and the movement has suffered its first major setback with the onset of the first AI winter.
What’s Next?
In our next post, we will visit, in more detail, some of the events leading up to the first AI winter, before continuing our assessment of this particular branch of what has now come to be known as Cognitive Science!