Is It Deja Vu All Over Again? (Part VIII)
What’s It All About, Alfie? (Part III)
SUMMARY
Much of current AI research rests on a poorly understood type of computing network, designed by humans to superficially resemble the neurons found in the brain, even though we do not know how real neurons actually function, nor how they do what they do when operating as a connected system in the brain. The resulting “artificial neural networks”, implemented on a computer, operate as black boxes that accept numerical inputs and produce numerical outputs according to a simple human-conceived architecture that is supposed to operate the way the brain does (even though we know very little about how the brain operates). Can you trust a black box?
The idea that an artificial neural network can do what the real neural networks in the brain can do leads to a tempting conclusion. If it is true, then we can simply let the artificial brain (the computer running artificial neural networks) think for us and tell us what to do or what to believe. Can we really afford to place our trust in the black box? What could possibly go wrong? This is not science; it is more properly called seduction…
The Neural Network Black Box
In our first Dialogue we point out that much of current AI research seems to be focused on a single type of technology known as a neural network, or neural net. This is not to say that other kinds of structures and networks are not being considered (they are), but this type of network is very much a “black box” in the traditional sense: it accepts inputs and produces outputs in response. We also point out that there is no strong scientific foundation to guide our use and understanding of this simple kind of network in applications, even though we hear stories of its being applied to many different situations.
In our first Dialogue we also exhibit the essential structure of a basic neural network and point out that it was conceived from observation of the neuronal networks found in the brain, even though we do not actually understand how those networks work, individually or together, to do what they do. Nonetheless, a significant amount of attention and effort has been directed toward using this simple computing architecture for many things that might be called artificial intelligence, and that effort was the original motivation for what we refer to today (in a loose sense) as a neural network.
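For readers who would like a concrete picture, the following is a minimal sketch in Python of the kind of structure being described: layers of weighted sums passed through simple nonlinearities. The weights here are random placeholders of our own choosing, not a trained or meaningful network.

```python
import numpy as np

# A single "layer": a weighted sum of the inputs plus a bias, passed
# through a sigmoid nonlinearity -- the loose analogy to a neuron.
def layer(x, W, b):
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # numerical inputs
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # placeholder parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

hidden = layer(x, W1, b1)                      # hidden layer
output = layer(hidden, W2, b2)                 # numerical output
print(output)
```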
Why Should We Believe It? – Beware of Temptation
There are several basic questions we might ask. First, why are we directing so much effort toward making these networks do something “intelligent” in appearance? Second, given what is and is not known about them, why would we have any reason to believe that they might do something that, perhaps magically, resembles what the brain can do? This question is especially important since we do not fully understand how the brain does what it does. Can you trust a black box?
Let us examine the issue in greater detail. There are two major things that we do know about neural networks. The first is that, if the architecture and parameters are correctly determined, they can be used to approximate a class of curves that are commonly of interest, although many other well-understood techniques already exist for doing this. In statistics, for example, this process of fitting curves is referred to as “regression”, and many books mention the possible use of neural networks for regression.
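As a rough illustration of this first point, here is a minimal curve-fitting (regression) sketch, assuming the scikit-learn library is available; the sine curve, the network size, and the library itself are illustrative choices of ours, not anything prescribed above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)   # noisy samples of a curve

# A small neural network used as a black-box curve fitter ("regression").
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)                                         # "train" the parameters
print(net.predict([[np.pi / 2]]))                     # should be close to 1.0
```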
The second thing that is known about the basic neural network architecture, as discussed in our first Dialogue, is that there is a type of algorithm that, at least for reasonably well-behaved error functions, can be used to automatically update, or “train”, the parameter set. The parameters are adjusted so that the network shapes its input data in such a way that the output approximates, as well as may be possible, a desired output.
A simple and well-known example (using the familiar surveillance theme) might be to answer the question of whether a certain person, for whom photographs are available, is actually present in a crowd seen in other photographs that are used as input to the black box. The required output might be as simple as “yes” or “no”, or as complicated as identifying that person in the crowd photographs by marking the presence wherever it occurs. A popular process (or algorithm) for automatically updating the parameters of a neural network to approximate a desired output is referred to in the AI literature as “back-propagation”. This technique is not unique to AI, but it was rediscovered by AI researchers for use with the neural networks that are in widespread use today.
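To make the parameter-update idea concrete, here is a toy sketch of our own: gradient-based training of a single linear “neuron” with a squared-error measure. Back-propagation is essentially this same rule chained, layer by layer, through a full network.

```python
x, target = 2.0, 1.0               # one training input and its desired output
w, b, lr = 0.5, 0.0, 0.1           # initial parameters and learning rate

for step in range(50):
    y = w * x + b                  # forward pass: the network's output
    error = 0.5 * (y - target)**2  # the error measure we want to shrink
    dw = (y - target) * x          # gradient of the error with respect to w
    db = (y - target)              # gradient of the error with respect to b
    w -= lr * dw                   # update ("train") the parameters
    b -= lr * db

print(w, b, error)                 # the error is now close to zero
```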
The basic neural network architecture, together with the algorithm to adjust, update, or train (as you like) its parameters to achieve a desired result, requires very little programming today. Using suitable computing libraries, the whole thing can be implemented with a reasonably small amount of effort, often no more than a few lines of code, which makes these networks attractive for easy implementation.
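To illustrate just how few lines are involved, the following sketch trains a toy “yes”/“no” classifier end to end, again assuming scikit-learn; the random numbers here merely stand in for real inputs such as photographs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # stand-in input features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in "yes"/"no" labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)                               # the entire "training" step
print(clf.predict(X[:5]), clf.score(X, y))  # predictions and training accuracy
```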
The result is that we have a black box architecture that can easily be updated in an effort to transform input data into a desired output, to the extent that this is mathematically possible. That possibility is determined, for example, by our choice of error measure and by the complexity of the neural network architecture we have chosen (see our first Dialogue). However, if we do not understand what we are doing, what kind of result should we expect? Can you trust a black box?
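As one small illustration of how the chosen architecture limits what is mathematically possible, the toy example below (again ours, using scikit-learn) tries to learn an XOR-style rule with a one-hidden-unit network, which cannot represent it, and with a somewhat larger network, which usually can.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # XOR-style labels

for hidden in [(1,), (16,)]:
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=3000, random_state=0)
    clf.fit(X, y)
    # The one-unit network stays not far above chance; the larger one
    # usually fits the rule well.
    print(hidden, clf.score(X, y))
```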
Nevertheless, this idea is instantly appealing. The argument is often made that we cannot possibly find the time and resources necessary to solve many complex problems, such as identifying an individual in a large crowd of people, by the usual methods of problem solving. If we carry the analogy far enough, perhaps we could find a magical approach to doing pretty much anything we would like to do, in a computational sense, without having to go to the trouble of understanding the problem and then formulating and coding a satisfactory solution.
The first two of these steps,
- (1) understanding a problem well enough that
- (2) we can formulate a solution for it,
constitute the basic tenets of scientific inquiry and are often summed up by the single word “reductionism” (reducing a problem to its essence, which gives us the understanding needed to formulate a solution). Wouldn’t the world be a better place if we did not have to do all this work in order to solve every possible problem? We could just turn the whole thing over to our all-encompassing (perhaps omnipresent and omnipotent) black box and let it figure the whole thing out for us! But can you trust a black box?
What is the Motivation? What are the Dangers?
At this point, you can perhaps begin to understand some of the motivation behind the interest in neural networks. After all, if the brain can apparently do it, surely it must be possible, and all we have to do is set up something that we think looks like a brain, then turn it on and wait for the magic to happen! I can think of no better reason to use the word magic than this, and much work in artificial intelligence has apparently been based on this belief in magic (like the alchemist in his chamber, hoping that something he does will magically turn lead into gold!).
We should not have to point out the grave dangers lurking in this kind of thinking, but rest assured, the idea will appeal to almost anyone, and many will readily accept that the black box can really perform magic if they are given enough reason to believe it (and it might not even take much effort to convince the average person).
It is not hard to imagine, for example, the famous Oracle at Delphi rising from the ashes in the form of our new black box! Ask it any question, and it will provide an answer (and rest assured that it always will). In some sense, this whole concept renders things like rational thought, reductionism in science, and the very operation and use of our brains essentially obsolete. At this point, you would all be free to go and become artists, or whatever you want to be, as we were told not so long ago by a prominent member of Congress, after Congress had introduced its own latest version of the ultimate all-knowing, all-doing, and all-caring black box (known as ……)!
Without belaboring the point, we have just explained why there might be so much interest in turning the black box into an answer to just about anything you might need to know. Not only that, it is easily implemented, as we have pointed out, so no real effort is required once the whole thing is available and operating. Have you ever heard the phrase “easily develop your own AI”? The concept is out there, and you are being offered the tools to do just that. Simply follow the instructions, and your own version of magic can (or at least may) become reality! That is, if you can trust a black box!
This whole process is often referred to as seduction, and to the extent that you are buying it, you are being seduced! This is not to say that there cannot be some validity in what is being done, but how would you know? The danger is that you do not know, at least not in the present state of the art! After all, it is only a black box. The old Latin expression “caveat emptor”, buyer beware, is most appropriate here (you are at risk of buying a defective product!).
Is it Science?
First, let us bring our attention back to a question posed in an earlier post: “is it science?” (we already know from the first Dialogue that if it is science, then it is an empirical science). Now we might ask the reader: were you paying attention to what has just been discussed? Not only is this not science in the traditional sense (reductionism, etc.), but it offers the hope that such difficult matters might actually be addressed by a black box that will figure it all out for us! It is hard to imagine what could be better than that, but as we have just asked, how would you know? Can you trust a black box?
In an effort to further clarify what artificial intelligence is, we point out that some have referred to it as engineering, and that label seems more appropriate. Artificial intelligence research is not currently built on a proper scientific foundation of its own (such as understanding how the brain functions and then building computer architectures that can, or might, mimic that operation, if that is even possible). Instead, simple tools, ideas, and techniques that are relatively easy to implement are being borrowed en masse from other disciplines, including mathematics, control theory, signal processing, and computing, and applied to a simple basic computing architecture in the hope of producing a desired result. This is more properly referred to as engineering, and the term seems to fit artificial intelligence as it is practiced today.
Is it Credible? How Would You Know?
We will end this brief but, we hope, thought-provoking post with a commentary on another question we have asked: by what means can we judge or validate the results reported for the tool we are considering here (neural nets) in popular accounts of artificial intelligence achievements?
You are surely aware of the calls in the published literature to explain and understand how and why the tool produces the outputs we are asked to trust (or distrust, as may be appropriate). A recent article in Forbes magazine described the failures of a currently important AI project intended to aid physicians in the treatment and diagnosis of cancers, and it clarified some of the problems associated with using current AI black-box assistance in this important area of healthcare.
As explained in the Forbes article, the real problem was one of trust in the results. As reported, when the AI made predictions or recommendations with which physicians readily agreed, they were inclined to accept them, but those recommendations added little value. When they disagreed, physicians needed to know why the AI had produced an answer or recommendation they did not accept. In short, something critical was missing, and that missing component essentially negated the potential value of the effort and might even have caused harm to patients. You see, we really do want to know how and why the black box is doing what it is doing, and that, dear reader, is one of the most apparent flaws in the whole black box idea. To be of value it must be credible, correctly interpretable, and, we might say, convincing. Further, the results must be reproducible by anyone who might want to reproduce them.
Are you aware that there is, and for a long time has been, a Journal of Irreproducible Results? Here is another problem with the black box: unless two people can use the tool to reproduce the same kind of results, should we believe those results? Suppose they try the same experiment with different data: will the same kind of success be achieved in the output? Such questions carry great weight if we are to establish trust and credibility, and the answers are often lacking in what we are being asked to believe or trust in the world of artificial intelligence. Note that we have just provided a partial answer to another question posed earlier, namely, what is lacking? The simple answer is a reason to believe or trust the result! Can you trust a black box?
In recognition of the importance of these issues, DARPA (Defense Advanced Research Projects Agency), which funds and follows research in artificial intelligence, is now funding work on what has been referred to as “XAI” (explainable AI) in order to address the problem(s) referred to above and many related problems where trust in the results is of critical importance.
You may already be familiar with reports of an AI identifying a turtle as an assault rifle, of failures to duplicate reported results, and of other blatant failures of neural networks to achieve the required goal or output; we will not go further into them at this point. Certainly not everything has failed, but we must keep an open mind about what is being done and what is actually being achieved, and view it all in a proper context of understanding. Just as important is the possibly critical need to understand how and why the results were produced.
We will have much more to say about these matters in future Dialogues and posts, and we will not do much more than raise questions for thought here, but at the very least we must be aware of these issues and not blindly accept what, at least in some cases, may actually require a belief in magic!
Looking Ahead
Some important scholarly papers have been written (perhaps a rarity, to date, in discussions of artificial intelligence) that examine, on a rational basis, the process by which success is often measured when evaluating results obtained with the techniques people are using today in artificial intelligence. These techniques typically involve very large data sets, enormous amounts of computing power, and often massive neural networks. These observations also partially address another question raised earlier, namely, how are the results being reported today actually achieved? There is much more to be said about this as well, and we will pursue the question in more detail in later posts.
As always, thanks for joining us, and we hope that you will stay tuned as we delve more deeply into the wizard-behind-the-screen (of Oz fame), or more simply, in our own words, the monster that ate human beings!