from the No more toys! dept.
An examination of the reasons behind the seeming lack of progress in the artificial intelligence field over the last two decades, and what might lead the "father of AI" to declare the field brain-dead. Where has the big picture gone?
Why AI is “brain-dead”
Those of us lucky enough to work in the artificial intelligence field, as hobby or career, have probably all heard questions or comments such as the following:
“Why don’t we have robot maids yet?”
“How come nobody has built a HAL 9000 yet?”
“Why can’t I just talk to my computer, like on Star Trek?”
“Chat-bots aren’t any better now than Eliza was!”
Chances are that the readers of this article have also heard the quote from Dr. Marvin Minsky stating that AI is brain-dead. If not, read these first:
The author believes that Dr. Minsky is largely correct in his criticism that many of the research approaches being followed ignore the bigger picture. Further, though, an encyclopedic collection of “common sense” information is not exactly a HAL 9000 either. I think common sense is an indication of the operation of intelligence, not the cause of it. Common sense knowledge will of course benefit intelligent systems, but it is not a panacea; it is merely one of the many characteristics that we typically include in the concept of “intelligence”. But that touches upon what I consider to be of pivotal importance: the definition of “intelligence”, as it is the implicit objective of artificial intelligence research.
As you can see, the definition is rather ambiguous. You will get a slightly different answer from each person you ask, although many common terms recur, such as “understanding”, “awareness”, and other even more confusing and ambiguous words. The classic “Turing Test” is a formalized acceptance that we cannot scientifically define an intelligence test, because we cannot scientifically define intelligence. The solution Turing came to is that we can probably tell intelligence when we see it; therefore, if you can be fooled into believing that a machine is intelligent, then it probably is. Some would disagree with my interpretation of Turing’s test, but in essence it was this: if humans have X (intelligence, whatever that is), then a machine that can convince you it is human must have X also.
So what do “I” believe intelligence is? I think that it is a label we apply to any system that produces intelligent behavior. What is “intelligent behavior”? That is, of course, a very long and debated list of behaviors. The important thing is not what I would include in the list, but that it is a list of behaviors, and not a clear explication of a formalized process. And therein lies the trap that so many researchers fall into. They want to find some kind of simple formal principle or process, which can then be used to recursively describe general intelligence in all its glory. This desire is referred to as “physics envy” by Dr. Minsky, which I think is an extremely apt recognition of the underlying wish. If such a principle could be found by reduction and scientific principle, then its discoverer would be the Newton or Einstein of artificial intelligence.
However, I personally doubt that there is a single simple principle or law waiting to be discovered, which will unfold recursively into emergent intelligence. My personal perspective is that intelligence is a messy and chaotic affair, not a single linear process.
I think that there is a further factor involved in this wishful thinking: intelligence as I defined it is really a huge collection of intelligent behaviors, which leads many researchers to become overwhelmed and to hope that an “emergent intelligence” will spring like a phoenix from their work. This is often combined with what I term “more-thinking”: the hope that more speed, more connections, more rules, more memory, etc., will cause disappointing results to transform into success. This can be seen in followers of genetic algorithms (GA), artificial neural nets (ANN), Bayesian reasoning, case-based reasoning, etc. The typical cycle observed by those in this field over the last couple of decades is: first a great deal of excitement surrounding initial experiments, then grandiose projections of further success, subsequent lack of further progress, and eventual lapse into the belief that more resources will lead to success.
The fact is that a limited and insufficient design will not magically become sufficient if you throw greater resources at the problem. A weak or limited model, when scaled up, will simply produce weak and limited results in larger volume. Part of the thinking here seems to be: the human mind is not understood, so if you multiply an ANN or GA beyond the ability to predict its behavior, perhaps it will do whatever that thing is that we don’t understand! I have in fact heard the opinion that with GA the results are inevitable! Sadly, there seems to be a tremendous lack of understanding of the reality, matched by an inversely proportional, irrational excitement about the theoretical possibilities. Our current ANN models are not remotely representative of contemporary knowledge of neurobiology, and focus only on a limited aspect of neurons, which are themselves a minority of the cells in the human brain. Current GA work largely ignores the complexity required of the environment and fitness test for a meaningful application of the method (of which there has not been one to date). These constraints dictate the possible outcomes of any GA process, which merely shifts the difficult work from designing the processing algorithms to designing the environment and fitness algorithms. The natural evolutionary process occurs in our complete environment; similar results are not likely unless a similar environment is modeled. This is truly wishful thinking.
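The point that the fitness function dictates the outcome can be made concrete with a deliberately trivial toy GA (my own illustrative sketch, not any published system). The target string, alphabet, and all parameters below are arbitrary choices: whatever “evolves” is exactly what the hand-written fitness function rewards, and nothing more.

```python
import random

random.seed(0)

TARGET = "HELLO"  # the entire "environment" is this hand-picked goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    # The designer's fitness function is the answer in disguise:
    # it scores matches against the target, so the GA can only converge there.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Replace one random character with a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # mutation

print(population[0])
```

The “discovery” of HELLO is no discovery at all; the hard design work was simply moved into `fitness`, which is the shift of difficulty described above.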
My observation has been that many researchers recognize an important aspect of cognition, via introspection or reduction, and then try to explain all other cognitive characteristics as being a product of permutations of that one aspect. This typically leads to a rather myopic and exclusionary championing of their one “principle” of intelligence. I think it likely that there are a goodly number of “principles” (or “laws” if you will) that contribute to what we consider intelligence. Should researchers specialize in a specific area in the hopes of eventual integration into a comprehensive system of theory, then I would have no criticism to offer. Unfortunately that is not normally the case, and they instead stand on their theory hoping that others will someday come to see things their way. What should probably be a large cooperative effort has turned into individual labs focused upon being the first to find “the answer”.
I blame introspection for being a largely obfuscating factor in this matter. Our “experience” of thinking, or more accurately, how we think about thinking, often leads us to believe that it provides insight into the underlying processes. This is a dangerous assumption, because our “mental narrative” is most probably only a small portion of the processes that make up the human mind. There have been many experiments (such as split-brain studies) indicating that our sense of “I” is more a running narrative of justification for our actions than a linear, logical decision-making process. I would refer readers to my “Cognitive Model Bias” article for references in support of this assertion.
The belief that there is a primary algorithm, process, or principle which leads to intelligent behavior probably comes from our introspective experience. Our perceptual “I” is our own built-in, misleading simplifier of the actual process that leads to our behavior. There are absolutely no medical or neurological studies that indicate a unified, linear cognitive process operating in the human mind. There is certainly no indication that the brain operates on any single electrical or chemical process or architecture. The complexity and volume of neural connections is in fact only one aspect of the most complex system that we are aware of. Other biological systems are observed to be redundant and variegated, so why do we try to reduce the most advanced biological system of all to a simple network?
I strongly recommend that anyone interested in artificial intelligence read “Society of Mind” by Dr. Minsky. This book describes, in bits and pieces, a complex system of hierarchical agents or processes that all contribute to eventual intelligent behavior. I believe that the human mind operates in just such a chaotic and complex fashion. By “chaotic” I mean so complicated that it is not predictable, in the sense typically meant by “chaos theory”.
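A crude caricature of the agent idea (my own toy sketch, assuming invented agent names and bid values; it is not taken from the book) is a set of narrow processes, none of which is “the intelligence”, whose competition selects the behavior:

```python
# Toy caricature of an agent society: each agent is a narrow process that
# bids for control, and "behavior" is whichever bid wins. No single agent
# is intelligent on its own. (Illustrative assumptions throughout.)

def hunger_agent(state):
    return ("eat", state["hunger"])      # urgency grows with hunger

def fatigue_agent(state):
    return ("sleep", state["fatigue"])

def curiosity_agent(state):
    return ("explore", 0.3)              # a constant low-level drive

AGENTS = [hunger_agent, fatigue_agent, curiosity_agent]

def decide(state):
    """Arbitration: the action with the strongest bid wins control."""
    bids = [agent(state) for agent in AGENTS]
    action, _ = max(bids, key=lambda bid: bid[1])
    return action

print(decide({"hunger": 0.9, "fatigue": 0.2}))  # eat
print(decide({"hunger": 0.1, "fatigue": 0.2}))  # explore
```

Even this three-agent toy produces behavior that no single agent “decided”, which is the flavor of the architecture the book describes at far greater depth.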
Should my opinion prove true, it would certainly explain the less-than-satisfactory results from systems built upon a singular concept.
I also believe that “Behaviorism” contributes to a more limited, mono-modal viewpoint of intelligence. When observing others we can only be aware of the behavior they exhibit, not the underlying causes of that behavior. Psychology has evolved over time in recognition of the inevitable degree of error in projecting underlying mental function based solely upon observed behavior; its model of mental function has slowly become more fractured and less oriented toward a single mental process. Nonetheless, there are cases in which a lack of understanding of the underlying principles inhibits the ability to formulate accurate models of even top-level behaviors. Probabilistic models can be produced, sometimes with acceptable error margins, but that does not provide adequate understanding of the subject to manufacture a substitute, which is in many ways the objective of artificial intelligence research. Consider that the American Psychiatric Association has produced a research agenda for the DSM-V which suggests that the existing DSM-IV model of mental disorder and function does not represent the actual underlying cognitive structures of the human mind.
I think that an excellent analogy is provided by “Brownian Motion”.
Please see this applet and description: http://www.phys.virginia.edu/classes/109N/more_stuff/Applets/brownian/brownian.html
Prior to Einstein’s realization that there were complex interactions occurring beneath the observed level, the seemingly random behavior of particles in suspension was inexplicable. There were probabilistic models and many proposed theories for the observed behavior, including animation (life) of the particles, boundary instabilities, micro-currents, evaporative effects, and expulsion of micro-bubbles by the particles. All were very logical and cogent theories, none of which were correct, of course.
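The key insight was that each visible jiggle is the sum of enormous numbers of unobserved molecular impacts. A crude simulation (a toy sketch with arbitrary parameters, not Einstein’s actual derivation) reproduces the observed wandering from nothing but many tiny hidden kicks:

```python
import random

random.seed(42)

def brownian_step(n_impacts: int = 10_000, kick: float = 0.01) -> float:
    """One observable displacement: the sum of many tiny, unseen molecular kicks."""
    return sum(random.choice((-kick, kick)) for _ in range(n_impacts))

# Track one particle: each visible "jiggle" hides thousands of impacts.
position = 0.0
path = []
for _ in range(100):
    position += brownian_step()
    path.append(position)

print(f"final position: {path[-1]:.2f}")
```

No equation of the particle's surface motion appears anywhere in the sketch; the observed behavior falls out of the hidden layer beneath it, which is exactly the analogy drawn next.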
I believe that this provides an excellent metaphor for the state of AI research. Much as pre-Einstein physicists tried to formulate an “equation” to explain the movement of the particle, so also do many artificial intelligence researchers try to produce an algorithm that expresses the complex behavior that we deem “intelligence”. The hope that a “trick” or “shortcut” can be found to explain such complex behavior is very like the incorrect “Brownian Motion” explications prior to Einstein. The belief that a probabilistic model is comparable to general intelligence is frankly beyond my understanding. This is why I do not believe that the “Turing Test” is really a test for intelligence; it is merely a best approximation, adopted because we are unwilling to accept the breadth of the concept we want to test for. Besides, many young humans think that “Barney” is a real purple dinosaur, but that doesn’t make it so.
Artificial intelligence is hard because intelligence comprises hundreds (or thousands) of separate behavioral characteristics generated by the most complicated system in nature. We will not advance closer to the real goal of AI (human-level general intelligence) until we accept that there are no magical shortcuts. Instead of hoping that balancing robots suddenly become intelligent, or that “emergent intelligence” will spring from an ANN or GA or Bayesian network, we need to accept that each may provide a tool or modular solution to different aspects of general intelligence. Exclusionary philosophies have failed to produce anything but briefly exciting parlor tricks.
If study of the human mind provides any benefit to the development of artificial intelligence, then we should apply what we have learned. The human brain is a multi-modal variegated system. If human type intelligence is our objective, then wouldn’t it be logical to design a system of similar kind?
"Science is a willingness to accept facts even when they are opposed to wishes." -- B. F. Skinner