Why AI is "Brain-Dead"
Posted by on Tuesday September 23, @05:26PM
from the No more toys! dept.
An examination of the reasons behind the seeming lack of progress in the artificial intelligence field over the last two decades, and what might lead the "father of AI" to declare the field brain-dead. Where has the big picture gone?

Those of us lucky enough to work in the artificial intelligence field, as hobby or career, have probably all heard questions or comments such as the following:

 

“Why don’t we have robot maids yet?”

“How come nobody has built a HAL 9000 yet?”

“Why can’t I just talk to my computer, like on Star Trek?”

“Chat-bots aren’t any better now than Eliza was!”

 

Chances are that the readers of this article have also heard the quote from Dr. Marvin Minsky stating that AI is brain-dead.  If not, read these first:

 

http://www.wired.com/wired/archive/11.08/view.html?pg=3

 


 

http://www.wired.com/news/technology/0,1282,58714,00.html

 

The author believes that Dr. Minsky is largely correct in his criticism that many of the research approaches being followed ignore the bigger picture.  Further, though, an encyclopedic collection of “common sense” information is not exactly a HAL 9000 either.  I think common sense is an indication of the operation of intelligence, not the cause of it.  Common sense knowledge will provide a benefit to intelligent systems, of course, but it is not a panacea; it is merely one of the many characteristics that we typically include in the concept of “intelligence”.  But that touches upon what I consider to be of pivotal importance: the definition of “intelligence”, as it is the implicit objective of artificial intelligence research.

 

From Dictionary.com:

    • The capacity to acquire and apply knowledge.
    • The faculty of thought and reason.
    • Superior powers of mind. See Synonyms at mind

 

As you can see, the definition is rather ambiguous.  You will get a slightly different answer from each person that you ask for a definition, although many common terms recur, such as “understanding”, “awareness”, or other even more confusing and ambiguous terms.  The classic “Turing Test” is a formalized acceptance that we cannot scientifically define an intelligence test, because we cannot scientifically define intelligence.  The solution that Turing came to is that we can probably tell intelligence when we see it; therefore, if you can be fooled into believing that a machine is intelligent, then it probably is.  Some would disagree with my interpretation of Turing’s test, but in essence it was that if humans have X (intelligence, whatever that is), then a machine that can convince you it is human must have X also.

 

So what do “I” believe intelligence is?  I think that it is a label we apply to any system that produces intelligent behavior.  What is “intelligent behavior”?  That is of course a very long and debated list of behaviors.  The important thing is not what I would include in the list, but to note that it is a list of behaviors, and not a clear explication of a formalized process.  And therein lies the trap that so many researchers fall into.  They want to find some kind of simple formal principle or process, which can then be used to recursively describe general intelligence in all its glory.  This desire has been referred to as “physics envy”, which I think is an extremely apt recognition of the underlying wish.  If such a principle could be found by reduction and scientific principle, then one would be the Newton or Einstein of artificial intelligence.

 

However, I personally doubt that there is a single simple principle or law waiting to be discovered, which will unfold recursively into emergent intelligence.  My personal perspective is that intelligence is a messy and chaotic affair, not a single linear process.

 

I think that there is a further factor involved in this wishful thinking: intelligence, as I defined it, is really a huge collection of intelligent behaviors, leading many researchers to become overwhelmed and to turn to hoping that an “emergent intelligence” will spring like a phoenix from their work.  This is often combined with what I term “more-thinking”: the hope that more speed, more connections, more rules, more memory, etc., will cause disappointing results to transform into success.  This can be seen in followers of genetic algorithms (GA), artificial neural nets (ANN), Bayesian reasoning, case-based reasoning, etc.  The typical cycle observed by those in this field for the last couple of decades is: first, a great deal of excitement surrounding initial experiments; then grandiose projections of further success; subsequent lack of further progress; and an eventual lapse into the belief that more resources will lead to success.

 

The fact is that a limited and insufficient design will not magically become sufficient if you throw greater resources at the problem.  A weak or limited model will produce weak and limited results in larger volume when scaled up.  Part of the thinking here seems to be: the human mind is not understood, so if you multiply an ANN or GA beyond the ability to predict, perhaps it will do whatever that thing is that we don’t understand!  I have in fact heard the opinion that with GA the results are inevitable!  Sadly, it seems that there is a tremendous lack of understanding of the reality, and an inversely proportional irrational excitement about the theoretical possibilities.  Our current ANN models are not remotely representative of our contemporary knowledge of neurobiology, and focus only on a limited aspect of neurons, which are the minority of cells in the human brain.  Current GA work largely ignores the complexity required of the environment and fitness test for a meaningful application of the method (of which there has not been one to date).  These constraints dictate the possible outcomes of any GA process, which merely shifts the difficult work from designing the processing algorithms to designing the environment and fitness algorithms.  The natural evolutionary process occurs in our complete environment; similar results are not likely unless a similar environment is modeled.  This is truly wishful thinking.
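
To make the point about environments and fitness tests concrete, here is a minimal sketch of a standard genetic algorithm loop.  It is written in Python, and the bit-string genome and “count the ones” fitness function are toys invented purely for this illustration.  Nothing richer than what the fitness function and the representation encode can ever come out of the loop; the difficult design work has simply been relocated into those two choices.

import random

# Toy setup, invented for illustration only: candidate "genomes" are bit
# strings, and fitness simply counts the ones.  The loop below can only
# ever optimize whatever this function rewards.
GENOME_LENGTH = 20
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    """The 'environment': the sole arbiter of what counts as success."""
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(parent_a, parent_b):
    # Single-point crossover of two parent genomes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fitter half, as judged only by fitness().
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION_SIZE // 2]
        # Refill the population by recombining and mutating survivors.
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", best, "fitness:", fitness(best))

Swap in a different fitness function and you get a different “inevitable” result; the algorithm itself contributes nothing that the designer of the environment and fitness test did not already specify.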

 

My observation has been that many researchers recognize an important aspect of cognition, via introspection or reduction, and then try to explain all other cognitive characteristics as being a product of permutations of that one aspect.  This typically leads to a rather myopic and exclusionary championing of their one “principle” of intelligence.  I think it likely that there are a goodly number of “principles” (or “laws” if you will) that contribute to what we consider intelligence.  Should researchers specialize in a specific area in the hopes of eventual integration into a comprehensive system of theory, then I would have no criticism to offer.  Unfortunately that is not normally the case, and they instead stand on their theory hoping that others will someday come to see things their way.  What should probably be a large cooperative effort has turned into individual labs focused upon being the first to find “the answer”.

 

I blame introspection for being a largely obfuscating factor in this matter.  Our “experience” of thinking, or more accurately, how we think about thinking, often leads us to believe that it provides insight into the underlying processes.  This is a dangerous assumption, because our “mental narrative” is most probably only a small portion of the processes which make up the human mind.  There have been many experiments (such as split-brain studies) that indicate our sense of “I” is more of a running narrative of justification for our actions than a linear logical decision making process.  I would refer readers to my “Cognitive Model Bias” article for references in support of this assertion.

 

The belief that there is a primary algorithm, process, or principle which leads to intelligent behavior probably comes from our introspective experience.  Our perceptual “I” is our own built-in misleading simplifier of the actual process that leads to our behavior.  There are absolutely no medical or neurological studies that indicate a unified linear cognitive process operating in the human mind.  There is certainly no indication that the brain operates on any single electrical or chemical process or architecture.  The complexity and volume of neural connections are in fact only one aspect of the most complex system that we are aware of.  Other biological systems are observed to be redundant and variegated, so why do we try to reduce the most advanced biological system of all to a simple network?

 

I strongly recommend that anyone interested in artificial intelligence read “Society of Mind” by Dr. Minsky.  This book describes in bits and pieces a complex system of hierarchical agents or processes that all contribute to eventual intelligent behavior.  I believe that the human mind operates in just such a chaotic and complex fashion.  By “chaotic” I mean so complicated that it is not predictable, as typically meant by “chaos theory”.
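
As a rough, hypothetical illustration of that “society of agents” idea (the agent names, weights, and voting scheme below are invented for this sketch and are not taken from the book), consider many small, individually unintelligent processes whose combined activity produces the behavior an observer would call intelligent:

# Hypothetical sketch, in Python: each agent is trivial on its own, and the
# observable "behavior" is just whichever action the whole society of agents
# weights most heavily in the current situation.

class Agent:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def vote(self, situation):
        """Combine this agent's votes with those of all its sub-agents."""
        votes = self.local_vote(situation)
        for child in self.children:
            for action, weight in child.vote(situation).items():
                votes[action] = votes.get(action, 0.0) + weight
        return votes

    def local_vote(self, situation):
        return {}  # a bare coordinating agent contributes nothing itself

class HungerAgent(Agent):
    def local_vote(self, situation):
        return {"seek-food": situation.get("hunger", 0.0)}

class FatigueAgent(Agent):
    def local_vote(self, situation):
        return {"rest": situation.get("fatigue", 0.0)}

class CuriosityAgent(Agent):
    def local_vote(self, situation):
        return {"explore": situation.get("novelty", 0.0)}

def act(root, situation):
    votes = root.vote(situation)
    return max(votes, key=votes.get)  # the behavior an outside observer sees

mind = Agent("mind", children=[HungerAgent("hunger"),
                               FatigueAgent("fatigue"),
                               CuriosityAgent("curiosity")])
print(act(mind, {"hunger": 0.2, "fatigue": 0.1, "novelty": 0.7}))  # -> explore

No single agent here “is” the intelligence; scale the roster up to thousands of interacting, partly conflicting agents and the overall behavior quickly becomes as unpredictable as the paragraph above suggests.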

 

Should my opinion prove to be true, that would certainly explain the less than satisfactory results from systems which have been built upon a singular concept.

 

I also believe that “Behaviorism” contributes to a more limited, mono-modal viewpoint of intelligence.  When we observe others we can only be aware of the behavior they exhibit, and not the underlying causes of that behavior.  Psychology has evolved over time in recognition of the inevitable degree of error in projecting underlying mental function based solely upon observed behavior.  The psychological model of mental function has slowly become more fractured and less oriented toward a single mental process.  Nonetheless, there are cases in which a lack of understanding of the underlying principles inhibits the ability to formulate accurate models of even top-level behaviors.  Probabilistic models can be produced, sometimes with acceptable error margins, but that does not provide adequate understanding of the subject to manufacture a substitute, which is in many ways the objective of artificial intelligence research.  Consider that the American Psychiatric Association has produced a research agenda for the DSM-V which suggests that the existing DSM-IV model of mental disorder and function does not represent the actual underlying cognitive structures of the human mind.

 

http://www.appi.org/Cat2k/2292.html

 

I think that an excellent analogy is provided by “Brownian Motion”. 

 

Please see this applet and description: http://www.phys.virginia.edu/classes/109N/more_stuff/Applets/brownian/brownian.html

 

Prior to Einstein’s realization that there were complex interactions occurring beneath the observed level, the seemingly random behavior of particles in suspension was inexplicable.  There were probabilistic models and many proposed theories for the observed behavior.  These theories included animation (life) of the particles, boundary instabilities, micro-currents, evaporative effects, and expulsion of micro-bubbles by the particles.  All were very logical and cogent theories, none of which were correct, of course.
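
A minimal random-walk simulation (Python, invented here purely for illustration) makes the point: it is easy to reproduce the statistics of the observed motion without modeling any of the molecular collisions that actually cause it.

import math
import random

def brownian_walk(steps=1000, step_scale=1.0):
    """Crude stand-in for a suspended particle: a 2-D Gaussian random walk.

    This reproduces the diffusive statistics of Brownian motion while saying
    nothing at all about the underlying collisions responsible for it.
    """
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += random.gauss(0.0, step_scale)
        y += random.gauss(0.0, step_scale)
        path.append((x, y))
    return path

path = brownian_walk()
end_x, end_y = path[-1]
# The mean squared displacement grows roughly linearly with the number of
# steps, matching what is observed, yet the "physics" here is pure chance.
print("displacement after %d steps: %.2f" % (len(path) - 1, math.hypot(end_x, end_y)))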

 

I believe that this provides an excellent metaphor for the state of AI research.  Much as pre-Einstein physicists tried to formulate an “equation” to explain the movement of the particles, so too do many artificial intelligence researchers try to produce an algorithm that expresses the complex behavior that we deem “intelligence”.  The hope that a “trick” or “shortcut” can be found to explain such complex behavior is very much like the incorrect explications of “Brownian Motion” prior to Einstein.  The belief that a probabilistic model is comparable to general intelligence is frankly beyond my understanding.  This is why I do not believe that the “Turing Test” is really a test for intelligence, it is merely a best approximation because we are unwilling to accept the breadth of the concept we want to test for.  Besides, many young humans think that “Barney” is a real purple dinosaur, but that doesn’t make it so.

 

Artificial intelligence is hard because intelligence is composed of hundreds (or thousands) of separate behavioral characteristics generated by the most complicated system in nature.  We will not advance closer to the real goal of AI (human-level general intelligence) until we accept that there are no magical shortcuts.  Instead of hoping that balancing robots suddenly become intelligent, or that “emergent intelligence” will spring from an ANN or GA or Bayesian network, we need to accept that they each may provide a tool or modular solution to different aspects of general intelligence.  Exclusionary philosophies have failed to produce anything but briefly exciting parlor tricks.

 

If study of the human mind provides any benefit to the development of artificial intelligence, then we should apply what we have learned.  The human brain is a multi-modal, variegated system.  If human-type intelligence is our objective, then wouldn’t it be logical to design a system of similar kind?

 




 

Related Links
  • Articles on Artificial Intelligence
  • Also by Ted Warring
  • Contact


The Fine Print: The following comments are owned by whoever posted them.

What do the readers of CogNews want?
by on Wednesday September 24, @07:24AM
Not to detract from any of Ted's insight, but I couldn't decide how interested people would be in a post of this type. In an attempt to bring the readers of CogNews the content that keeps them happy, I've asked before and I'll ask again -- is this the kind of article, basically one man's opinion, that you guys want to read? If so, there's obviously going to be some discretion on the part of CogNews as to what constitutes opinions that are front-page worthy.
Re: Why AI is "Brain-Dead"
by on Thursday September 25, @10:44AM

"What should probably be a large cooperative effort has turned into individual labs focused upon being the first to find “the answer”."

I think a good portion of the AI community wants "the answer". (Including the strict philosophers, who I think should just sh.. or get off the pot.) However, there are fringes who don't want "the answer": people like Rodney Brooks, whose goal is to make a robot more "life-like" (not specifically intelligent). It seems to me that nearly all AI in games and robotics pursues this alternative, behavior-intensive motive.

I do agree with you that we get a lot better results when not looking for "the answer", as it contributes a lot more to us all than reinventing the entire thing.

"I is more of a running narrative of justification for our actions"

Wow! I was thinking the same thing, only not in the same words. I was thinking that "I" was the DIFFERENCE between inner and outer perceptions. For example, touching a stove... the nerves register the change and send it to the "I", who checks previous data, sees the change, and passes the information on to other parts.

“Barney” Not real?! Well... my world is crashing down now. Now I don't believe in nothin' man!

"Exclusionary philosophies have failed to produce anything but briefly exciting parlor tricks." Well said. Very well said.

Bit long of an article, but well worth the read. Thanks Ted!


Re: Why AI is "Brain-Dead"
by on Thursday September 25, @12:28PM

Ted: This is why I do not believe that the “Turing Test” is really a test for intelligence, it is merely a best approximation because we are unwilling to accept the breadth of the concept we want to test for.

That was excellent. Deserves to go in a book somewhere imo.

Ted: we need to accept that they each may provide a tool or modular solution to different aspects of general intelligence.

One thing I believe is that the tools are unimportant. It's what can be done with them. You can design a system without a moment's thought as to what tools will eventually be used. All that is needed is an appreciation in general of the tools available. To that end, I'd agree that ANNs, GAs, etc. don't necessarily mean AI; they're just part of the toolbox.

I don't think people pay enough attention to what is meant by common-sense in an AI context. It's just a side effect, an end product. I'm sure I've met people without common sense. I've even been lacking it sometimes. But if by common-sense here you mean advanced knowledge of the environment that we take for granted, being able to look around a room, identify three-dimensional objects in the environment, remember what they are, what you have used them for, etc., then all common-sense is, is goal-related.

For example, make a tribal hunter from the African plains close his eyes, then when he opens them he finds himself in a comms room... I don't think he is going to display any kind of common sense you or I would recognise. Why would he? He's never needed to before. All his common sense is only relevant where he grew up, where he lives. It's not needed here. And vice versa, I'd say. Also, when we are born we have no common-sense. It's something we gain over time, an accumulation of all we have needed to learn about the things in our environment.

If AI isn't being developed with common-sense today, I'd say it's because AI projects under development aren't trying to achieve the right kinds of goals that would lead to this side effect of common-sense.

Can common sense in one system be downloaded into another?

If following a computational metaphor (way too closely), then it would be a case of both systems having similar enough data structures, data storage, or data languages.

But how do we articulate this same concept NOT knowing how these systems work?

I don't think we should, and I think this is where people conceptualise this particular form of common-sense wrongly. You cannot expect a system displaying common-sense to just download it to another system, or do a file dump.

How long does it take us to teach something to someone? How long does it take to learn a subject? Years.

I would say that transferring common sense from one system to another would only be possible if both systems had similar goals.

And further, it could take years*

:)

Unless someone comes up with some new good ideas.

(*All a worst-case scenario, of course. Doing it properly depends on how complex and detailed this common-sense information is.)

Gregg

Also, another thought just occurred to me - perhaps it is not the transference of common-sense knowledge that takes years; rather, bringing a system up to scratch, into a sufficiently advanced mode, is what would take years. Then the required sophistication for transferring and translating common-sense info would be present.

It's a little abstract, I know, but in the absence of actual system workings I thought it okay.


Is the Turing Test "Brain Dead" Too?
by on Thursday September 25, @01:58PM
Just as an opening comment, you manage to subvert the Turing Test in one paragraph and in the very next mention that we should define intelligence as "a list of behaviors". Isn't the very foundation of the Turing Test -- to test by behavior? It seems like your only point of disagreement would be that Turing didn't formalize which behaviors to test, which as you point out, is still under heavy debate. Perhaps you can reconcile this apparent conflict of opinion of whether testing by behavior is the right way or wrong way to go.

 