Friday, November 16, 2007

Incomplete Minds aka politicians

My fellow blogger, The Recursion King, has made an interesting post over here, talking about an initial brainstorm for a model of AI.

I say initial, because it's nowhere near complete. In fact, what he proposes has been done before.

It's not the data access and storage that makes consciousness; it's what happens with that information that we do not yet understand. I'll use the example of a friend of mine, whom we'll call Ruby. She's an artist, a great one in fact. We were discussing how creativity works, and this is what she told me about artists: "We need to be able to picture the way shadows fall, how materials look under light, partial light, two lights. We need to be able to picture how an object looks from different angles, and how to draw that possibly very strange and unique shape. We need, in short, a 3d engine in our minds."
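The "how materials look under light" part of Ruby's mental 3D engine is the piece computers already handle well. As a minimal sketch of that kind of computation (the function and values are my own illustration, not anything from her description), here is basic Lambertian diffuse shading: a surface point's brightness is proportional to the cosine of the angle between its normal and the light direction.

```python
import math

def lambert_shade(normal, light_dir, albedo=0.8):
    """Diffuse brightness of a surface point: proportional to the cosine
    of the angle between the surface normal and the light direction."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    # Normalize both vectors before taking the dot product.
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    llen = math.sqrt(lx * lx + ly * ly + lz * lz)
    cos_theta = (nx * lx + ny * ly + nz * lz) / (nlen * llen)
    # A surface facing away from the light gets no direct illumination.
    return albedo * max(0.0, cos_theta)

print(lambert_shade((0, 0, 1), (0, 0, 1)))  # fully lit: 0.8
print(lambert_shade((0, 0, 1), (1, 0, 0)))  # edge-on: 0.0
```

"Two lights" is just this computed once per light and summed — the arithmetic is trivial; picturing the result without doing the arithmetic is the part we don't know how to build.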

Our brains are capable of some great, fantastic stuff, intuition being the best example. The true difficulty in AI comes from giving it the capability to expand its perceptions, and make intuitive guesses.

Data mining as it stands today is probably the most advanced method of analyzing disparate data and finding connections. But even that pales in comparison to what the human mind can deduce if properly trained. Semantic linking is a good start; however, as far as I can see, it's only 10% of the solution.
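To make "semantic linking" concrete, here is a toy sketch of what I mean: concepts joined by labeled relations, with connections found by chasing links. All the concepts and relations below are invented for illustration.

```python
from collections import defaultdict

# Toy semantic network: each concept maps to (relation, concept) links.
links = defaultdict(list)

def link(subject, relation, obj):
    links[subject].append((relation, obj))

link("fire", "produces", "smoke")
link("smoke", "indicates", "fire")
link("fire", "requires", "oxygen")
link("candle", "produces", "fire")

def related(concept, depth=2):
    """Collect every concept reachable from `concept` within `depth` hops."""
    found, frontier = set(), {concept}
    for _ in range(depth):
        frontier = {obj for c in frontier
                    for _, obj in links[c]} - found - {concept}
        found |= frontier
    return found

print(sorted(related("candle")))  # ['fire', 'oxygen', 'smoke']
```

Following links like this is mechanical association. It finds that a candle connects to smoke and oxygen, but nothing in it resembles the leap a trained mind makes from a connection to a conclusion — which is why I say it's a fraction of the solution.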

What true AI would be able to do is take small amounts of information, and infer more information from that, based on internal reasoning. Ever had a hunch? That is your subconscious working on information you may not be aware of, and synthesizing it together into a whole for your conscious mind to act upon.
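The closest mechanical analogue I know of to this "inferring more from less" is forward chaining over if-then rules: keep applying rules until no new facts emerge. A minimal sketch follows — the facts and rules are entirely made up for illustration, and real hunches are nothing like this tidy.

```python
# Each rule: (set of premises, conclusion). If all premises are known
# facts, the conclusion becomes a new known fact.
rules = [
    ({"muddy_shoes", "rain_last_night"}, "was_outside_at_night"),
    ({"was_outside_at_night", "no_alibi"}, "worth_questioning"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

known = {"muddy_shoes", "rain_last_night", "no_alibi"}
print(sorted(forward_chain(known, rules)))
```

Note the difference from a hunch: every rule here had to be written down in advance. The subconscious synthesis I'm describing invents the rules as it goes.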

Take for an example, an AI that could solve murders. It takes humans years and years of experience to solve some of the toughest cases, and even then, they can be foiled by proper planning.

An AI that could solve murders would need to be able to decipher body language, understand the difference between lies and reality, and infer missing pieces of information, like motives or methods. And even then, a well-executed murder may never be solved. However, with intuition, a human detective would have a hunch about which course to follow, giving them an advantage over the AI.

But I'm not being fair; solving murders is a hard problem, despite how easy Sherlock Holmes may make it appear. How about an easier one? Say, Starcraft? It's quite possible in this day and age to make an AI that can beat any human player most of the time, without needing semantically linked memories. In fact, an AI that was not constrained by game developers would always beat human players in FPS games (due to perfect reaction speed and perfect aiming).
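The "perfect aiming" advantage is worth spelling out: for a bot it is nothing but vector math, snapping instantly to the exact angle toward the target. A toy 2D sketch, with coordinates invented for illustration:

```python
import math

def perfect_aim(bot_pos, target_pos):
    """Exact yaw (in degrees) from the bot to the target -- no human
    reaction delay, no hand jitter, no overshoot."""
    dx = target_pos[0] - bot_pos[0]
    dy = target_pos[1] - bot_pos[1]
    return math.degrees(math.atan2(dy, dx))

print(perfect_aim((0, 0), (10, 10)))  # ~45.0 degrees
print(perfect_aim((0, 0), (-5, 0)))   # ~180.0 degrees
```

No semantics, no memories, no intuition required — which is exactly the point: some games reward raw computation, and there the machine wins without anything resembling a mind.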

My point, however, in all of this, is that semantically linked concepts and memories do not an AI make. The Recursion King has some great points, and I look forward to seeing what more he comes up with. But true AI is in the class of problems that some computer scientists think are Hard, perhaps even NP-complete. Semantics will not solve it for us, unfortunately.
