Last month, I had the privilege of giving a talk to the Science Cafe in Indianapolis. This article is a summary of my presentation and discussion. Before moving forward, I just want to give a shout-out to everyone interested in science, intelligent discussion, and Chicago-style pizza, and encourage you to consider attending the next Science Cafe, held on the 3rd Monday of each month. Learn more here: https://www.facebook.com/ServingScience
So, ON TO THE SHOW (ARTICLE)
Thinking About Thinking
When humanity first embraced the potential of computers and set out to create artificial intelligence, we were immediately faced with the reality that we don't really know much about our own intelligence. It is kind of hard to replicate something if you don't understand the original. Because of this, the pursuit of artificial intelligence has ironically yielded more progress in understanding the genuine article (the human brain) than in creating an artificial one. Here are five big things we have learned about the brain and how we think.
1. The Brain is a Pattern-Finding Machine
So computers have learned to beat us at Chess and Jeopardy, but they still can't beat us at the amazing feat of reading the handwritten address on an envelope. The human brain is the most powerful pattern-finding machine on the planet, if anything a bit too powerful. It is so passionate about finding patterns that we even find the faces of Jesus and Elvis in food. We have the ability to find correlations and coincidences everywhere, even when they aren't there.
Most research and development in artificial intelligence today is in the area of pattern recognition. Technologies such as Siri (Apple's voice recognition), Google search, and the OCR (optical character recognition) used in the popular Evernote are all great examples of AI pattern recognition at work. Creating the equations that find connections within chaotic information doesn't just teach us about AI; it teaches us about how we think and process information ourselves.
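To make the pattern-finding idea concrete, here is a toy sketch (my own illustration, not any particular product's algorithm): a plain-Python Pearson correlation, the simplest "equation that finds connections." Run it over enough unrelated data and some pairs will correlate by pure chance, which is the statistical cousin of seeing Elvis in a pancake.

```python
import random

def pearson(xs, ys):
    """Plain-Python Pearson correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)

# A genuinely linked pair: ys is just xs plus a little noise.
xs = [random.random() for _ in range(100)]
ys = [x + random.gauss(0, 0.1) for x in xs]
print(pearson(xs, ys))   # close to 1.0: a real pattern

# An unrelated series: correlation hovers near 0 here, but scan
# thousands of random series and a few will "correlate" by luck.
zs = [random.random() for _ in range(100)]
print(pearson(xs, zs))
```

The interesting lesson is in the second call: the math happily reports a number for any pair of series, related or not. Deciding which correlations are meaningful is the part both brains and machines struggle with.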
2. The Human Brain Uses Experience and Cultural Bias to Create Solutions from Ambiguity
If you are a Star Trek fan, there is a good chance you are familiar with the phrase "neural net," often used in reference to the remarkable android named "Data." Without getting technical, a "neural net" is any effort to imitate the ability of the human mind to approximate solutions. You see, while computers think in terms of "yes and no," the human mind is one giant "maybe." We have the ability to identify something as "more this than that," while computers, by default, need to be specific.
Learning to reproduce this ability has taught us a lot about how our minds think. For example, we have come to appreciate how much our brains use experience and context to sort out "fuzzy" information. The only reason you can recognize the captchas below is that you have experience with the English language and its usage in context. If your only language were Mandarin, the images below would look like mere doodles.
Many AI theorists have stated that both the extreme quantity of raw experience and its granular nature make it almost impossible for programs to perform this function as well as we do. It is like how your parents don't understand the "group language" you develop with your friends: they are human, but they don't have the same background and experience. Without some kind of learning and lived experience, computers won't have the same "data" we do.
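The "giant maybe" idea can be sketched with the smallest possible neural net: a single artificial neuron. The weights and "features" below are made up for illustration; the point is only that the output is a graded score between 0 and 1 ("more this than that") rather than a hard yes or no.

```python
import math

def sigmoid(z):
    """Squash any number into the (0, 1) range: a graded 'maybe'."""
    return 1 / (1 + math.exp(-z))

# Hypothetical weights for a single "neuron" that scores how
# letter-like a squiggle is, based on two invented features.
weights = [2.0, -2.0]
bias = 0.0

def score(features):
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z)

print(score([0.9, 0.1]))  # nearer 1: "more letter than doodle"
print(score([0.1, 0.9]))  # nearer 0: "more doodle than letter"
```

A real captcha reader stacks many layers of such neurons and learns the weights from data, but the core trick is the same: every answer is a shade of confidence, not a binary verdict.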
3. The Human Brain Thrives on Conflict
Most of the time we complain when we experience internal conflict. It stresses us out to be "of two minds" on something. Even culturally, we often value the appearance of single-mindedness and self-assuredness in others. However, the reality is that it is the ability to doubt and be in conflict which makes us so adaptable.
The largest breakthroughs in self-learning and motion adaptability in AI, largely in robots, have come from giving machines the ability to consider and then resolve conflicting information. To do this, AI programming left the world of "linear thinking" behind long ago. Instead, the most advanced AI today is a combination of multiple layers of "minds," each working to resolve different information.
For example, researchers from Freie Universität Berlin, the Bernstein Fokus Neuronal Basis of Learning, and the Bernstein Center Berlin have developed a robot based on the minds of bees.
The robot contains an internal "Id" brain, which uses color to make decisions by priority, and an external "Ego," which monitors success and writes new processes to adapt. These "two minds" produce a learning process. [I am adding the "Id" and "Ego" labels; the scientists used words like "mini brain."]
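The two-layer idea can be sketched in a few lines. This is my own toy version, not the researchers' actual system: an inner "Id" acts on fixed color priorities, while an outer "Ego" watches the outcomes and rewrites those priorities, and the conflict between the two is exactly what produces learning.

```python
class InnerBrain:
    """The 'Id': always picks the highest-priority color it can see."""
    def __init__(self):
        self.priority = {"blue": 3, "yellow": 2, "green": 1}

    def choose(self, visible_colors):
        return max(visible_colors, key=lambda c: self.priority[c])

class OuterBrain:
    """The 'Ego': monitors success and adjusts the inner priorities."""
    def __init__(self, inner):
        self.inner = inner

    def learn(self, color, rewarded):
        self.inner.priority[color] += 1 if rewarded else -1

# Suppose yellow flowers secretly pay off and blue ones don't.
inner = InnerBrain()
outer = OuterBrain(inner)
for _ in range(10):
    choice = inner.choose(["blue", "yellow", "green"])
    outer.learn(choice, rewarded=(choice == "yellow"))

print(inner.priority)  # yellow has climbed, blue has sunk
```

The inner brain alone would chase blue forever; the outer brain alone has nothing to correct. Only the disagreement between the two, one acting, one doubting, turns failure into adaptation.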
Since Freud introduced the idea of the Id, Ego, and Super-Ego, we have been seeking to understand why we have such conflicting internal motivations. As it turns out, we would not be able to survive without them. Having constant internal struggle is what allows us to change and adapt to circumstances. In fact, an argument could be made that intelligence is simply the ability to reconcile conflicting information into a favorable outcome.
4. The Human Brain Can "Feel" Math
So going back to the "neural net" concept: while computers need data and numbers to generate motion, the human brain "feels" information. Just think about shooting a basketball. A robot would either have the right calculations and make the shot every time, or it would miss every time. A human being, instead, can estimate the motion needed to shoot the ball and, with experience, grow an ever more favorable ratio of hits to misses.
It goes beyond the ability to process analog input (our muscles and senses); it is also the ability to execute complex calculations from those inputs. In other words, we take in "fuzzy" information and then produce "fuzzy" sums to act on. This ability is the greatest barrier to effective robots, and it is the second biggest area of pursuit in the world of artificial intelligence.
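The basketball intuition can be sketched as a simple feedback loop (toy numbers, not a real physics model): instead of solving the throw equations exactly, the shooter nudges a rough internal estimate after every noisy attempt, the way practice "finds the range."

```python
import random

random.seed(42)

TARGET = 7.3    # the (unknown to the shooter) force that sinks the shot
force = 5.0     # initial rough guess

for attempt in range(30):
    wobble = random.gauss(0, 0.2)   # noisy muscles and senses: fuzzy input
    result = force + wobble         # what the throw actually did
    error = TARGET - result         # "felt" short or long, not computed
    force += 0.3 * error            # nudge the estimate toward what felt right

print(abs(TARGET - force))          # small: practice found the range
```

No step here requires knowing the true target or the noise model; the estimate converges anyway. That robustness to fuzziness is what makes feedback-style "feel" so different from a robot's one-shot calculation.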
5. The Human Brain is Surprisingly Un-Formatted
In the TV series Fringe, the mad scientist Walter Bishop frequently states that he believes the human brain is "infinitely capable" starting from birth. This claim is not without merit. While many animals are born with a healthy dose of instincts and quickly develop a competent repertoire of motion, human babies are quite helpless. It is almost as if we are surprised to be human at all.
The brain has the ability to change in amazing ways. For example, the breakthroughs in neural interfaces for prosthetic limbs are NOT because we have learned to talk to the brain, but because we found out the brain can learn to talk to an electrical interface. As long as we can visually associate a mental process with a motion, our brain can grow the pathways needed to integrate into our natural impulses. Kinda cool, huh?
The challenge for AI, then, is to figure out how to find these "base patterns" of learning that can begin with very little data, like a baby, and work their way up. It could very well be that the human brain doesn't even require a human body to function. We could probably adapt to just about any form with an electrical interface and sensory input. The big key is that the more "pre-programming" in the mind, the less the mind can learn.
ADDED NOTE: In my presentation and initial write-up, I connected being "un-formatted" with being "blank." Another brain on this subject, Steven Pinker, has done a good job of showing these are not the same. The brain is "un-formatted" in that it is highly adaptable to whatever environment it finds itself in. However, we are born with inherent motives and compulsions which that adaptability is working to satisfy. Even better, those impulses are proving to be more altruistic than we first thought. I recommend Steven's book, "The Better Angels of Our Nature."
Artificial Intelligence Isn't Even Close Yet - The Greatest Mystery of Intelligence
Depending on who you talk to (namely, press vs. scientists), AI is either scary and imminent or crude and primitive. The truth is closer to the latter. We know so little about how our own minds work that it is impossible for us to reproduce them any time soon. The biggest challenge is #5: creating a brain that learns from nothing. Current AI programs come "pre-loaded" with knowledge and experience we already have.
How learning happens when there is no experience to draw from and no "points of reference" is the great mystery. So is the mystery of how human beings evolved to have less instinctual or "pre-formatted" knowledge than other species instead of more. The truth is that the AI we are creating today is a simple imitation of our brain's most basic functions. There is still a lot more to discover about our own brains, not to mention artificial ones.