Simple Analogy to Demystify AI

by Tom Koulopoulos

Still Trying To Demystify AI? Here’s A Simple Analogy That Even A 10-Year-Old Can Follow.

The buzz about AI is everywhere, but most of us still think of AI as a black box. It isn’t.


Look Ma…No Hands!

In the first column of this two-part series I talked about how leading-edge AI, such as DeepMind's AlphaGo Zero, can train itself without human intervention. In this second column I'll share a simple analogy that helps demystify AI and differentiates human intelligence from artificial intelligence.

Think back to when you first learned to ride a bike. Remember the feeling of being overwhelmed by everything you needed to keep in mind just to achieve the simple goal of staying upright and in control? I'd bet you'd have a hard time listing even a fraction of the rules that went into achieving that goal.

There were rules for how, when, and at what speed to pedal; what to do with your many body parts to maintain balance; how to watch your surroundings; the type of road you were on and its contours; whether it was wet or sandy; whether there were obstacles in your way; the likelihood of a car coming around the next bend; whether it was light or dark out, cloudy or sunny, calm or windy. All of this, and so much more, went into the simple act of staying upright long enough for the gyroscopic effect of your wheels to keep you from falling.

You're thinking that you weren't really aware of any of that, right? That's because your brain does a wonderful job of observing much more than you are consciously aware of. It knows that certain repetitive behaviors, in a specific context, are empirically correct: they achieve the goal of staying upright. You don't really keep track of every behavior and contextual input; you'd never be able to respond in time if you did. Ultimately, intelligence (real or artificial) is being able to behave in a way that achieves the right results without necessarily knowing all the details of why it works.

Years passed, and then something very frustrating happened. One day you had to take all of this knowledge and transfer it to your own child, niece, nephew, or grandchild. Did you give them the list of the hundreds, perhaps thousands, of individual rules that you'd learned to follow? Of course not! You couldn't even if you wanted to, because there were just too many and most of them you weren't even aware of. So how did they learn? The same way you did: by gathering hundreds of experiences and thousands of minute actions and inputs, all measured against the goal of staying upright.

AI Is All About The “How”

That's all AI is: a really good student that learns over time how behaviors and contextual inputs result in progress toward a goal. In your case, with the bike, you did that by falling down and scraping your knees over and over. In the case of AI, it's done by running millions upon millions of simulations.
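If you're curious what that trial-and-error loop looks like in practice, here is a minimal sketch in Python. Everything in it is invented for illustration: the "actions," the odds of staying upright, and the update rule are hypothetical, and real systems like AlphaGo Zero use far more sophisticated simulations and neural networks. The shape of the loop, though, is the same: try something, see whether it moved you toward the goal, and nudge future behavior accordingly.

```python
import random

# Toy "learning to balance" loop. Purely illustrative; not AlphaGo Zero's
# actual algorithm. Actions, odds, and rewards are made up.
actions = ["pedal_slow", "pedal_medium", "pedal_fast"]
value = {a: 0.0 for a in actions}   # learned estimate of how well each action works
counts = {a: 0 for a in actions}

def simulate(action):
    """Hypothetical simulator: returns 1 if the rider stays upright, else 0."""
    odds = {"pedal_slow": 0.3, "pedal_medium": 0.8, "pedal_fast": 0.5}
    return 1 if random.random() < odds[action] else 0

for trial in range(10_000):              # "millions of simulations," scaled down
    if random.random() < 0.1:            # occasionally try something new
        action = random.choice(actions)
    else:                                # otherwise do what has worked so far
        action = max(value, key=value.get)
    reward = simulate(action)            # fall down, or stay upright
    counts[action] += 1
    # Nudge the estimate toward the observed result (incremental average)
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the loop "knows" what works without knowing why it works
```

Notice that nothing in the sketch understands bicycles. It simply keeps doing whatever the feedback says works, which is the scraped-knees learning described above, just run many more times and much faster.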

While that can make AI very good at a particular task, a simulation of chess or Go has no context outside of the two-dimensional board and the finite number of legal moves that can be made (the king on a chessboard can't leave the board and hide in the refrigerator to escape checkmate). In other words, AI can only get good at what you tell it to get good at.

This is where so many conversations about AI get sensationalized. The complexity of how AI makes a decision doesn't make the decision conscious, sentient, or magic; it just makes it very difficult to understand, in the same way that you can't list all of the things that go into riding a bike.

It also doesn’t mean that AI can somehow start making decisions about things outside of what it was trained for. Using the bicycle analogy, just because you’ve mastered riding a bike doesn’t mean you can now drive a car or fly an airplane. In fact, you couldn’t even go from two wheels to a unicycle without having to learn an entirely new set of rules.

The same applies to AI. AlphaGo Zero uses narrow AI that is exceptional at Go, but it is not suddenly going to start making decisions about anything other than black and white stones on a Go board. Each area that AI is applied to is one that we choose to apply it to. AI is all about "how" to do something better, faster, and more accurately.

And this is where the paths of human intelligence and artificial intelligence come to a fork in the road. While AI can learn the "how" of just about anything better than a human, it does not have the curiosity to ask "why."

In the final analysis, if such a finality exists when talking about sentient, conscious beings, it may not be intelligence or even intuition that accounts for the unique value of the 100 billion neurons that make us human, but rather the simple act of curiosity.

My advice? The day a computer asks, unprompted, "Why should I play Go?" is the day to start worrying.


This article was originally published on Inc.

Tom Koulopoulos is the author of 10 books and founder of the Delphi Group, a 25-year-old Boston-based think tank and a past Inc. 500 company that focuses on innovation and the future of business. He tweets from @tkspeaks.
