Grandma should understand Artificial Intelligence. Here’s how I explained it to her.
Spoiler alert: she got it.
Let me start with a quote from Andrew Ng:
“Artificial Intelligence (AI) is the new electricity. Just as 100 years ago electricity transformed industry after industry, AI will now do the same.” — Andrew Ng.
The question is not whether he’s right; this is already happening all around us.
Here’s the real question: is everyone ready?
When a technology starts touching every aspect of our lives, it becomes important that people understand its basic principles. And I’m not referring just to tech enthusiasts; I mean everyone who’s going to be touched by it. In this case… every single one of us. Even grandma. With the democratization of technology should come the democratization of knowledge.
Take electricity. You don’t need to know Coulomb’s or Faraday’s laws to use it, but there are some main concepts you need to understand in order to use it productively and safely.
We all know that electricity can power different devices to do different things. Have an electric heater? Plug it into a socket, “electricity” will “flow” into it, and it will warm you. Want to light up your room at night? Plug in a lamp, and it will light up.
You also need to know what not to do with electricity. NEVER put something with “electricity flowing” through it into water. NEVER touch metallic objects with “electricity flowing” through them. Only a small percentage of the world’s population knows what the Joule effect is or what “conductivity” means, but everyone has to know that certain things must be avoided.
What about AI?
Is it possible to build an intuition around how AI works, its opportunities and threats without having a PhD in Computer Science?
From my experience with my lovely grandma and many other people outside the field, this is not only possible, but also very important and usually well appreciated.
I’m writing this blogpost to share the main ideas that I’ve used to explain what AI is and why understanding it matters.
An AI intuition:
Disclaimer: what I’m going to say is a simplification. It won’t get you a PhD, but hopefully it will give you the nuts and bolts to interpret AI and Machine Learning. And if you have a PhD, please chill.
I’d love to start with one of those cool quotes that give you the “Eureka!” moment immediately. Sorry, I don’t have one (yet) for AI. I’m going to ask you a question instead:
What kind of person do we define as “intelligent”?
Everyone has met people they would define as truly, amazingly intelligent. In my case, one of them was a high school friend. One morning he arrived at school and asked me what that day’s math test was about. I had been practising the topic (logarithms, FYI) for weeks: there was no way he was going to get a passing mark with just a one-minute briefing.
Long story short: I got 9.5/10 on that test, he got 10/10. The main difference? I studied, practised, and learnt all the rules, tricks and procedures to solve logarithms. He was plain intelligent, and achieved the same result without any of this, using his intuition to fill the gaps in his preparation.
In one sentence, he used intuition instead of well-defined procedures.
We can make computers act similarly by using Machine Learning (ML), a set of techniques that currently power basically every successful AI implementation.
If you have ever given instructions to a computer, even in a small Excel file or a formula on a calculator, you know how it works: you give the computer some extremely precise instructions about what you want it to do. When you pass it some input data, the computer simply, blindly, relentlessly executes your instructions and returns some output data.
Machine Learning is a completely different approach. It doesn’t start with the actions you want the computer to perform; it starts with the data. You give the computer some input and output data that you know are related, without knowing how, and “ask” the computer to figure out that relation for you.
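To make the contrast concrete, here’s a minimal sketch in Python (the library choice, scikit-learn, and the toy relation, doubling a number, are just illustrative). The only point is the difference between writing the rule yourself and letting the algorithm infer it from examples:

```python
# Traditional programming: I write the rule myself.
def double(x):
    return 2 * x

# Machine Learning: I only give input/output examples and let the algorithm
# figure out the relation (here, a simple linear regression from scikit-learn).
from sklearn.linear_model import LinearRegression

inputs = [[1], [2], [3], [4]]      # input data
outputs = [2, 4, 6, 8]             # related output data
model = LinearRegression().fit(inputs, outputs)

print(double(5))                   # 10, from my hand-written rule
print(model.predict([[5]]))        # ~10, from a relation the computer inferred
```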
But grandma has never programmed a computer. She sometimes plays chess online, but nothing more. To help her get why this is such a breakthrough, let’s think about how we learn to walk.
I don’t know about you, but I didn’t learn by having my mother (and grandma) tell me: “OK Gianluca, are you ready? Now: shift your weight onto your left foot, contract your right quadriceps, extend your right leg, move your weight forward, touch the ground with your right foot…”. You get the point. This approach would have been quite difficult, right? It requires:
- Extremely precise rules
- Covering all possible cases (how do you adjust for different shoes? For a slippery floor?)
- Exactly defined processes and entities (how do you contract a quadriceps? How do you evaluate balance?)
We learn in a different way: we see our parents walking, and try to imitate them. In the process, we’ll fall many times, but while iterating we’ll slowly start to improve, creating our own “walking process” without realising it.
In other words: we take some data as input (perception from our body, balance, sight of the world around us, etc.) and an expected outcome (our parents walking), and try to combine those inputs to match the output we want.
Grandma is starting to get it, but to her it doesn’t look like a big deal. Yet.
Let me tell you and grandma why this is a breakthrough:
There are some phenomena that would be incredibly hard to describe with a “traditional programming approach”, even more so than walking. Think of computer vision: how would you define rules to distinguish between a dog and a cat in a picture? A three-year-old kid can do it, but I’m sure that even the smartest person in the world can’t describe the “rules” they unconsciously apply to do so.
Machine Learning is the technique that unlocks this kind of task.
You can’t figure out the rules that relate the pixels of an image (the input) to its content (the output)? A Machine Learning algorithm can do that for you (specifically, Deep Learning algorithms are very effective in this case, but the core principle is the same one discussed above).
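To give a feel for what this looks like in practice, here’s a minimal sketch using scikit-learn’s built-in handwritten-digit images as a stand-in for “pictures” (the dataset and the small network are just illustrative choices). Notice that we never write a single rule about pixels; we only show the algorithm examples:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                     # small images plus their labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# A small neural network: it infers the pixel-to-content relation from examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen images:", model.score(X_test, y_test))
```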
With Artificial Intelligence, decisions are driven by data, not by previously defined processes.
This has positive and negative sides. We just talked about the positive side: we can give computers the hard work of figuring out relations between data, unlocking complex problems that would be impossible to solve otherwise.
Let’s talk about the negative side now:
The criteria that the algorithm “finds out” in order to make decisions are not under our control; they are inferred from the data. For this reason, you may:
- Get a biased decision if your data is biased.
- Not be able to explain why a certain decision was taken, and therefore lose control over it.
Let’s talk about the first one: we all know that Facebook uses an AI algorithm to pick content for our news feed. This means you won’t say “yo Mark, I’d like to see content from political parties X and Y”: the algorithm will analyse your interactions (input data: content, output: interaction) and infer your interests and preferences.
This is the reason why anti-Trump people were shocked by the first exit polls (I was in San Francisco at the time; I’ll never forget the atmosphere in the office those days). If you interact a lot (like, share, comment, or even just stop scrolling) with positive content about Hillary Clinton, the algorithm is going to assume that this is something you want to read, and populate your feed with similar content. You’ll then build the illusion that only a few people are talking about Trump in positive terms, and that he’s going to lose the election. The same thing works the other way around: Trump supporters who engage with pro-Trump content see a news feed mostly populated with similar content, and start thinking that he’s the only true contender for the White House, and that some over-the-line statements he made are actually acceptable; after all, the whole Facebook feed (a.k.a. the world) agrees, right?
The solution to this problem is to acknowledge its existence first.
Before scattering likes all over Facebook, think about how every like is going to influence what you’ll read on your feed from that moment on. Grandma doesn’t have Facebook, but if she did, I’d recommend she like things consciously.
Let’s go over the second problem: losing control and interpretability over the algorithm’s decisions. It may sound like a Skynet scenario, but in reality it’s less sci-fi, and in my opinion human extinction is not on the table (yet). It’s still dangerous, though.
Remember the main reason why we like Machine Learning: it allows us to let computers find out the relationships between data, without us telling them how. This also means that it may be hard to know what the algorithm considered when making a decision, and why it made that call.
Think about when an algorithm will decide if you should go to jail. Or if you’re eligible for a loan. Or if you should be hired.
Would you like to know why it decided “yes” or “no”? Or at least what it considered? Yes, you would.
Let’s take an example. If someone trains an ML algorithm to learn from data, and the data contains the color of people’s skin, it’s likely that it will affect the algorithm’s decisions. No one will ever write “if person’s skin = black, then do X; if person’s skin = white, then do Y”. The algorithm will, somehow, figure something like this out and apply it. And we probably won’t know what criterion it’s using.
Now, this is scary. You may think that a computer-based decision is free of bias; instead, it’s going to reflect the bias of the dataset. Make it learn to identify criminals from the rulings of a racist judge, and you’re going to have a racist AI. And if you’re not careful, you’ll never know it (remember, there’s no line of code that says “if person’s skin = black, add more risk”).
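Here’s a toy sketch of that scenario, on completely made-up data (the variable names and numbers are just for illustration): the “decisions” in the training data were influenced by skin color, and the model picks that up even though nobody wrote an explicit rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skin_color = rng.integers(0, 2, n)           # a sensitive attribute (0 or 1)
relevant_feature = rng.normal(size=n)        # something that should actually matter

# Biased historical decisions: the sensitive attribute influenced the outcome.
past_decisions = (relevant_feature + 2 * skin_color + rng.normal(size=n)) > 1

X = np.column_stack([skin_color, relevant_feature])
model = LogisticRegression().fit(X, past_decisions)

# The learned weight on skin_color is clearly non-zero: the bias was inherited,
# even though no one ever wrote "if skin = ... then ...".
print("weight on skin_color:", model.coef_[0][0])
```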
How do we solve this?
The European Union has already worked on regulation for algorithmic decision-making and a “right to explanation”. I’m not sure it’s possible to explain everything that an algorithm is doing, and yet some of those black boxes may actually be useful.
What I believe is important is that information about AI becomes more democratic. Companies should start talking about their algorithms and how they use our data to power the tools we use. I’m not talking about scientific papers — even though they’re important too — I’m talking about basic information that can be relevant for everyone, and most importantly with a language and a communication strategy that aims at bringing everyone on board.
On the other hand, people have to open their eyes. Ask questions, be curious, wonder why your Facebook feed seems so interesting, and how software makes decisions about your life. Demand information, demand explanations, demand clarity.
AI is a revolution that concerns all of us. And we all must be part of it.
Hopefully this blog post helped you understand a little more about what AI is and why it’s so powerful, and made you more aware of why we should be careful.
If you have questions or want to share some thoughts, I’d love to start a discussion. Also, share this post so that more people can take part in the revolution.