Making Friends with AI

Are you using the term ‘AI’ incorrectly?

Cassie Kozyrkov
5 min read · May 26, 2018

There, I said it: I don’t mind that industry uses ‘AI’ and ‘machine learning’ to mean the same thing. But is it technically correct? Is it sloppy? And if we pick some nits, who’s really using the term ‘AI’ incorrectly?

Photo by Houcine Ncib on Unsplash

Industry’s ‘AI’ would technically be called ‘deep learning.’

Academia doesn’t consider artificial intelligence (AI) and machine learning (ML) interchangeable, and as an ex-academic I sympathize with their definitions and agree that, technically, AI is a proper superset of ML, which is a proper superset of deep learning. (How’s that for proper?) Deep learning (DL) is ML that uses a particular class of algorithms (neural networks), and it’s what industry tends to mean when it says AI.

But I also think that most people (and industry) don’t particularly care about that distinction and will use language in a less formal way. Language evolves, whether we like it or not. Originally used by professorial types, the term AI has escaped their clutches and entered the common vernacular as something else.

Language evolves, whether we like it or not.

At the risk of offending researchers, I feel it’s most helpful to acknowledge the new way industry is using the term and explain the modal usage for readers who aren’t interested in nuance. It’s okay to let the English language evolve as long as we keep up. Coined in 1956, the term AI was never defined all that strictly. (Right, academics? Remember the days when AI was an embarrassing term to use on your grant applications… so you just replaced all instances of it with machine learning?) With poorly defined terms, there’s not really such a thing as using them correctly. We can all be winners. The words move around.

And watch out, definitions lawyers: wouldn’t it be embarrassing if what you’re calling AI is actually technically called reinforcement learning (RL) and you’re also misusing your words? Here, have a hug, we can all be friends. If your definition hinges on sequences of actions, planning, gathering information from an environment, figuring out policies for behaving in the future — a classic example is a computer learning to do stunt maneuvers with toy helicopters — then you might be thinking of RL.
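
If you’d like to see what that action–environment–policy loop looks like in code, here’s a minimal sketch: tabular Q-learning on a toy corridor of my own invention. The environment, the learning rate, and every number below are my illustrative assumptions, not anything from the formal RL literature or from any particular library.

```python
import random
from collections import defaultdict

class CorridorEnv:
    """Toy environment: walk right along a 5-cell corridor to reach the goal."""
    actions = ("left", "right")

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(0, self.pos + (1 if action == "right" else -1))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

env = CorridorEnv()
q = defaultdict(float)  # value estimate for each (state, action) pair

for _ in range(200):  # learn a policy over repeated episodes
    state, done = env.reset(), False
    for _ in range(100):  # step cap keeps the early, random episodes short
        # Gather information: explore sometimes, otherwise exploit current
        # estimates (random tie-breaking so an untrained agent doesn't stall).
        if random.random() < 0.1:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: (q[(state, a)], random.random()))
        next_state, reward, done = env.step(action)
        # Q-learning update: nudge toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in env.actions)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state
        if done:
            break

print("learned first move:", max(env.actions, key=lambda a: q[(0, a)]))
```

The point of the sketch is the loop itself: act, observe the environment’s response, update the policy toward better future behavior. That loop is the tell that you’re looking at RL rather than plain thing-labeling.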

If you’re looking for the robot entities of sci-fi, then you might like the term HLI: human-like intelligence.

If you’re drowning in all this alphabet soup of AI, ML, DL, RL, while looking desperately for the robot entities of sci-fi, then you might like the term HLI: human-like intelligence. If you’re going to be referring to ‘an AI’ in a way that evokes personhood, it’s probably best to call it ‘an HLI’ instead. Those of you who are worried that there’s HLI lurking in every cupboard can breathe easy; all those industry AI applications are not HLI and aren’t about building actual minds. Everyone’s too busy using AI to solve real business problems that involve some solid, unglamorous thing-labeling.

When I’m feeling especially cheeky, I find that this quote by Mat Velloso is my new favorite definition of the difference between AI and ML:

“If it is written in Python, it’s probably machine learning.

If it is written in PowerPoint, it’s probably AI.”

You’d be surprised how often someone’s “AI solution” is actually just this. Source: Andriy Burkov.

Let’s summarize. Close your ears, professors. Everyone else: when you hear them talked about in industry, AI and machine learning may as well be synonyms and they have little to do with HLI.

In practice, you don’t have to classify your problem as AI or ML before you begin.

Here’s another reason why a practical cat like me can live with that: from an applied process standpoint, you needn’t classify your business problem as AI/ML/DL before you begin. Just try all the algorithms you can and iterate towards the better performer. If non-DL ML is the wrong approach, you’ll find out quickly and course-correct. But it’s usually best to try the simpler option even if you doubt it’ll work. It only takes a few minutes. If you can fit a neural network to it, you can try putting a line through it too. (It’s what, 2–5 lines of code? Even if you’re implementing it from scratch instead of using a package, it’s easy. If you forgot the formula for regression, my friends from grad school made a catchy song about it for you so it sticks.) As a bonus, if the simple thing performs well, that means you got yourself a solution that will be easier to maintain in production. Good luck, have fun, and may the best algorithm win!
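
And since I just claimed it’s 2–5 lines, here’s a minimal sketch, assuming Python with NumPy and scikit-learn and some made-up toy data (none of it from any real project). The actual line-fitting really is one line:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Toy dataset: one feature with a roughly linear relationship plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1.0, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The simple baseline: put a line through it.
model = LinearRegression().fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))

# From scratch, it's just the least-squares formula on X padded with a
# column of ones for the intercept.
coef, intercept = np.linalg.lstsq(
    np.c_[X_train, np.ones(len(X_train))], y_train, rcond=None
)[0]
print("slope:", coef, "intercept:", intercept)
```

If that baseline’s test error is already good enough for the business problem, you’re done, and you’ve dodged the care and feeding a neural network would demand in production.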

Thanks for reading! How about a YouTube course?

If you had fun here and you’re looking for an applied AI course designed to be fun for beginners and experts alike, here’s one I made for your amusement:

Enjoy the entire course playlist here: bit.ly/machinefriend

Liked the author? Connect with Cassie Kozyrkov

Let’s be friends! You can find me on Twitter, YouTube, Substack, and LinkedIn. Interested in having me speak at your event? Use this form to get in touch.

While we’re on the topic, I loved this flowchart by Karen Hao.
