Is this AI? See if you can spot the technology in your everyday life.

Artificial intelligence is suddenly everywhere. Fueled by huge technological advances in recent years and gobs of venture capital money, AI has become one of the hottest corporate buzzwords.

Roughly 1 in 7 public companies mentioned “artificial intelligence” in their annual filings last year, according to a Washington Post analysis. But the term is fuzzy.

“AI is purposefully ill-defined from a marketing perspective,” said Alex Hanna, director of research at the Distributed AI Research Institute. It “has been composed of wishful thinking and hype from the beginning.”

So what is AI, really? To cut through the hype, we asked 16 experts to judge 10 everyday technologies. Try to spot the AI for yourself and see how you compare to readers and the experts.

Chatbots like ChatGPT

Auto-correct on mobile phones

Tap-to-pay credit cards

Google Translate

Personalized ads

Computer opponents in video games

GPS directions

Facial recognition software, like Apple Face ID

Microsoft’s Clippy

Virtual voice assistants, like Alexa or Siri

Even among experts, what counts as artificial intelligence is fuzzy.

“The term ‘AI’ has become so broadly used in practice that … it’s almost always better to use a more specific term,” said Nicholas Vincent, an assistant professor at Simon Fraser University.

Nothing was unanimously deemed AI by experts, and few products were definitely declared not AI. Most landed somewhere in the middle.

What readers and experts consider to be AI

Some experts don’t think anything we use today is AI. Current technologies are “capable of specific tasks they are trained for but dysfunctional at unforeseen events,” said Pruthuvi Maheshakya Wijewardena, a data and applied scientist at Microsoft, who identified no product as definitely AI.

The “capabilities of an AI is a spectrum, and we are still at the lower end,” said Maheshakya Wijewardena.

For Emily M. Bender, a professor of linguistics at the University of Washington, calling anything AI is “a way to dodge accountability” for its creators.

What artificial intelligence generates, whether it’s auto-correct, chatbots or photos, is trained from large amounts of data, often pulled off the internet. When that data is flawed, inaccurate or offensive, the results can reflect — and even amplify — those flaws.

The term AI makes “the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies,” said Bender.

About this story

Emma Kumer contributed to this story.

The experts surveyed were Emily M. Bender, professor, University of Washington; Matthew Carrigan, machine-learning engineer, Hugging Face; Yali Du, lecturer, King’s College London; Hany Farid, professor, UC Berkeley; Florent Gbelidji, machine-learning engineer, Hugging Face; Alex Hanna, director of research, Distributed AI Research Institute; Nathan Lambert, research scientist, Allen Institute for AI; Pablo Montalvo, machine-learning engineer, Hugging Face; Alvaro Moran, machine-learning engineer, Hugging Face; Chinasa T. Okolo, fellow, Center for Technology Innovation at the Brookings Institution; Giada Pistilli, principal ethicist, Hugging Face; Daniela Rus, director, MIT Computer Science & Artificial Intelligence Laboratory; Mahesh Sathiamoorthy, formerly of Google DeepMind; Luca Soldaini, senior applied research scientist, Allen Institute for AI; Nicholas Vincent, assistant professor, Simon Fraser University; and Pruthuvi Maheshakya Wijewardena, data and applied scientist, Microsoft.
