
The New Version of Illiteracy Isn’t About Reading or Writing—It’s About Treating AI Like an Oracle, Not a Tool

The problem isn’t a lack of coding skills. It’s using AI models like Google or a calculator instead of a thinking partner.


Javier Lacort

Senior Writer

I write long-form content at Xataka about the intersection between technology, business and society. I also host the daily Spanish podcast Loop infinito (Infinite Loop), where we analyze Apple news and put it into perspective.

Adapted by: Karen Alfaro

Writer

Communications professional with a decade of experience as a copywriter, proofreader, and editor. As a travel and science journalist, I've collaborated with several print and digital outlets around the world. I'm passionate about culture, music, food, history, and innovative technologies.


A century ago, illiteracy meant not knowing how to read or write. In the developed world, that problem still exists—but a different kind of illiteracy is emerging. It’s subtler, harder to detect, filled with gray areas, and potentially just as important: not knowing how to interact with an AI model.

This new literacy isn’t about knowing how to code or understanding machine learning algorithms. It’s more fundamental: knowing how to ask good questions, how to read the answers, and—most of all—how to distrust. Not in a paranoid way, but with discernment. It’s about knowing when you’re using AI and when AI is using you.

It’s the difference between being a passive user—someone who swallows without chewing—and someone who uses AI as a lever for thought, an extension of their analytical ability. Because used well, it can be just that: a cognitive multiplier.

There’s a big difference.

  • Some people use these systems like an enhanced Google or a calculator on steroids. They type a prompt, copy the answer, and move on.
  • Others are learning to converse with AI models, pushing boundaries and generating ideas that neither the person nor the machine could’ve created alone.

The tool isn’t the issue. What matters is how you use it—and that requires AI literacy.

It goes beyond ChatGPT. Systems like Deep Research are beginning to automate tasks that were once the foundation of many professions. Reports, summaries, preliminary analyses—this was the kind of work that helped people learn their field from the inside out. If AI takes that over, how do you learn to think like an expert?

This is the black hole many companies are facing. If they automate training tasks, how will they train new employees? Without a fast, effective redesign of how knowledge is passed on, we risk building generations with no real foundation—people with degrees but no standards.

Worse, this new illiteracy could become hereditary. Just as parents who don’t read often don’t raise readers, those who don’t use these tools well likely won’t teach others to use them well either. Learning will be left to the school—or the algorithm.

The paradox is that all this is easy to hide. A person can produce a flawless report, a slick presentation, or a solid-looking analysis without any real understanding. All they need is the ability to write the right prompt.

The danger isn’t just that mediocrity will rise. It’s that no one will notice. The real problem isn’t that AI will think for us. It’s that we may gradually stop thinking for ourselves—and not realize until the atrophy sets in.

That’s why the digital literacy of the future won’t be just technical. It’ll be ethical, critical, and cognitive—like knowing when to ask AI to think for you.

And more importantly, knowing when to say no.

Image | Xataka On with ChatGPT

Related | ChatGPT Used to Be a Tool. Now That It Can Remember Your Conversations, It’ll Be Something More: A Relationship
