Here’s a puzzle that’s been bothering me lately.
We all have access to the same AI tools now. ChatGPT, Claude, Gemini—pick your favorite. The interface is simple. Type what you want. Get results. A ten-year-old can use it just as easily as a forty-year-old software engineer.
Same tools. Same language. Same access.
So why does a senior architect who asks AI to help design a building get something that could actually be built, while a student asking the same thing gets something that looks impressive but would collapse in the first windstorm?
Why can a veteran programmer use AI to write production-ready code, while a beginner gets something that technically runs but will cause a security breach within a week?
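To make that second question concrete, here's a minimal sketch in Python (SQLite, with invented table and column names). Ask an AI for "a function that looks up a user" and you can get either version. Both technically run.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Interpolates user input directly into SQL. A username like
    # "' OR '1'='1" returns every row; nastier payloads can dump
    # or destroy the table. This is a classic SQL injection hole.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as a literal
    # value, never as SQL, so the same payload matches nothing.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The veteran asks for the second version by name, or spots the first one on sight. The beginner ships whichever one came back.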
The answer is uncomfortable: You don’t know what you don’t know.
The Invisible Gap
Let me give you a concrete example.
A professional chef asks AI: “Give me a recipe for coq au vin that serves 8, with a make-ahead component for the sauce, and substitutions for guests who don’t drink wine.”
A cooking novice asks AI: “Give me a chicken recipe.”
Both get answers. Both answers are technically correct. But the professional knew what to ask for. They knew coq au vin exists. They knew sauce components can be prepped ahead. They knew wine-free substitutions are a thing. They knew serving size matters.
The novice didn’t know what they didn’t know. So they couldn’t ask for it.
This is the fundamental paradox of AI tools: they amplify expertise rather than replace it.
What You Actually Need to Know
Here’s where it gets interesting for education.
If AI can generate code, write essays, solve equations, and create designs—what’s left for humans to learn?
The knee-jerk answer is “nothing.” Some people genuinely believe we can skip the learning phase now. Just tell AI what you want, get results, move on.
But this misses the entire point.
You need to know enough to:
- Ask the right questions
- Recognize when the answer is wrong
- Know what “good” looks like
- Understand the constraints that matter
- See the problems that need solving in the first place
Let’s break these down.
1. Asking the Right Questions
AI is an incredibly powerful answer machine. But it requires incredibly precise questions.
A student who’s never studied physics can’t ask AI to “optimize the trajectory accounting for air resistance at varying altitudes with a 15% safety margin.” They don’t know those parameters exist. They don’t know those parameters matter.
A professional who’s spent years in the field? They know exactly what to specify because they’ve seen what goes wrong when you don’t.
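For a taste of why those parameters matter, here's a toy sketch in Python. Every constant is illustrative, and multiplying by 0.85 is just one plausible reading of "a 15% safety margin"; none of this comes from a real design.

```python
import math

# Toy 2D projectile model with altitude-dependent air drag.
GRAVITY = 9.81          # m/s^2
SCALE_HEIGHT = 8500.0   # air density decays as exp(-altitude / SCALE_HEIGHT)
DRAG_COEFF = 0.002      # lumped drag constant for this toy object, 1/m

def simulate_range(v0: float, angle_deg: float, with_drag: bool,
                   dt: float = 0.01) -> float:
    """Horizontal distance traveled before returning to the ground."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        if with_drag:
            # Thinner air at altitude means less drag up high.
            rho_ratio = math.exp(-y / SCALE_HEIGHT)
            speed = math.hypot(vx, vy)
            ax = -DRAG_COEFF * rho_ratio * speed * vx
            ay = -GRAVITY - DRAG_COEFF * rho_ratio * speed * vy
        else:
            ax, ay = 0.0, -GRAVITY
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

vacuum = simulate_range(300.0, 45.0, with_drag=False)  # the textbook answer
real = simulate_range(300.0, 45.0, with_drag=True)     # the expert's answer
print(f"vacuum: {vacuum:.0f} m, with drag: {real:.0f} m")
print(f"plan around: {real * 0.85:.0f} m after a 15% margin")
```

The vacuum number is the one a novice's question produces. The expert knows drag exists, knows it varies with altitude, and knows to leave a margin, so their question, and their answer, look completely different.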
2. Recognizing Wrong Answers
AI hallucinates. It makes things up. It presents nonsense with complete confidence.
If you’re an expert, you catch this immediately. Something feels off. You verify. You push back.
If you’re a novice, you have no defense. The wrong answer sounds just as plausible as the right one. You accept it, use it, and suffer the consequences.
3. Knowing What “Good” Looks Like
I’ve seen students use AI to generate code that “works” but is a maintenance nightmare. It runs, sure. But any experienced developer would look at it and wince.
The student has no framework for quality. They don’t know what clean architecture looks like because they’ve never built or maintained a complex system. They can’t judge what AI gives them because they don’t know what they should be getting.
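Here's a made-up but representative pair in Python, not a real student submission. Both versions compute the same thing, and an AI will produce either depending on what you ask for:

```python
# The version a student accepts because it "works": a global cache
# nobody clears, magic numbers, copy-pasted branches, and a name
# that says nothing about what the function does.
cache = {}

def proc(t, c):
    if (t, c) in cache:
        return cache[(t, c)]
    if c == "gold":
        r = t - t * 0.2
    elif c == "silver":
        r = t - t * 0.1
    elif c == "bronze":
        r = t - t * 0.05
    else:
        r = t
    cache[(t, c)] = r
    return r

# The same logic from someone with a framework for quality: named
# concepts, data instead of branches, no hidden state.
DISCOUNT_RATES = {"gold": 0.20, "silver": 0.10, "bronze": 0.05}

def discounted_total(total: float, tier: str) -> float:
    """Apply the customer tier's discount; unknown tiers pay full price."""
    return total * (1.0 - DISCOUNT_RATES.get(tier, 0.0))
```

Both return the same numbers today. Only one can be read, tested, and extended six months from now, and you only know why that matters if you've maintained real code.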
4. Understanding Constraints
Every real-world problem has constraints. Budget. Time. Physics. Human psychology. Regulations. Compatibility.
Experts carry these constraints in their heads automatically. When they ask AI for help, those constraints shape their questions: “Design a solution that works within our existing infrastructure, doesn’t require retraining staff, and can be implemented in under two weeks.”
Students don’t know the constraints exist. So they get solutions that are theoretically perfect and practically impossible.
5. Seeing Problems Worth Solving
This might be the most important one.
Before you can solve a problem, you need to see it. And seeing problems requires experience, domain knowledge, and pattern recognition that only comes from deep engagement with a field.
AI can solve problems you give it. But it can’t tell you which problems matter. It can’t see the inefficiency in a workflow that’s been done wrong for decades. It can’t notice the market gap that represents a billion-dollar opportunity.
Problem identification is a uniquely human skill—and it requires knowing the domain deeply.
The New Meaning of Learning
So what should we be learning in 2026?
Not facts. AI can retrieve facts faster than we can memorize them.
Not procedures. AI can execute procedures faster than we can.
Not basic skills in isolation. AI can perform most basic skills.
Instead, we need to learn:
Mental models. Frameworks for understanding how systems work. Physics intuition. Economic reasoning. Design principles. These help you ask better questions and evaluate answers.
Pattern recognition. The ability to see what’s similar and what’s different across situations. This comes from experience—from seeing enough examples that you develop intuition.
Judgment. The capacity to make decisions when data is incomplete, constraints conflict, and stakes are high. AI can list options. Humans must choose.
Constraint awareness. Understanding what matters in real-world implementation. Budgets. Timelines. Human factors. Politics. The messy reality that theory ignores.
Problem sensing. The ability to notice when something isn’t right, even if you can’t articulate why. This comes from deep familiarity with a domain.
The University Question
This brings us to something Elon Musk and others have been saying: the value of university education is changing.
The old model was simple: universities held knowledge. If you wanted access to that knowledge, you went to university. Professors taught, you listened, you graduated with information in your head that others didn’t have.
That model is dead. All the information is online now. Every lecture. Every textbook. Every research paper. You can learn anything a university teaches without ever setting foot on campus.
So why go?
The honest answer is: for some people, maybe you shouldn’t.
But here’s what universities could still offer—if they adapt:
Structured struggle. Learning happens when you’re challenged at the edge of your abilities. Good educational environments create that challenge systematically. AI alone won’t push you. A well-designed curriculum will.
Expert feedback. Knowing you’re wrong is hard when you don’t know what right looks like. Experts can see your blind spots and point them out. AI will tell you you’re great even when you’re not.
Problem exposure. Universities can put students in front of real, messy, undefined problems. Capstone projects. Research. Internships. This is where you develop the judgment AI can’t give you.
Community and accountability. Learning is hard. Having others on the same journey—and expectations to meet—keeps you going when motivation fails.
The question is whether universities actually deliver these things or just deliver lectures that could be YouTube videos.
What This Means for Your Kids
If you’re a parent listening to this, here’s the practical takeaway:
Don’t skip the fundamentals. Your child still needs to understand math, science, writing, and how the world works. Not to perform calculations—AI does that—but to ask the right questions and catch wrong answers.
Prioritize deep over wide. Surface knowledge in many areas is worthless now. AI has surface knowledge of everything. What matters is going deep enough in something to develop real expertise. Judgment. Intuition. Pattern recognition.
Focus on problem-solving, not solution-following. The ability to figure things out when there’s no clear path—that’s the skill. Not following instructions. Not memorizing procedures. But wrestling with ambiguity and making progress anyway.
Build things. Knowledge without application is just trivia. The student who’s actually built something—a robot, a business, a piece of software—knows things the student who’s only studied can never know. They’ve hit the constraints. They’ve made the mistakes. They’ve developed judgment.
Embrace productive struggle. Learning feels hard because it is hard. That’s the point. If your child is never frustrated, never stuck, never struggling—they’re not learning. They’re just collecting information.
The Real Competition
Here’s the final thought.
The competition in the AI era isn’t between humans who use AI and humans who don’t. Everyone will use AI. That’s table stakes.
The competition is between humans who have deep expertise—who know what to ask for, what to look for, and what matters—and humans who have shallow knowledge amplified by AI.
The shallow path feels easier. Just ask AI, get answers, move on.
But the shallow path leads nowhere. You end up dependent on a tool you can’t evaluate, producing work you can’t judge, solving problems you can’t even see.
The deep path is harder. It requires years of struggle, building mental models, developing judgment. It feels inefficient when AI can “just do it for you.”
But the deep path leads to mastery. And in the AI era, mastery matters more than ever—because AI amplifies whatever you bring to it.
Bring expertise, get enhanced expertise.
Bring nothing, get impressive-looking nothing.
You don’t know what you don’t know. That’s why learning still matters. That’s why depth still matters. That’s why the struggle still matters.
The tools have changed. The fundamentals haven’t.
What do you think? Has AI changed what your kids should be learning? I’d love to hear your perspective.