A few days ago, I was talking with a friend — another parent — about the future of our children’s education. The kind of conversation that starts casually and then goes somewhere uncomfortable. His concern was simple and honest: what should our kids study, if machines are going to do everything?
It is a reasonable fear. But it rests on the wrong question.
The question is not whether machines will do everything. They will not. The question is what kind of human capability will still matter when information is no longer scarce, when answers are instant, and when execution can be delegated to an agent that never sleeps.
I found myself thinking about Umberto Eco. Not about semiotics or novels, but about three ideas he expressed at different times, which suddenly seem to converge into one.
Teaching choice, not information
Eco once observed that the task of teaching is no longer to transmit information. Information is everywhere. The task of teaching is to help people learn how to choose. How to evaluate. How to decide what matters and what does not.
This is an idea that was ahead of its time when Eco expressed it, and it has become quietly urgent now. We are not entering a world of scarce knowledge. We are entering a world of overwhelming abundance, where the ability to orient yourself — to judge, to filter, to prioritize — is more valuable than the ability to accumulate.
When my friend worried about his children’s future, this is the part he was missing. The risk is not that machines will replace what his kids know. The risk is that nobody teaches them how to choose what is worth knowing in the first place.
The library as a living thing
Eco also spoke about libraries. He argued that a library is not a passive archive — a warehouse of books waiting to be retrieved. A good library is a living structure. It organizes knowledge in ways that create connections, suggest paths, and make it possible for someone to find not only what they were looking for, but what they did not yet know they needed.
This distinction between a dead archive and a living structure is one of the most productive ideas I know, because it applies far beyond libraries.
It applies to software.
Most software is built as an archive. It stores data, retrieves it on demand, and presents it when asked. It is functional, sometimes efficient, but fundamentally passive. The user must already know what they want. The system merely delivers it.
But the best software — the kind that actually changes how people work and think — behaves more like Eco’s library. It organizes information in ways that reveal structure. It makes relationships visible. It helps people navigate complexity not by simplifying it away, but by making it intelligible. It does not just answer questions. It helps you understand which questions to ask.
Language as the architecture of what does not yet exist
The third idea is perhaps the deepest. Eco argued that the true power of human language does not lie in its ability to describe what exists. It lies in its ability to describe what does not yet exist. Language is not merely a recording instrument. It is a tool of projection, of imagination, of possibility.
This is where software and language meet in a way that most technical discourse completely ignores.
Every piece of software begins as language before it becomes code. It begins as intention expressed in words: what should happen, under what conditions, for what purpose. The architecture of a system is a linguistic act before it is a technical one. You are not merely describing a machine. You are articulating a possibility — something that does not yet exist, but that you are bringing into being through structure, logic, and decision.
Software, at its best, is one of the clearest examples of what Eco was describing. It is language used not to record reality, but to construct it.
The shift that is already here
These three ideas — teaching judgment instead of information, building living structures instead of passive archives, using language to project possibility instead of merely recording fact — are not abstract philosophy. They describe a transformation that is already happening in the way we write software.
Consider what is taking place right now with agentic programming.
For decades, programming meant writing explicit instructions. You told the machine exactly what to do, step by step, in precise and exhaustive detail. The programmer’s value was in knowing the steps, remembering the syntax, mastering the sequences. It was, in Eco’s terms, a practice of transmission: you transferred your knowledge of the procedure into the machine, line by line.
Agentic programming is fundamentally different. You are no longer writing step-by-step instructions. You are defining intentions, constraints, and criteria. You are telling an agent what you want to achieve, what boundaries it must respect, and how to evaluate whether the result is acceptable. The agent figures out the steps on its own.
This changes what it means to be a programmer.
In the old model, the programmer’s core skill was knowing how to do things — how to sort, how to query, how to parse, how to render. In the agentic model, the programmer’s core skill is knowing what to ask for and how to judge the result. It is, almost exactly, the shift Eco described in teaching: from transmitting procedures to forming judgment.
And the systems we build this way start to behave less like archives and more like Eco’s living libraries. An agentic system does not wait for precise queries. It interprets intentions, explores possibilities, and surfaces connections that the user may not have anticipated. It is not a passive retrieval mechanism. It is an active structure that helps people think. This is the same principle at work in building a self-improving image recognition pipeline: the system does not simply execute a fixed procedure — it operates inside an evidence-based loop where each iteration refines the decision logic based on structured feedback.
The language we use to instruct these agents — the prompts, the constraints, the architectural decisions — is language in Eco’s fullest sense. It does not describe an existing procedure. It projects a possibility. It defines something that does not yet exist and creates the conditions for it to come into being. Structured requirements documents like PRD.json files are language used this way — they articulate what should come into being, with enough clarity that an autonomous agent can navigate ambiguity without inventing features that were never intended.
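The text does not specify what a PRD.json actually contains, so the following fragment is purely illustrative, with every field name an assumption. It shows the general idea: intentions, boundaries, and acceptance criteria written down with enough precision that an agent has something to navigate by.

```json
{
  "feature": "export-report",
  "intention": "Let a signed-in user download their monthly report as PDF",
  "constraints": [
    "No new external dependencies",
    "Must respect existing authorization checks"
  ],
  "out_of_scope": [
    "Scheduling recurring exports",
    "Email delivery"
  ],
  "acceptance_criteria": [
    "Download completes for reports up to 50 pages",
    "Unauthorized users receive a 403 response"
  ]
}
```

The `out_of_scope` list is doing exactly the work the paragraph describes: it marks the boundary beyond which the agent must not invent features that were never intended.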
What we should actually be worried about
So when my friend asks what his children should study, the answer is not a specific subject or technical skill. Programming languages will change. Frameworks will be replaced. The syntax that feels essential today will be irrelevant in five years.
What will not become irrelevant is the ability to choose well. To look at a problem and understand what matters. To organize complexity into something navigable. To use language — natural language, not just code — to articulate what does not yet exist but should.
These are not soft skills. They are the hardest skills there are. And they are exactly the skills that the current transformation in software is making more valuable, not less. The same insight applies to post-commit verification of AI-written code: the most consequential work is not generating the code but judging whether it actually does what it should under real conditions. Human judgment, applied with discipline and domain awareness, remains the irreplaceable element.
Eco understood this decades ago, in a world without large language models or autonomous agents. He understood that the challenge was never the quantity of information available. The challenge was always the quality of human judgment applied to it.
The age of AI does not change that insight. It proves it.