Tony Stark’s Iron Man suit can fly, lift cars, and fire repulsor beams. It is, by any measure, an extraordinary piece of technology. But the suit doesn’t make Tony Stark. Tony Stark makes the suit useful. He arrives with decades of engineering obsession, a restless intelligence, and, crucially, a problem he can’t put down. The suit is the amplifier. The prior knowledge, the hunger, the direction: that’s the signal.
Now imagine the suit is gifted to everyone. What happens? For some, it transforms capability overnight. The person with an unfinished project, a genuine question, a problem that won’t leave them alone. They put the suit on and fly. For others, the suit sits in the garage. Not because it’s defective. Because the wearer has no destination. The suit can take you anywhere, but it can’t tell you where to go.
This is the thing the current AI conversation keeps skating past. Not whether the tool works (it does). But whether the people holding it arrive with something worth amplifying.
The Prior Is the Signal
Let’s put a name on this. The prior, the accumulated understanding of a domain, a problem, a craft, is what makes any powerful tool useful. Without a prior, the tool is potential energy with nowhere to discharge. With one, the tool becomes transformative.
And this isn’t new. Every great amplifier of human capability has worked the same way. Writing didn’t create ideas. It amplified the thinking of people who already had something to say. The person with a prior worth recording became influential across centuries. The person without one just kept better grocery lists.
Steam didn’t create industrialists. It amplified the capabilities of people who already understood manufacturing, logistics, and markets. The factory owner who knew what to build and for whom became radically more productive. The one who didn’t went bankrupt faster.
Henry Adams understood this when he stood before the giant electrical dynamos at the 1900 Paris Exposition. He felt their power immediately: “silent and infinite,” a new kind of force that commanded something close to worship. But Adams also saw what most visitors missed. The dynamo was all force, no center. It could generate enormous energy, but it couldn’t tell you what the energy was for. He contrasted it with the Virgin of Chartres, the symbol that had organized an entire civilization’s creative energy into cathedrals, art, and shared meaning. “All the steam in the world,” Adams wrote, “could not, like the Virgin, build Chartres.” The dynamo needed something outside itself to give it direction. The prior was the point, even in 1900.
AI is the latest and most general-purpose of these amplifiers, arguably a general-purpose technology applied to thinking itself. But amplifiers have a property that matters here: they amplify whatever signal you feed them. Strong signal, strong output. No signal, no output. Noise in, noise out.
This plays out at three scales: individual, civilizational, and organizational. And the pattern is the same at each one.
Tourists in Their Own Lives
Iain M. Banks ran the thought experiment at civilizational scale. His Culture novels describe a post-scarcity society: unlimited energy, unlimited material wealth, AI “Minds” that handle all the hard cognitive and administrative work. Humans are free to do whatever they want.
And what do most of them do? Hobbies. Relationships. Mild hedonism. Body modification out of boredom. They are cared for, but not needed. They are, as Banks honestly portrays them, tourists in their own lives.
The ones who thrive are the people in Contact and Special Circumstances, the ones who have imported a reason to care by engaging with civilizations where things still matter, where the stakes are real. They chose a prior. They committed to something that required them.
The Greeks had a word for the underlying condition Banks describes: acedia. Not depression, but the soul’s failure to find anything worth wanting. Freedom without content is not liberation. It’s elegant paralysis. Necessity at least tells you what to do next. Remove necessity, and you’re thrown back on the hardest question: what is actually worth attending to?
Here’s the link to now. We are entering a partial post-scarcity of cognitive labor. Many intellectual tasks that used to require years of training can now be approximated by a tool. The question is not whether the tool works. The question is whether the humans holding it have cultivated desires worth amplifying.
“I Don’t Get It”
There’s a scene in the 1988 film Big where Josh (Tom Hanks, playing a twelve-year-old in an adult’s body) sits in a toy company boardroom watching executives nod along to a product presentation. He looks at the product, and he says, simply: “I don’t get it.”
The room freezes. Not because the criticism is sophisticated, but because it’s honest. The child’s directness cuts through the performance of understanding.
A lot of us, I think, keep having that reaction to the way organizations talk about deploying AI. “We’re going to use AI to run our legal department.” “We’re building an AI-powered sales team.” We don’t get it. Not because the technology can’t do impressive things (it can). But because the sentence is missing a subject. Who is using AI to run the legal department? Who reviews the output? Who catches the hallucination that looks plausible but would expose the company to liability? Who holds accountability when something goes wrong?
Here’s an analogy, a familiar one in the augmentation-versus-automation conversation: chainsaws replaced handsaws, not loggers. The chainsaw made the logger radically more productive, but only because the logger community already understood trees, terrain, timing, and safety. Nobody in 1950 said “we’re going to hire some chainsaws.” They hired loggers who happened to be holding chainsaws. The person holding the chainsaw works for the company. The chainsaw does not.
If you’ve seen the film of Denis Johnson’s Train Dreams, you get a glimpse of this transition: the younger loggers holding chainsaws pushing past the old men, the mechanized world arriving whether anyone was ready or not. The film doesn’t dwell on what survived the transition. But we know that what survived was the principal function, the judgment about what to cut, where, when, and why. The tools changed. The commissioning, the evaluation, the accountability: those remained human.
Which raises an interesting question about the organizational gap. Where is the layer between “we bought an AI tool” and “the AI is doing useful work”? Right now, that layer is informal and unrecognized. It’s the curious employee who figured out prompting. The team lead who spent weekends learning the tool’s actual capabilities. It is not architected, budgeted, staffed, or even named.
The commissioning problem sits at the heart of this. AI can execute and it can make stuff. But who frames the question well enough for the tool to produce something useful? Who evaluates whether the output is good, adequate, or dangerously wrong? This is not a technical problem. It is a judgment problem. And judgment is exactly the thing that cannot be automated, because it requires understanding what matters in a given context. That understanding is a human capacity built over years of experience, domain knowledge, and accountability.
The Common Thread
Let’s look at the pattern across all three scales.
Scale: Individual / Tool: The Iron Man Suit / Prior: Engineering obsession, a real problem
Scale: Civilizational / Tool: Post-scarcity abundance / Prior: A cultivated reason to care
Scale: Organizational / Tool: AI capabilities / Prior: Domain judgment, commissioning skill
The common thread: AI amplifies the prior. No prior, no amplification. Just expensive potential sitting unused.
What is the prior, precisely? It is not just “a goal” or “a use case.” It is the accumulated understanding of a domain, a problem, a craft that allows someone to recognize what the tool should be doing and whether it’s doing it well. It’s the thing that takes years to build and cannot be downloaded. It is, in the language of the jagged frontier, the tacit knowledge that determines where AI amplifies you and where it just generates plausible noise.
Real World Implications
For businesses, the bottleneck to AI value is not the technology. It is the supply of people with priors worth amplifying. Companies that invest in developing human judgment, domain expertise, and the capacity to commission and evaluate AI work will outperform those that invest only in the tools themselves. The distance from Silicon Valley isn’t measured in miles. It’s measured in the depth of organizational priors.
For individuals, the implication is more personal. If you don’t arrive with a genuine hunger, an unfinished thought, a problem that won’t leave you alone, the most powerful tools in history will not supply one for you. The suit will sit in the garage.
There’s a musical version of this. An infinite keyboard with every possible harmony doesn’t help the player who hasn’t committed to a phrase. You have to start playing, commit to something real and particular, before the abundance becomes generative rather than paralyzing. The constraint is not the enemy of creativity. It’s the precondition.
The question “what is AI for?” is, in the end, a question about what you are for. The tool doesn’t supply the desire. The suit doesn’t make the hero. And a civilization of perfect tools and empty purposes is just the Culture without Contact: comfortable, capable, and adrift.
Start with the prior. The tool will follow.