It Depends What You Mean by the Word Agent
What Do You Mean by “Agent”?
Imagine you are on a Teams call discussing AI strategy. Someone says, “We need to invest in Agents.” And someone else says, “OK…Before we spend any money, what exactly do you mean by ‘agent’?”
This could be happening in Teams/Zoom/Slack and even IRL meeting rooms everywhere right now. And it turns out, it’s a game of ping pong philosophers have been playing for ages.
The Philosophers Enter the Meeting
Five philosophers somehow ended up at a company in 2026 and joined our Teams/Zoom/Slack conversation about AI strategy. Here’s roughly how it could go:
Socrates would do what Socrates always does: ask his signature question, “What is it?” (ti esti). “You say we should adopt agents. But what is an agent? Can you define the essential quality that makes something an agent rather than merely a tool?” Aah, classic Socrates. You kook.
Bertrand Russell would agree. Russell spent his career insisting that vague thinking is dangerous. It’s not just intellectually sloppy, but morally risky. “When you allow yourself to think inexactly,” he wrote, “your prejudices, your bias, your self-interest come in ways you don’t notice.” He’d want crisp definitions before anyone committed budget. Seems fair.
Wittgenstein (the later version) would push back. He’d point out that some of our most useful concepts don’t have neat definitions at all. His famous example: What is a game? Board games, ball games, card games, video games, the Olympic games…what single feature do they all share? Competition? Solitaire doesn’t have it. Rules? Children’s make-believe barely has them. There is no single feature. Games are connected by “family resemblances” of overlapping similarities, not shared essences. And yet we use the word “game” perfectly well every day. So we can safely use the word ‘Agent’ without clutching our pearls and insisting it comes with a definition. Well that makes sense too.
Heidegger would be even more radical. He’d argue that demanding too much precision too early actually falsifies the phenomenon you’re trying to understand. Some things need to be pointed at before they can be analyzed. If you insist on defining them first, you risk substituting a dead abstraction for a living reality. Goodness, well thanks for bringing us all down man. But yeah, I see what you are saying.
So we have a deadlock. Half the room wants precision before anyone says the word “Agent” again. The other half says we can discuss Agents without defining them, and thinks too much precision at this point will waste time and kill the conversation before it starts.
Peirce Suggests Compromise
Enter Charles Sanders Peirce, the American pragmatist, with a useful resolution. He suggests the meaning of a concept just is the practical consequences that follow from it. What does “heavy” mean? It means: if you let go of this thing, it will fall.
If a concept makes a difference you can observe, it has meaning. If it doesn’t, it’s what Peirce called “senseless jargon.”
Applied to our meeting: “agent” doesn’t need a perfect philosophical definition. It needs to cash out in observable differences. An agent can take actions without you in the loop. An agent has some goal-directedness. An agent can make decisions about how to accomplish something, not just whether to do it. We can gesture at some things that are ‘Agent-ish’ and that’s enough to be getting on with.
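To make those observable differences concrete, here is a minimal sketch in Python. Everything in it is made up for illustration: call_llm and search_docs are stand-in functions, not any real framework’s API. The contrast it shows is the Peircean one: in a fixed workflow a human decides every step, while in something “Agent-ish” the model is handed a goal and decides how to pursue it.

```python
# Purely illustrative stubs; call_llm and search_docs are made-up stand-ins,
# not any real framework's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to some language model."""
    return f"<model output for: {prompt!r}>"

def search_docs(query: str) -> str:
    """Stand-in for an internal tool the model may choose to use."""
    return f"<top search result for: {query!r}>"

# Not Agent-ish: a fixed workflow. The programmer decides every step;
# the model only fills in text, and nothing happens without you in the loop.
def fixed_workflow(question: str) -> str:
    context = search_docs(question)  # step chosen in advance, by a human
    return call_llm(f"Answer the question using {context}: {question}")

# Agent-ish: the model is handed a goal and decides *how* to pursue it,
# looping, taking actions, and choosing when it is done.
def agent_ish(goal: str, max_steps: int = 5) -> str:
    notes = ""
    for _ in range(max_steps):
        decision = call_llm(f"Goal: {goal}\nNotes so far: {notes}\nNext action?")
        if "DONE" in decision:          # the model, not the programmer, decides to stop
            break
        notes += search_docs(decision)  # the model decides what to look up next
    return call_llm(f"Goal: {goal}\nNotes: {notes}\nWrite the final answer.")
```

The difference you can observe, in Peirce’s sense, is in the second function: the loop, the tool calls, and the stopping condition all belong to the model rather than to the person who wrote the code.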
Two Legitimate Approaches
This gives us a framework. There are two legitimate approaches to concepts, and they serve different purposes:
Approach 1: Concepts as pointers. Here, the concept is a gesture toward something real but complex. Demanding precision too early forecloses investigation. You need to point at the thing before you can analyze it. This is how we explore new territory.
Approach 2: Concepts as tools for joint reasoning. Here, precision matters because ambiguity leads to people talking past each other. The “it depends what you mean” move is hygienic: it prevents those pseudo-agreements where everyone nods but imagines something different.
Both are legitimate. The question is: which context are you in?
The Practical Rule
Here’s the pattern that emerged from our philosophical meeting:
When exploring, use Approach 1. If you’re in discovery mode, trying to understand what AI can do and where the opportunities might be, then demanding definitions too early is counterproductive. You’ll foreclose options before you’ve even seen them. Point at ‘Agent-ish’ things. Play with the idea. Let the concept remain fuzzy while you build intuition.
When spending money, use Approach 2. Once you’re making decisions with real resources attached (budget, headcount, opportunity cost) you need precision. “We’re going to invest in agents” isn’t a strategy. “We’re going to automate the first-pass review of RFP responses using an LLM that can pull data from three internal systems and draft responses for human review” is a strategy. You can measure it. You can know if it worked.
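For contrast, here is roughly what that precise version could look like as a sketch. The system names (crm, pricing, past_bids) and the acceptance-rate metric are hypothetical stand-ins for whatever your actual systems and success criteria are; the point is that every step is observable and the outcome is measurable.

```python
from dataclasses import dataclass

@dataclass
class RfpDraft:
    rfp_id: str
    draft_text: str
    approved_by_human: bool = False  # set after the human review step

def pull_context(rfp_id: str) -> dict:
    """Stand-ins for the three internal systems the draft would draw on."""
    return {
        "crm": f"<customer history for {rfp_id}>",
        "pricing": f"<current price book for {rfp_id}>",
        "past_bids": f"<similar past responses for {rfp_id}>",
    }

def draft_first_pass(rfp_id: str) -> RfpDraft:
    """An LLM drafts a first-pass response; a human still reviews and approves."""
    context = pull_context(rfp_id)
    prompt = f"Draft a first-pass RFP response using: {context}"
    return RfpDraft(rfp_id=rfp_id, draft_text=f"<model draft for: {prompt!r}>")

def acceptance_rate(drafts: list[RfpDraft]) -> float:
    """One possible 'did it work?' measure: share of drafts humans accepted."""
    if not drafts:
        return 0.0
    return sum(d.approved_by_human for d in drafts) / len(drafts)
```

Whether or not you’d call this an “agent” barely matters at this point; you can say exactly what it does, what it touches, and how you’ll judge it.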
This maps to something I’ve written about before: the difference between means and ends. AI is a means. Your business outcomes are ends. At the exploratory stage, you can afford to be vague about means while you figure out which ones exist. At the commitment stage, vagueness about means is just a recipe for wasted money and disappointed stakeholders.
The Spherical Cow Reappears
If you’ve read earlier posts, you’ll recognize this pattern. A spherical cow (that is, a simplified model) is tremendously useful for building intuition and exploring possibilities. But you don’t write engineering specs for spherical cows. At some point you need to account for the actual shape of actual cows.
“Agent” is a spherical cow. It’s useful for pointing at a cluster of capabilities that differ from simple prompts or fixed workflows. But when the purchase order needs to be signed, you’d better know what you’re actually building.
What Should You Do?
Next time you’re in a meeting and someone says “agent” (or “transformation” or “platform” or any other term that makes half the room twitch), try this:
- Ask yourself: Are we exploring or committing?
- If exploring, let the term remain fuzzy. Use it to point. Build shared intuition.
- If committing, insist on precision. What exactly will this do? How will we know if it worked?
The philosophers, it turns out, were all right. They were just answering different questions.
Related posts:
- Spherical Cows — On the value of simplified models
- What Is Our AI Strategy? — Why AI is a means, not an end
- Prompt, Workflow, Agent — A practical spectrum for AI complexity