Companies Are Things Made of Language That Do Things

There’s a Dutch artist named Theo Jansen who builds enormous beautiful skeletal structures out of plastic tubes and releases them on beaches. He calls them Strandbeests. They look like the ribcages of impossible animals. And when the wind blows, they walk.

No motors or electronics. Just thoughtful structure meeting environment, and something that looks remarkably like life emerges.

I keep thinking about those walking skeletons when I think about what we’re building with language models.

The Language-In, Language-Out Machine

At the most reductive level, an LLM takes words in and produces words out. The mechanism in between—tokenization, attention, next-token prediction—is fascinating, but for our purposes today, treat it as a black box. Language goes in. Language comes out. The language that comes out is shaped by the language that went in.
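To make the black-box framing concrete, here is a minimal sketch in Python. The LanguageModel type and the toy_model function are stand-ins invented for the example, not any vendor’s API; the point is only that the entire interface is a function from text to text, so the single lever you hold is the text you put in.

```python
from typing import Callable

# Stand-in for any language model: a function from text to text.
LanguageModel = Callable[[str], str]

def ask(model: LanguageModel, context: str, question: str) -> str:
    """The whole interface: the output is shaped only by the text that goes in."""
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return model(prompt)

# A toy "model" so the sketch runs without any external service.
def toy_model(prompt: str) -> str:
    return "(a real model would answer here, shaped by: " + prompt.splitlines()[0] + ")"

print(ask(toy_model, "You are a terse assistant.", "What goes in and what comes out?"))
```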

Here’s what’s interesting: we’ve had things that do things with language for a very long time, and we never give that idea a second thought.

Algorithms, in their original form, were written instructions describing step-by-step procedures. The word itself comes from the 9th-century Persian mathematician al-Khwarizmi. Before there was code, there were recipes for calculation written in natural language.

Procedures, workflows, operating manuals—these are all natural language structures that coordinate behavior. They’re not executed by computers. They’re executed by humans who read them, interpret them, and act.

Which raises a question: if natural language structures have always coordinated human behavior, what changes when we have machines that can interpret natural language reliably?

Language Organisms

Here’s a concept worth playing with, like a model Strandbeest on your desk: a language organism is a persistent, bounded network of natural language structures that shapes behavior through interpretation and interacts with its environment through its interpreters.
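You can even sketch the definition as a data structure. This is an illustration rather than a formal model, and every name in it is invented for the sketch: the documents are the persistent part, the interpreters are the replaceable part, and behavior only occurs when an interpreter reads a document and acts.

```python
from dataclasses import dataclass, field
from typing import Callable

# An interpreter is anything that turns a text structure into action:
# a judge reading a statute, an employee reading a policy, an LLM reading a procedure.
Interpreter = Callable[[str], str]

@dataclass
class LanguageOrganism:
    name: str
    documents: dict[str, str]                                      # the persistent, bounded network of texts
    interpreters: list[Interpreter] = field(default_factory=list)  # these come and go

    def enact(self, document_name: str) -> list[str]:
        """Behavior emerges only when interpreters read a document and act on it."""
        text = self.documents[document_name]
        return [interpret(text) for interpret in self.interpreters]

# The interpreters can be swapped out while the documents, and the organism, persist.
parish = LanguageOrganism(
    name="toy example",
    documents={"procedure": "Greet visitors, record requests, escalate doctrinal questions."},
    interpreters=[lambda text: f"acts according to: {text}"],
)
print(parish.enact("procedure"))
```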

The Roman Catholic Church is a language organism. Canon law, papal encyclicals, the Catechism, liturgical texts—a vast network of documents that has coordinated the behavior of millions of people across centuries. The documents persist. The interpreters (priests, theologians, believers) come and go.

English Common Law is a language organism. Statutes, precedents, procedures—structures that shape behavior through interpretation by judges, lawyers, and citizens.

The United States of America exists primarily as a set of documents. The Constitution, statutes, case law, regulations. The country is, in a meaningful sense, a language organism with 330 million human interpreters.

A company is also a kind of language organism. Articles of incorporation, bylaws, contracts, job descriptions, policy manuals, org charts. The company persists even as employees cycle through. What endures is the structure, which is (hopefully) written down somewhere.

Human organizations have always been language organisms made of structures of natural language interpreted by humans.

The New Interpreter

LLMs change the substrate. Previously, natural language structures required human interpretation to become action. Now they can be interpreted by machines—probabilistically, imperfectly, but increasingly reliably.

When you work with Claude, you’re not querying a database or running a program in the traditional sense. You’re engaging with a structure made of natural language (system prompts, training patterns, RLHF-shaped behaviors) that has been shaped to behave in certain ways.
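A hedged illustration of that idea, reusing the same text-in, text-out framing (the system text and function below are invented for the example): the “structure” is nothing more than a block of natural language prepended to every exchange, and editing that text is how you change the behavior.

```python
from typing import Callable

# The "structure" is just text: a written role, its norms, and its procedures.
SYSTEM_TEXT = """You are the support desk for a one-person company.
Follow the refund procedure in the handbook. Escalate anything over 200 dollars to the owner."""

def converse(model: Callable[[str], str], user_message: str) -> str:
    # The behavior that comes back is shaped entirely by the language prepended here.
    return model(f"{SYSTEM_TEXT}\n\nUser: {user_message}\nAssistant:")
```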

This is actually cleaner than most human organizations, where linguistic structures (policies, procedures) are entangled with tacit knowledge, relationships, and physical resources. An AI assistant is closer to a pure language organism—almost entirely specified in text, almost entirely enacted through text.

The Billion-Dollar Solopreneur

So. Tools built on LLMs are a kind of language organism. Companies are a kind of language organism. Maybe, then, we can build a company out of language structures interpreted by machines. Can the billion-dollar solopreneur actually happen?

The answer is: partly yes, partly not quite (yet).

What exists already: ChatDev simulates a software company with CEO, CTO, Programmer, and Tester agents coordinating through natural language. Frameworks like CrewAI, LangGraph, and AutoGen provide orchestration. People are running real businesses this way at small scale.

What’s genuinely possible today: A human + language organism hybrid. The AI handles routine, well-specified tasks—research, content generation, data analysis, scheduling, drafting communications. The human handles exceptions, relationships, strategy, and high-stakes decisions. State accumulates in files. Procedures refine based on feedback.
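Here is a minimal sketch of that hybrid, with every name and the routing rule invented for illustration (it is not any particular framework’s API): routine, well-specified tasks go to the model, everything else escalates to the human, and each outcome is appended to a plain file so that state accumulates between runs.

```python
import json
from pathlib import Path
from typing import Callable

STATE_FILE = Path("state.jsonl")  # assumption: state accumulates as one JSON line per task
ROUTINE = {"research", "draft_email", "summarize", "schedule"}  # well-specified, low-stakes

def handle_task(kind: str, payload: str, model: Callable[[str], str]) -> str:
    """Route routine work to the model; exceptions and high-stakes calls go to the human."""
    if kind in ROUTINE:
        result = model(f"Task: {kind}\nInput: {payload}\nFollow the written procedure.")
        handled_by = "model"
    else:
        result = input(f"[needs a human] {kind}: {payload}\nYour decision: ")
        handled_by = "human"

    # Append the outcome so later runs, and later procedure revisions, can draw on it.
    with STATE_FILE.open("a") as f:
        f.write(json.dumps({"kind": kind, "payload": payload,
                            "handled_by": handled_by, "result": result}) + "\n")
    return result
```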

In this configuration, you can operate at a capacity that would previously have required 5-15 employees. This isn’t speculation. It’s happening.

What’s not yet possible: The fully autonomous language organism company. Here’s why:

Reliability at scale. A company needs 99.9%+ reliability on routine tasks. Current agents fail in ways that compound. One hallucinated email, one botched order, one misunderstood customer request—these cascade. At scale, even 95% reliability produces thousands of failures; the back-of-the-envelope calculation after this list shows how quickly the math turns against you.

State management. A real company maintains complex, interdependent state: customer relationships, inventory, accounts receivable, commitments made. File-based state is primitive. You need something more like a database plus memory system that the language organism can reliably query and update.

Real-world interface brittleness. A company sends emails, processes payments, ships products, signs contracts. Each integration is a failure point. The language organism can specify what should happen. Actually making it happen reliably in the physical world is where things break.

Trust and accountability. Customers need to trust someone. Suppliers need to know who’s liable. Regulators need someone to hold accountable. There’s a reason companies have humans signing contracts.

True adaptation. Companies face genuinely novel situations—competitors make unexpected moves, suppliers fail, markets shift. Current language organisms can follow procedures. They struggle when the procedures don’t cover the situation.
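To make the reliability point concrete, here is the back-of-the-envelope calculation referenced above. The numbers (a 20-step workflow, 100,000 tasks a month) are assumptions chosen only to show how per-step reliability compounds:

```python
# Back-of-the-envelope: per-step reliability compounds across a workflow.
per_step = 0.95            # 95% reliability on each routine step
steps = 20                 # a modest end-to-end workflow
tasks_per_month = 100_000

workflow_success = per_step ** steps                     # roughly 0.36
failed_workflows = tasks_per_month * (1 - workflow_success)

print(f"Chance a {steps}-step workflow finishes cleanly: {workflow_success:.0%}")
print(f"Failed workflows per {tasks_per_month:,} tasks: {failed_workflows:,.0f}")
print(f"Same workflow at 99.9% per step: {0.999 ** steps:.0%}")  # roughly 98%
```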

Where This Leaves Us

The Strandbeest walks when the wind blows. The structure is there. It needs an environment to animate it.

Language organisms are the same. The structures—procedures, roles, decision rules, norms—can exist in natural language. But they need interpretation to become action. For millennia, the only interpreters were humans. Now there’s an alternative.

This doesn’t mean autonomous AI companies are imminent. The gaps I’ve outlined are real. But it does mean something important for anyone thinking about AI strategy: the building blocks have changed.

Natural language structures that once required human interpretation can now be interpreted by machines. Not perfectly. Not autonomously. But reliably enough to extend what one person—or a small team—can accomplish.

That’s where the interesting work begins.

Strandbeest: https://www.strandbeest.com/

ChatDev: https://chatdev.ai/