Thinking as a General Purpose Technology
Steam power took 80 years from James Watt’s improvements to deliver meaningful productivity gains. Electricity required 40 years from Edison’s Pearl Street Station to transform American manufacturing. The computer needed decades before Robert Solow’s famous quip about seeing computers everywhere except in productivity statistics was finally proven wrong.
Now consider thinking itself. Thinking not as a human activity, but as a technology. If Large Language Models qualify as General Purpose Technologies because they manipulate language and reasoning, then perhaps we should stand back and state the obvious: Thinking, in all its messy glory, has always been the ultimate General Purpose Technology.
This isn’t just wordplay. Understanding thinking as a General Purpose Technology reframes how we approach both human cognition and artificial intelligence, revealing patterns that stretch from Aristotle’s categories to OpenAI’s latest models.
The Three Criteria
Economists Timothy Bresnahan and Manuel Trajtenberg defined General Purpose Technologies by three essential characteristics: pervasiveness across sectors, continuous improvement over time, and the capacity to spawn complementary innovations. Let’s examine thinking through this lens.
Pervasiveness comes first. Thinking infiltrates every human endeavor. From the physicist wrestling with quantum mechanics to the carpenter judging wood grain, from the programmer debugging code to the nurse assessing patient symptoms, thinking provides the substrate for all knowledge work. The OpenAI and University of Pennsylvania paper “GPTs are GPTs” found that 80% of the U.S. workforce could have at least 10% of their work tasks affected by LLMs. But that’s because these jobs already require thinking. The LLMs are simply offering to augment or automate portions of that cognitive work.
Continuous improvement manifests in both individual development and collective advancement. A novice programmer thinks differently than a senior architect, not just about different things, but with fundamentally different cognitive patterns. Expertise involves developing “pattern libraries”. These are vast repositories of recognized situations that enable intuitive decision-making. Meanwhile, humanity collectively improves its thinking through education systems, methodologies, and now, perhaps, through AI assistance.
Innovation spawning might be thinking’s greatest strength. Every tool, every methodology, every framework we’ve developed to enhance productivity ultimately serves to augment human thinking. Double-entry bookkeeping didn’t just track money; it created a new way to think about business. The scientific method didn’t just test hypotheses; it structured how we think about causation and evidence.
The Productivity J-Curve of Thought
Erik Brynjolfsson’s Productivity J-Curve explains why General Purpose Technologies initially depress measured productivity before delivering gains. Organizations must invest in complementary assets, such as new processes, structures, and training, which are expensed rather than capitalized, so measured productivity appears to decline.
Apply this to thinking, and interesting patterns emerge. When we learn new cognitive frameworks—whether mastering statistical thinking, adopting systems thinking, or learning to code—we initially become less productive. The cognitive load of consciously applying new mental models slows us down. Only after these patterns become internalized, shifting from Kahneman’s System 2 deliberate processing to System 1’s automatic patterns, do we see productivity gains.
This mirrors what Anthropic discovered in their mechanistic interpretability research. When Claude processes multi-step reasoning problems, researchers can trace intermediate conceptual activations—the model thinking “Dallas is in Texas” before connecting to “the capital of Texas is Austin.” The initial computational overhead of building these conceptual chains resembles the J-curve’s initial dip.
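To make the shape of that dip concrete, here is a toy numerical sketch (my illustration, not Brynjolfsson’s model): measured productivity is output per unit of effort, a share of early effort goes into internalizing the new framework and produces no measured output, and the gains only arrive once the new patterns become automatic.

```python
# Toy model of a productivity J-curve during adoption of a new cognitive framework.
# Illustrative assumptions only: a fixed learning period, a share of effort diverted
# to internalizing the framework (expensed, not capitalized), and gains that arrive
# gradually once the new patterns become automatic.

LEARNING_PERIODS = 5     # periods spent consciously applying the framework (System 2)
INVESTMENT_SHARE = 0.4   # share of effort diverted to learning during those periods
PEAK_MULTIPLIER = 1.8    # eventual output per unit effort once internalized (System 1)
BASELINE = 1.0           # output per unit effort under the old way of working


def measured_productivity(period: int) -> float:
    """Output per unit of effort in a given period."""
    if period < LEARNING_PERIODS:
        # The diverted effort produces no measured output, so productivity
        # dips below the old baseline: the bottom of the J.
        return BASELINE * (1 - INVESTMENT_SHARE)
    # After internalization, the complementary investment pays off gradually.
    ramp = min(1.0, (period - LEARNING_PERIODS + 1) / 3)
    return BASELINE * (1 + (PEAK_MULTIPLIER - 1) * ramp)


if __name__ == "__main__":
    for t in range(12):
        p = measured_productivity(t)
        print(f"period {t:2d}  productivity {p:.2f}  " + "#" * int(p * 10))
```

Run it and the dip-then-climb pattern appears immediately: a few periods below the old baseline, followed by a rise well above it once the investment stops being pure overhead.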
The Architecture of Augmented Thinking
Daniel Kahneman’s dual-process theory, with System 1’s fast, intuitive processing and System 2’s slow, deliberate reasoning, provides a framework for understanding both human and artificial thinking. But the resemblance between human cognitive architecture and LLM processing goes deeper than simple analogy.
LLMs develop what researchers call “forward planning” capabilities. When writing poetry, models think ahead to potential rhyming words and construct lines to reach those destinations. They build internal representations that capture causal structures, develop cross-linguistic conceptual frameworks that transcend specific languages, and engage in compositional reasoning rather than simple pattern matching.
Yet the Apple researchers’ finding that LLM performance can drop 65% when irrelevant details are added to problems reveals a crucial limitation. While humans can maintain robust world models that survive perturbation, LLMs construct what Harvard and MIT researchers call “patchworks of conflicting best guesses”.
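This fragility is easy to probe in principle. Below is a minimal sketch of that kind of perturbation test, not the researchers’ actual benchmark: take a word problem the model answers correctly, append a detail that mentions a number but changes nothing, and check whether the answer survives. The `ask_model` function is a stand-in for whichever model API you happen to use.

```python
# Sketch of a perturbation test for reasoning robustness (illustrative, not the
# published benchmark): compare a model's answer on a clean word problem with its
# answer when a numerically tempting but causally irrelevant detail is appended.

from typing import Callable


def robustness_check(ask_model: Callable[[str], str]) -> None:
    base_problem = (
        "A workshop produces 44 chairs on Monday and 58 chairs on Tuesday. "
        "How many chairs were produced in total?"
    )
    # The appended clause mentions a number but changes nothing about the answer.
    irrelevant_detail = " Five of Tuesday's chairs were painted a slightly darker shade."

    clean = ask_model(base_problem)
    perturbed = ask_model(base_problem + irrelevant_detail)

    print("clean:    ", clean)
    print("perturbed:", perturbed)
    if clean.strip() != perturbed.strip():
        print("Answer changed: the irrelevant detail was treated as if it mattered.")


# Usage: pass any function that sends a prompt to your model of choice, e.g.
# robustness_check(lambda prompt: my_llm_client.complete(prompt))
```

A robust world model shrugs off the extra clause; a patchwork of best guesses may subtract the five darker chairs.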
Tacit Knowledge: The Missing Gradient
Harry Collins’ study of the Q factor of sapphire for LIGO mirrors provides a perfect metaphor for what’s missing in artificial thinking. Russian researchers achieved sapphire Q factors of 4×10^8 that Western labs couldn’t replicate for two decades, despite published procedures and even exchanging the same sapphire crystals.
The difference lay in tacit knowledge—the unwritten, embodied understanding developed through practice. Subtle factors like the “feel” for sufficient cleanliness, intuitive adjustments during setup, and judgments about noise sources made orders of magnitude difference in results. Knowledge transfer required weeks of hands-on collaboration, not just reading papers.
This tacit dimension explains both the power and limitations of current AI. LLMs excel at manipulating explicit knowledge—the codified, documented, formally structured information that populates the internet. But they struggle with the experiential, intuitive, context-dependent knowledge that comes from embodied interaction with the world.
Michael Polanyi observed that “we know more than we can tell.” LLMs, in contrast, can only “know” what has been told, and, for that matter, told in text. They face what some philosophers call the “symbol grounding problem”: they manipulate linguistic tokens whose connections to physical reality remain indirect at best.
Categories All the Way Down
Different thinkers have carved up thinking in different ways, each revealing something essential:
Aristotle distinguished between three thinking activities and five types of knowledge. Here is a scorecard of sorts, based on our previous post on Knowledge Work, showing where LLMs currently sit. (Arbitrary scale based on my earlier suggestions, from zero robot heads to three).
| Category | Name | Description | Can LLM Do? |
|---|---|---|---|
| Thinking Activities | Theoria | Contemplative/theoretical thinking | |
| Thinking Activities | Praxis | Practical reasoning about action | 🤖 🤖 |
| Thinking Activities | Poiesis | Productive thinking involved in making | 🤖 🤖 |
| Types of Knowledge | Episteme | Scientific knowledge | 🤖 🤖 🤖 |
| Types of Knowledge | Techne | Craft knowledge/skill | 🤖 🤖 🤖 |
| Types of Knowledge | Phronesis | Practical wisdom | 🤖 |
| Types of Knowledge | Nous | Intuitive insight | |
| Types of Knowledge | Sophia | Theoretical wisdom | |
Steven Pinker’s computational approach treats thought as information processing, with modules for language, spatial reasoning, and social cognition. This maps elegantly onto neural network architectures, where different layers and attention heads specialize in different types of processing. But the integration, how these modules combine to create unified experience and judgment, differs fundamentally between brains and transformers. And as the Aristotle-inspired analysis above suggests, there are going to be components of the technology of thought that cannot be integrated easily.
Daniel Dennett’s functionalist framework suggests that what matters isn’t the substrate but the patterns of information processing. On this view, LLMs might genuinely think if they implement the right functional relationships. His “intentional stance”, treating systems as if they have beliefs and desires when this helps predict behavior, offers a pragmatic approach to AI consciousness questions.
Hannah Arendt’s distinction between thinking (contemplation and search for meaning), willing (responsible autonomy), and judging (determining particular cases) illuminates what’s at stake. LLMs can simulate contemplation, generating thoughtful responses about meaning. But willing and judging—requiring genuine agency and contextual wisdom—remain distinctly human.
The Organizational Challenge
Paul David’s research on factory electrification offers the crucial lesson: productivity gains came not from simply replacing steam engines with electric motors, but from completely redesigning factories around electricity’s possibilities. Single-floor layouts with individual motors at each workstation, which would have been unthinkable with steam power, unlocked electricity’s potential.
The same pattern appears with thinking as technology. Simply automating existing cognitive tasks with AI, such as having ChatGPT write emails we would have written or summarize documents we would have read, captures minimal value. The real gains come from restructuring knowledge work around augmented thinking’s possibilities.
What might this restructuring look like? The conversation about how to organize knowledge work around AI, and how to connect the components of thought that are still human with those that can be machine-led, is an active one (see for example ‘Will AI Kill the Firm?’). Resolving it means recognizing the components of ‘thinking technology’, understanding how they fit together, and then arranging them in ways that are economically meaningful.
Thinking About Thinking’s Future
The “GPTs are GPTs” paper’s finding that higher-wage, higher-education jobs face more LLM exposure inverts historical automation patterns. Previous waves primarily displaced routine physical and clerical work. This wave targets the thinking itself—the core activity that defined knowledge work.
But thinking has always been a technology of augmentation rather than pure replacement. Writing didn’t replace memory; it augmented it. Calculators didn’t replace mathematical thinking; they redirected it toward higher-level problems. Each tool that automated one aspect of thinking freed cognitive resources for other dimensions.
The question isn’t whether AI will replace thinking but how thinking itself will evolve. Just as the calculator eliminated the need for mental arithmetic but created demand for statistical thinking, LLMs might eliminate certain cognitive tasks while creating demand for new forms of thought we haven’t yet imagined.
The High-Entropy Frontier
I’ve been exploring what I call “high-entropy” technical markets—domains characterized by uncertainty, rapid change, and confusion. These spaces resist automation because they require continuous sensemaking, not just pattern matching.
This is where human thinking maintains its edge: navigating uncertainty, making meaning from chaos, exercising judgment in novel situations. These capabilities emerge from our embodied experience, our emotional engagement, our social embedding, our tacit knowledge. These dimensions are currently foreign to LLMs.
The Bottom Line
Recognizing thinking as a General Purpose Technology isn’t just an intellectual exercise—it’s a strategic framework for navigating the AI transformation. Like previous GPTs, thinking exhibits the productivity J-curve, requires extensive co-invention, and transforms economies through complementary innovations.
The current moment—with LLMs achieving remarkable capabilities while revealing fundamental limitations—represents an inflection point. We’re somewhere in the middle of thinking’s J-curve, having invested heavily in cognitive infrastructure but not yet realizing full returns.
https://www.nber.org/papers/w4148
https://arxiv.org/abs/2303.10130
https://mcginniscommawill.com/posts/2025-11-03-recognition-primed-decisions/
https://www.nber.org/system/files/working_papers/w25148/w25148.pdf
https://www.anthropic.com/research/tracing-thoughts-language-model
https://machinelearning.apple.com/research/illusion-of-thinking
https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/
https://www.jstor.org/stable/285818
https://www.jstor.org/stable/1050490
https://arxiv.org/html/cs/9906002
https://peter-matthews.com/blog/work_and_ai/
https://plato.stanford.edu/entries/computational-mind/
https://plato.stanford.edu/entries/intentionality/
https://plato.stanford.edu/entries/arendt/#LifeMindMoraSign
https://www.bbc.com/news/business-40673694
https://us.sagepub.com/sites/default/files/upm-binaries/42924_1.pdf