Where is our AI measurement plane?

Often when we start talking about something complex, it helps to first understand where we are standing relative to that topic. A useful concept we can borrow from engineering is that of a measurement plane: a specifically defined physical or imaginary surface that marks where a measurement is to be taken.

[Photo: Tape]

In electronic testing, for example, the measurement plane could be the input or output connectors of a device, or a specific pin on a chip. In RF system testing the same concept is applied, especially when we are talking about signal parameters that vary with location. It is related to the concept of a Reference Plane, which lets us make measurements at points that would otherwise be buried inside a piece of physical hardware. The point is that to make measurements repeatable, we have to be very clear about where the measurement is being taken.

What I want to do in this post, and in some more to follow, is start to build an understanding of Artificial Intelligence as it applies to the work done in the office. So we are going to start by defining a suitable measurement plane and then build from there.

The ‘Average’ Knowledge Worker, a snapshot

They are 23 to 54 years old, degree educated, and as likely to be working at a small to medium business as at a large enterprise. About half use generative AI in their jobs, with the share of workers using generative AI increasing as age decreases. Most who do use such tools understand generative AI at a practical level, especially when using it for routine, text-based processes. Examples of these processes include drafting emails and messages, summarizing documents, automated meeting note taking and content generation.

Setting our measurement plane there, let’s start to sketch out the knowledge that sits at this location, and then we can look at things that lie slightly beyond it. As we discussed in a previous post, knowledge and use of generative AI will vary depending on the tasks individuals perform, so setting the plane here may seem arbitrary, but it will serve as a foundation.

View From This Measurement Plane

What Generative AI is

Generative AI refers to systems that can create new content — text, images, code, audio, even structured data — based on patterns learned from vast datasets. Unlike traditional software, which follows explicit rules, generative models learn statistical relationships and use them to produce outputs that feel original, even though they’re derived from prior examples. A model is the ‘brain’ doing the work, and there are different interfaces available to use different models.

How it is made

At the core are large machine learning models, often neural networks with billions of parameters. These models are trained on massive datasets — books, articles, code repositories, images — using high-performance computing infrastructure. The training process adjusts the model’s parameters so it can predict the next word, pixel, or token in a sequence. Over time, this predictive ability becomes generative capability.
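To make that mechanic concrete, here is a deliberately tiny sketch in Python. It is nothing like how real models are built (they use neural networks with billions of parameters rather than a simple count table), but it shows the core idea the paragraph describes: learn which word tends to follow which, then generate by repeatedly predicting the next one.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the massive datasets real models train on.
corpus = (
    "the report summarizes the meeting and the report lists action items "
    "the meeting covers the report and the meeting ends"
).split()

# "Training": count which word tends to follow which word (a bigram table).
# Real training adjusts billions of parameters; here we just tally counts.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # no learned continuation for this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Scaled up by many orders of magnitude, with a neural network in place of the count table and tokens in place of whole words, that predict-then-append loop is essentially what is happening when a chatbot streams out an answer.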

Recently, companies and journalists have started using the term ‘AI Factory’ to refer to the purpose-built, large-scale data centers where models are trained.

Where it comes from

Most enterprise-ready generative AI originates from a handful of research labs and technology companies with the resources to train large-scale models. These “foundation models” are then adapted — fine-tuned or prompt-engineered — for specific industries, domains, or use cases. Increasingly, enterprises are building on top of these models rather than training their own from scratch.
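As a sketch of what that adaptation can look like at the lightest-weight end, prompting rather than fine-tuning, here is a hedged example. The call_foundation_model helper and the claims-triage scenario are invented for illustration; in practice the helper would wrap whichever provider’s SDK an organization actually uses.

```python
# Hypothetical helper standing in for any hosted foundation model API.
# The name and signature are placeholders, not a real library call.
def call_foundation_model(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("Swap in your provider's SDK call here.")

# "Adapting" the general-purpose model to a domain by packing instructions,
# tone, and output format into the prompt instead of training a new model.
CLAIMS_TRIAGE_PROMPT = (
    "You are an assistant for an insurance claims team. "
    "Summarize the claim in three bullet points, flag any missing documents, "
    "and suggest a next step. Use plain, non-technical language."
)

def triage_claim(claim_text: str) -> str:
    """Domain-specific behavior built entirely on top of a foundation model."""
    return call_foundation_model(CLAIMS_TRIAGE_PROMPT, claim_text)
```

Fine-tuning goes a step further and adjusts the model’s weights on domain examples, but the pattern is the same: start from a foundation model and adapt it, rather than train from scratch.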

Typical usage

A knowledge worker uses one of the mainstream generative AI solutions as packaged up by OpenAI (ChatGPT), Microsoft (Copilot), Anthropic (Claude) or Perplexity (Perplexity and Comet) as a sort of filling between the digital tools they currently use (Word, Excel, PowerPoint, Outlook and the Internet).

‘Work’ is still delivered through the digital tools that were used back in 2012, when the knowledge worker might have heard an NPR story during their commute about Google building a neural network that could identify cats.

This is not a criticism by any means. I was sitting in traffic when I heard that story, and it has only been in the last couple of years that the impact has caught up with me. I am still trying to figure out where generative AI goes alongside the digital tools I used in 2012 and still use now.

I think it has to do with the fact that, no matter the make of the car, the steering wheel is usually in the same place. So are the pedals. So when we sit down to ‘drive’ in our jobs, we are (or at least I am) still trying to figure out where generative AI ‘goes’ amongst the tools we reach for day in and day out.

When controls are poorly matched with the task, usability suffers. There’s a reason the steering wheel is the preferred control for turning a car to the left or right. So far, no automobile designer has been so ignorant as to use a joystick for steering the car.

Jakob Nielsen on UX

Components adjacent to the current measurement plane

If we Google, or ask AI, or read (I highly recommend Co-Intelligence by Ethan Mollick and How AI Works by Ronald Kneusel), we find the following sorts of suggestions about the problems generative AI can solve. Generative AI is not a magic wand, but it can:

  • Scale expertise — make specialized knowledge accessible across the organization.
  • Automate repetitive creation — free human capacity for higher-value work.
  • Bridge language and format gaps — translate, reframe, and repackage information.
  • Surface patterns in complexity — reveal insights hidden in unstructured data.

There is also an increasing need to be able to distinguish between the different forms ‘Generative AI Solutions’ can take:

  • Generative AI Tools

    Standalone applications that let you interact with a model directly — e.g., a text-to-image generator, a code assistant, or a summarization app. Useful for discrete tasks, but isolated from broader processes.

  • Generative AI Workflows

    Sequences of tasks where generative AI is embedded at specific points to accelerate or enhance outcomes. For example, a market research workflow where AI drafts summaries, generates competitor profiles, and formats findings into a dashboard — all within a repeatable process.

  • Generative AI Agents

    Autonomous or semi-autonomous systems that can plan, decide, and act across multiple steps toward a goal. Agents can chain together tools and workflows, monitor progress, and adapt based on feedback. They move from assistive to proactive. (A minimal sketch contrasting these three forms follows this list.)
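To make the distinction between the three forms a little more tangible, here is a rough Python sketch. Everything in it is hypothetical: the generate helper stands in for whatever model call a given product exposes, and the market research scenario simply mirrors the workflow example above.

```python
# Hypothetical stand-in for a real model call (any provider's SDK would do).
def generate(prompt: str) -> str:
    raise NotImplementedError("Replace with a real model call.")

# Tool: one discrete task, invoked directly by a person.
def summarize(document: str) -> str:
    return generate(f"Summarize this document in five bullet points:\n{document}")

# Workflow: the model is embedded at fixed points in a repeatable process.
def market_research_workflow(raw_notes: str) -> dict:
    summary = generate(f"Summarize these research notes:\n{raw_notes}")
    profiles = generate(f"Draft competitor profiles based on:\n{summary}")
    dashboard = generate(f"Format these findings as a one-page dashboard:\n{profiles}")
    return {"summary": summary, "profiles": profiles, "dashboard": dashboard}

# Agent: the model plans its own next step and loops until it decides it is done.
def research_agent(goal: str, max_steps: int = 5) -> str:
    notes = ""
    for _ in range(max_steps):
        plan = generate(
            f"Goal: {goal}\nProgress so far: {notes}\n"
            "What should happen next? Reply DONE if the goal is met."
        )
        if "DONE" in plan:
            break
        notes += "\n" + generate(f"Carry out this step and report the result: {plan}")
    return notes
```

The workflow’s sequence is fixed in advance by whoever wrote it; the agent chooses its own sequence as it goes, which is one reason the third form is both more capable and harder to supervise.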

My LinkedIn feed is full of very interesting, very intelligently written posts about that third form (and beyond?!). I still think there is work to be done in helping people at the measurement plane we set above understand where in the toolbox the first form goes. In future posts I plan to look a bit more closely at that.

https://www.ni.com/docs/en-US/bundle/ni-vna/page/vnahelp/reference_plane.html

https://cds.cern.ch/record/1415639/files/p67.pdf

https://www.forbes.com/sites/janakirammsv/2025/03/23/what-is-ai-factory-and-why-is-nvidia-betting-on-it/

https://jakobnielsenphd.substack.com/p/heuristics-cartoons

https://www.penguinrandomhouse.com/books/741805/co-intelligence-by-ethan-mollick/

https://nostarch.com/how-ai-works

Photo by William Warby on Unsplash