TL;DR
- Digital media measurement has gone through three eras of accountability: viewability (being seen), suitability (being seen in the right context), citability (being represented accurately in AI answers).
- With AI, for the first time in the history of communications, reading the response is practically guaranteed. The challenge shifts to representation.
- Citability is measured through three metrics: PRESENCE (is the brand cited?), GRAVITY (is the citation accurate and are sources credible?), LIFE (how is the brand perceived and positioned?).
- Measuring citability without a rigorous method for building the questions is like measuring reach without defining the audience.
What happens when AI gets your brand wrong?
Every day, millions of people ask AI systems about products, services and brands. The answers they receive shape trust, consideration and purchase decisions, often without the user ever visiting the brand’s website. When those answers contain errors, omissions or outdated information, the impact is direct: lost demand, reputational damage, missed revenue.
A brand’s ability to be present, accurate and positive inside AI answers is what we call citability. It’s not an abstract concept: it’s the metric that determines whether AI is working for you or against you.
Citability didn’t emerge from nowhere. It’s the culmination of three eras of measurement in digital media, each with its own paradigm, limits and lessons. Here’s the journey.
First wave — Viewability: being seen
Viewability was the first major revolution in digital media measurement: objective proof that an ad was genuinely visible on screen. I’ve had the privilege of living through this transformation first-hand, and the lesson was clear: progress doesn’t come from inventing a new metric, but from building a method to interpret it.
Viewability exploded in the mid-2010s. When I was at smartclip, the battle to prove that an ad was genuinely visible hit the market like a tsunami, shaking the trust between advertisers and publishers.
A new metric was born, purely technical and objective (MRC standards, viewable pixels, seconds in view) measuring something that had previously been taken for granted. And that, as it turned out, wasn’t guaranteed at all.
The problem? Viewability measured visibility, not attention or impact. And crucially, it offered no solution: the measurers didn’t have delivery data to help optimise. It was, and still is, entirely postbid. Publishers were left to redesign layouts and formats to comply with standards set by others. Value eroded, CPMs fell, trust suffered.
Result: visibility likely; attention uncertain.
Second wave — Suitability: being seen in the right context
After the 2017 YouTube crisis, the industry realised that being seen isn’t enough: you must be seen in the right, safe and relevant context. Thus began the brand suitability era, an evolution of brand safety, with inclusion lists, taxonomies, GARM and contextual algorithms.
When I joined Channel Factory, this was exactly the challenge: no longer where an ad appears, but next to what. Suitability is a subjective problem: what’s suitable for one brand may be unacceptable for another. But for the first time the entities measuring also had the data to help fix delivery, moving from postbid to prebid. Control returned to brands.
The structural limit remained, however: even in the safest, most coherent context, no one could guarantee the message was actually received.
Result: context (subjectively) guaranteed; reception uncertain.
Third wave — Citability: being represented
Citability is a brand’s ability to be present, accurate and positive inside AI answers. It marks the third wave of digital media measurement, and for the first time the paradigm changes radically: AIs don’t promise impressions, they promise answers. And every answer that’s read becomes a fragment of reputation.
People no longer search; they ask. And when they ask, they read the answer. It’s the first medium in the history of communications where reception is essentially guaranteed: there’s no competition for attention, no dispersion. Which is precisely why responsibility shifts: it’s no longer about creating impact, but about earning it through correct representation.
Citability is measured through three metrics:
- PRESENCE: is the brand cited?
- GRAVITY: is the citation accurate and are the sources credible?
- LIFE: how is the brand perceived, evaluated, positioned?
Citability no longer measures visibility; it measures trust. It’s the composite metric that tells us how a brand exists inside answers, how it is described, associated, interpreted.
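To make the composite nature of the metric concrete, the three dimensions could be combined into a single score. The sketch below is purely illustrative: the field names, scales and weights are my assumptions for this example, not the actual scoring model behind any product mentioned in this article.

```python
from dataclasses import dataclass

@dataclass
class CitabilityObservation:
    """One AI answer, scored on the three citability dimensions.

    All field names and scales here are illustrative assumptions.
    """
    presence: bool             # PRESENCE: was the brand cited at all?
    accuracy: float            # GRAVITY: factual accuracy of the citation, 0..1
    source_credibility: float  # GRAVITY: credibility of the cited sources, 0..1
    sentiment: float           # LIFE: perceived positioning, -1 (negative) .. +1 (positive)

def citability_score(obs: CitabilityObservation,
                     w_gravity: float = 0.5,
                     w_life: float = 0.5) -> float:
    """Hypothetical composite score in [0, 1].

    An absent brand scores 0 regardless of the other dimensions:
    you cannot be accurate or well-positioned in an answer
    that never mentions you.
    """
    if not obs.presence:
        return 0.0
    gravity = (obs.accuracy + obs.source_credibility) / 2
    life = (obs.sentiment + 1) / 2  # rescale -1..+1 to 0..1
    return w_gravity * gravity + w_life * life

# Example: cited, largely accurate, credible sources, mildly positive tone.
obs = CitabilityObservation(presence=True, accuracy=0.9,
                            source_credibility=0.8, sentiment=0.4)
print(round(citability_score(obs), 3))  # → 0.775
```

The gating on `presence` reflects the hierarchy in the text: GRAVITY and LIFE only become measurable once PRESENCE is established.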
Result: reading guaranteed; representation uncertain.
| Era | Focus | Measure | Solution | Guarantee | KPI |
| --- | --- | --- | --- | --- | --- |
| Viewability | Being seen | Objective | None (publishers’ burden) | Visibility likely; attention uncertain | Technical |
| Suitability | Right context | Subjective | Inclusion lists, GARM, prebid | Context guaranteed; reception uncertain | Contextual |
| Citability | Being represented | Objective | Emerging methodologies | Reading guaranteed; representation uncertain | Reputational |
Why counting answers is not enough
Generative AI systems are non-deterministic: the same question, asked twice to the same engine, can produce different answers. The brand that appeared in the first response may vanish in the second. The tone may shift. The sources cited may change entirely.
This has a fundamental consequence for anyone trying to measure brand presence in AI: a single measurement is a snapshot that may never be replicated. Without a method that accounts for this instability, any data point is anecdote, not research.
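One way to handle this instability, sketched below as a methodological assumption rather than any vendor’s actual product, is to treat each answer as a sample: ask the same question repeatedly and report presence as a rate with a margin of error, instead of a single yes/no snapshot.

```python
import math

def presence_rate(mentions: list[bool]) -> tuple[float, float]:
    """Estimate a brand's presence rate from repeated runs of the same
    question, with a 95% normal-approximation margin of error.

    `mentions[i]` is True if run i cited the brand. Illustrative only:
    real studies would also need to control engine, model version and date.
    """
    n = len(mentions)
    p = sum(mentions) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, margin

# The same question asked 20 times: the brand appears in 13 answers.
runs = [True] * 13 + [False] * 7
p, moe = presence_rate(runs)
print(f"presence ≈ {p:.2f} ± {moe:.2f}")  # → presence ≈ 0.65 ± 0.21
```

The wide interval on only 20 runs is exactly the point: one answer is an anecdote, a sampled rate with its uncertainty is a measurement.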
In recent months, several players have emerged proposing ways to measure specific aspects of citability: AI visibility, Generative Mentions, Brand Recall. The fact that so many are appearing is a positive signal, but most of these approaches share a structural weakness: they count answers without governing the questions.
Why questions matter more than answers
If answers are inherently unstable, the only fixed point in the equation is the question. The quality of any citability measurement depends entirely on the quality of the questions it’s built on.
Without a rigorous method for building questions that are meaningful, representative and replicable, no measure of citability can be considered reliable. It’s like trying to measure reach without defining the audience: it doesn’t work.
Measuring AI is not about extracting data from answers. It’s about designing the right questions. That’s where real accountability begins, because counting answers is data extraction; measuring them is research.
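For illustration, a governed question set could be generated from a fixed, versioned grid of intents and categories, so that every measurement run asks exactly the same panel in the same order. Everything below (the intent names, the templates) is a hypothetical assumption, not an established GEO standard.

```python
from itertools import product

# A fixed, versioned grid: the same questions on every run.
INTENTS = {
    "discovery":  "What are the best {category} brands?",
    "comparison": "How does {brand} compare to other {category} providers?",
    "trust":      "Is {brand} a reliable choice for {category}?",
}

def build_question_panel(brand: str, categories: list[str]) -> list[dict]:
    """Expand the intent grid into a deterministic, replicable panel.

    Governing the questions (fixed templates, fixed order) is what makes
    repeated measurements comparable over time.
    """
    panel = []
    for (intent, template), category in product(INTENTS.items(), sorted(categories)):
        panel.append({
            "intent": intent,
            "category": category,
            "question": template.format(brand=brand, category=category),
        })
    return panel

panel = build_question_panel("ExampleBrand", ["running shoes"])
for q in panel:
    print(q["intent"], "->", q["question"])
```

Because the panel is deterministic, any change in the answers between two runs can be attributed to the engine, not to drift in how the questions were asked.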
GEO: from findability to citability
This evolution marks the birth of a new discipline: Generative Engine Optimization (GEO). It’s the shift from the findability of SEO to the readability, interpretability and citability of content inside generative answers. The shift from search to conversation that made this inevitable is explored in Be cited, not just found.
Search engines rank links. Generative engines craft answers. In this context, citability is the first tangible KPI of GEO: it measures how a brand exists inside AI answers, not just whether it appears, but how it’s represented. A telling figure: according to Surfer, nearly 68% of sources cited by AI systems don’t appear in Google’s top 10 results. Ranking and citability are two different games.
Major media holding companies have begun exploring this space: WPP talks about “SEO is dead. Long live GEO?”, OMG calls it a new paradigm, Dentsu has launched a GEO optimisation service, Havas has presented an AI brand tracking tool. The debate is open, but a shared language and a truly scientific metric are still missing.
The citability metrics, PRESENCE, GRAVITY and LIFE, are the foundation on which I built Symios: an operating system that turns measurement into governance of representation.
This is the beginning of a new accountability: less about visibility, more about algorithmic credibility.