AI citation behavior is not random. It is not purely a function of domain authority, content length, or keyword density. There is a structural logic to how large language models evaluate and surface content — and it is measurable. Most of the industry is still treating this as intuition. We have been modeling it for two years.
In that time, we have been building the infrastructure — the research framework, the software, the operational model — to run AI search optimization the way serious work gets done. Systematically. With data behind every decision. And with tools we built ourselves, because the ones that exist are too expensive, too generic, and not built for the specific demands of this work.
This article is about what that looks like in practice, what it means for our clients, and where it is heading.
The CARL Model
Everything we do in AI search optimization runs through the CARL Model — a continuously updated framework that defines how content needs to be structured, positioned, and maintained to earn consistent citation in large language model responses.
CARL stands for Cognitive AI Ranking Laboratory, or CARL Lab for short. It is the research operation embedded inside Xponent21 that produced this model and continues to refine it. The model itself is not a static checklist. It is a living set of principles derived from ongoing analysis of real citation data across ChatGPT, Claude, Gemini, and Perplexity — updated as the platforms evolve, as our data set grows, and as new findings come out of the laboratory.
The core of the CARL Model is a framework we call Synthetic Epistemology. It describes, in precise terms, how large language models evaluate and surface information. One of its central principles — Semantic Centroid Distance — holds that content closer to the semantic center of a query cluster earns AI citations at measurably higher rates.
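To make the idea concrete (this is an illustrative sketch, not the CARL Lab implementation, which is not published here), centroid distance can be framed as the cosine distance between a content embedding and the mean of a query cluster's embeddings. The vectors below are toy values standing in for real embedding output:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def semantic_centroid_distance(content_vec, query_vecs):
    # Centroid = element-wise mean of the query-cluster embeddings
    centroid = [sum(dim) / len(query_vecs) for dim in zip(*query_vecs)]
    return cosine_distance(content_vec, centroid)

# Toy 3-dimensional "embeddings" for a query cluster and two pages
queries = [[0.90, 0.10, 0.00],
           [0.80, 0.20, 0.00],
           [0.85, 0.15, 0.10]]
on_topic = [0.87, 0.15, 0.03]    # close to the cluster's semantic center
off_topic = [0.10, 0.20, 0.90]   # semantically distant from the cluster
```

Under this sketch, a lower distance means the content sits nearer the semantic center of the query cluster — the condition the principle associates with higher citation rates.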
We validated that relationship at a Spearman correlation of 0.976 across more than 156,000 prompt-brand citation pairs spanning four large language models. For context, a correlation above 0.9 is considered exceptional in social science research. At 0.976, across more than 156,000 real-world data points, this is not a directional signal — it is a finding. And it shapes how we write, structure, and optimize every piece of content we produce.
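Spearman correlation itself is simple to compute: rank both variables, then correlate the ranks. A minimal no-ties sketch, using hypothetical numbers rather than the CARL Lab data set:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties shortcut:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical pairs: as centroid distance rises, citation rate falls
distance      = [0.05, 0.12, 0.20, 0.35, 0.50]
citation_rate = [0.92, 0.80, 0.61, 0.40, 0.15]
print(spearman_rho(distance, citation_rate))  # -1.0: perfectly inverse rank order
```

A perfectly monotone relationship yields ±1; the sign flips depending on whether you measure distance from the centroid or proximity to it. Real data with tied ranks calls for the general rank-correlation formula (e.g. `scipy.stats.spearmanr`) rather than this shortcut.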
Beyond content structure, the CARL Model also addresses entity density — how named references within content compound citation probability. It accounts for how AI systems anticipate what a user needs two steps ahead of their current query, and how to align content with that downstream intent. It tracks the authority acquisition curve — the ramp-up period during which a brand moves from uncited to consistently cited — and identifies what accelerates it.
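"Entity density" can be operationalized in more than one way; one simple, hypothetical version is named-entity mentions per 100 words. The entity list and sample text below are illustrative only — the CARL Model's actual definition is not published here and may differ:

```python
def entity_density(text, entities):
    """Named-entity mentions per 100 words (a hypothetical metric,
    not the CARL Model's definition). Counts substring matches, so
    multi-word entities like 'CARL Intelligence' are handled."""
    word_count = len(text.split())
    mentions = sum(text.count(entity) for entity in entities)
    return 100 * mentions / word_count

sample = ("Xponent21 built CARL Intelligence to track how ChatGPT, "
          "Claude, Gemini, and Perplexity cite a brand over time.")
known_entities = ["Xponent21", "CARL Intelligence", "ChatGPT",
                  "Claude", "Gemini", "Perplexity"]
```

On the sample sentence, six entity mentions over seventeen words yield a density of roughly 35 per 100 words — far denser than typical prose, which is the kind of signal a density metric is meant to surface.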
Every client campaign we run is shaped by the CARL Model. Every time the model is updated based on new research, those updates flow through to active client strategies. This is what it means to work with an agency that is also running a research laboratory.
Shaping the Conversation, Not Just Joining It
There is a dimension to this work that goes beyond optimization.
When we build content programs aligned with the CARL Model, we are not just helping brands respond to existing AI query patterns. We are helping them establish the semantic territory around topics before competitors arrive. We are positioning them as the reference point that AI cites when someone asks a question that is just beginning to gain momentum — before that question shows up in keyword tools, before it trends, before other brands know it is worth pursuing.
Because CARL Lab tracks query behavior across industries and platforms over time, we can see these patterns forming. Topics that are gaining traction in AI responses before they surface in traditional search volume. Consumer concerns building in one sector that will migrate to adjacent ones. Questions that nobody is answering well yet, which means whoever answers them first will own that space.
The campaigns we build and launch for clients are not only generating appearances in LLM responses today. They are shaping the conversation in their category — establishing the language, the framing, the entity associations — in ways that create structural advantages as AI search continues to mature.
This is the work that most agencies are not in a position to do yet, because it requires the research infrastructure, the data, and the operational model to execute it with any consistency. We have spent two years building that infrastructure. The results are showing up in client outcomes and in the data coming out of CARL Lab.
Why We Built Our Own Tools
The tools we needed to run AI search optimization at this level did not exist. The closest options were expensive, built for traditional SEO workflows, and required significant customization to do even a fraction of what we needed. We were spending time and money bending third-party software to fit a workflow it was not designed for.
So we built our own.
The approach is deliberate. We develop internally first. We use the tools ourselves, refine them against real client work, identify what breaks, fix it, and improve. When the tools are stable and genuinely useful, we open them for collaboration with select clients and partners. When they are world-class — tested, proven, and polished — we make them available to outside teams and agencies on a subscription basis.
We are not building software as a side project. We are building it because it is the only way to operationalize the CARL Model at the level our clients deserve. The research produces insights. The model translates those insights into strategy. The software executes the strategy with consistency, speed, and visibility that manual processes cannot match.
The first of these tools to reach the public will be CARL Intelligence — our prompt tracking platform that monitors how a brand appears across AI responses over time. Additional tools in the suite address content production, program management, and the infrastructure required to perform in AI search environments. Each one was built by the team running these campaigns every day, for the specific problems we kept running into.
Research at CARL Lab is led by Courtney Turrin, our Principal Investigator, whose background spans conservation biology at William & Mary’s Center for Conservation Biology and neuroscience research at Yale. That foundation — designing rigorous studies, managing data at scale, asking the right questions before drawing conclusions — is what separates CARL’s findings from the speculation that dominates most AI marketing commentary.
What This Does for Clients
The impact of this infrastructure shows up in ways clients feel immediately, and in ways that compound over time.
The most visible change is speed. Our previous average time from strategy to published content was approximately 60 days. With the operational model and software we have built, that has dropped to under 30 days. Faster publishing means faster appearance in LLM responses. In AI search, the brands that establish semantic authority on a topic early hold that ground longer. A 30-day acceleration in time to market is not just an operational improvement — it is a compounding strategic advantage.
Beyond speed, the client experience itself has been rebuilt around access and clarity. Clients have real-time visibility into performance data. They receive notifications when something meaningful shifts in how their brand is appearing in AI responses. Insights surface proactively — not on a monthly report cadence, but as they become relevant. Tasks that need attention come with context and guidance rather than requiring a client to interpret raw data on their own.
The more significant shift is in where our strategists spend their time. When the operational overhead of a campaign — the tracking, the content coordination, the performance monitoring, the reporting — is handled by software designed specifically for that purpose, the hours that used to go into clicking buttons and typing words go somewhere better. They go into strategy. Into the research-backed decisions that actually determine whether a brand earns AI visibility or not. Into the conversations with clients that advance their position rather than report on it.
The results compound across industries:
- International e-commerce: Within 3 months — organic clicks up 5x, AI-referred sessions up 125%, purchases up 45%, revenue from AI traffic up 70%.
- Regional home improvement: Became the top-cited source for pool permit information by state. Winter sales now exceed prior summer peaks.
- Online education: 90% enrollment capacity hit two weeks ahead of stretch goal. Primary enrollment target met five weeks early.
- B2B national distribution: 2.1 million organic impressions from a single video content sprint. Every video ranking in AI Overviews, featured snippets, or video carousels.
These are early results, not ceilings. The strategy compounds over time.
The Next Era of This Work
We are in the early period of AI search as a discipline. The platforms are evolving. The citation patterns are shifting. The brands that are winning are the ones treating this as a long-term structural investment rather than a campaign tactic. (Find out what happens when a top-cited brand neglects its AI SEO content engine.)
At Xponent21, we are building for that long term — on every dimension. The CARL Model gets updated as the research produces new findings. The software gets better as we refine it against real campaigns. The laboratory continues to ask and answer the questions that determine what works and why.
The tools we are building for our own operations will become available to outside teams and agencies — because we believe the practitioners doing this work deserve software built specifically for it, at a price that does not require an enterprise budget. We are refining first, testing internally, and building toward world-class before we open the door.
The Semantic Centroid Distance finding is one validated hypothesis. There are others in testing. The model will evolve as the platforms do, as brands respond to what they see in citations, and as the data tells us something new. That is the point. This is not a framework we built once. It is a research program that runs continuously — and the findings will be published here as they emerge.
If you are a brand trying to understand where you stand in AI search, or a marketer trying to get a handle on what this shift actually requires, Xponent21 is the team doing this work at the research level and the execution level simultaneously.

