The Dealer FAQ Page Is Dead: What 23,745 AI Citations Reveal About Dealership GEO

A rigorous new GEO dataset of 23,745 AI citations argues most of the dealer AEO playbook is wrong. FAQ pages underperform. Numbers, definitions, and comparisons are what AI actually absorbs.

Tim Boyle · 11 min read
Notion-style illustration showing FAQ pages being replaced by a rising data chart — the dealer FAQ page is dead

Quick Summary

A new open-source GEO dataset of 23,745 AI citations finds the dealer FAQ page is the single worst-performing content structure in local-services queries. Pages with numbers, definitions, and comparisons earn 32–74% more citation depth. ChatGPT, Google AIO, and Perplexity each reward different content — one page cannot win on all three.

What You Should Know

For Dealer Principals

  • The AEO playbook your vendor is selling — build an FAQ page, add Q&A schema — centers on the one content structure the data says underperforms everything else.
  • The real investment is content that defines, compares, and cites specific numbers. That's harder to produce, which is why it's a defensible moat.
  • Different AI engines reward different content. A single one-size-fits-all page strategy leaves measurable AI citation share on the table.

For GMs

  • Local is the hardest vertical for AI to absorb. Dealers start from a worse baseline than technology, healthcare, or commerce content.
  • The dealers winning AI citations are running a different content operation than the default platform-generated model pages most stores ship with.
  • Reallocating some aggregator-listing budget toward hyperlocal earned coverage (local news, TV affiliate sites, chamber features) is a higher-leverage move than piling on more directory profiles.

For Marketing Directors

  • Audit your top 10 model pages for five structural features: a definition opener, embedded numbers, a comparison element, multiple H2 sections, and 13–24 paragraphs.
  • For Perplexity-heavy markets, build short focused pages (~600 words). For ChatGPT and Google AIO, build longer comprehensive pages. Don't assume one version wins all three.
  • Paragraph coverage is the strongest predictor. Bundle model-page content — spec, financing, trade-in, incentives, inventory — on one URL instead of unbundling it across five thin pages.

The default dealer content playbook was written for an earlier version of the AI engines. This dataset is the first real evidence that the FAQ page — the one move every platform recommends — is the worst-performing content structure in local services. That's a hard finding to ignore.

Tim Boyle

Founder & President, A3 Brands

Dealer AEO strategy in 2026 almost always starts with the same two moves: build an FAQ page, add FAQ schema. Every dealer website platform recommends it. Every SEO vendor with an AEO add-on pitches it. Most of the trade press writes about it as settled practice.

A new open-source research dataset suggests it's wrong.

The study — published by researcher Zhang Kai, with secondary analysis and open-sourcing by Yao Jingang — ran 602 prompts across ChatGPT, Google AI Overview, and Perplexity, logged 21,143 citations in the answers those engines produced, and re-scraped 18,151 of the cited pages to extract 72 structural features per page. It's the most rigorous empirical GEO study we've seen to date. Most of what passes for GEO research on LinkedIn is ten-prompt anecdote. This is something else.

We pulled the data, cut it by industry, and looked at it from one angle: what does it tell a dealer's marketing team?

One caveat first, and it matters. The dataset contains no automotive category. The industries covered are healthcare, technology, news, finance, commerce, and local services. For the automotive translation, local services is the closest analog to "dealer near me" queries, and commerce is the closest analog to "best SUV under $40k" shopping queries. Most of what follows uses those two cuts. The actual automotive citation landscape is likely harder than either proxy, because dealerships also compete with a dense stack of aggregators — Cars.com, Edmunds, KBB, CarGurus — that don't exist at the same intensity in other local verticals.

With that on the table, here's what the data shows.

Finding 1 — Local Is the Hardest Vertical for AI to Absorb

Dealers are starting from a worse baseline than most industries.

The study defines a metric called "influence score" that estimates how deeply a cited page is actually absorbed into the AI's final answer, versus just listed as a reference at the bottom. A page whose ideas, numbers, and phrasing end up shaping the response scores high. A page that's only name-checked scores near zero.

Across the six industries in the study, local-services content scores lowest.

Technology content sits at an average influence of 0.127 per citation. Healthcare, commerce, finance, and news cluster between 0.095 and 0.102. Local services sits at 0.092 — last place.

This confirms something most automotive SEOs suspect but rarely say out loud. AI answer engines aren't particularly motivated to absorb dealer-site content. They skim it, they list it as a source, and then they build their answer using higher-authority aggregators, editorial sites, and OEM resources. Dealer content gets the reference slot, not the substance slot.

The read isn't that dealer GEO is impossible. The read is that dealer content has to look and read more like the winning content in technology or healthcare verticals — defined, evidenced, and structurally clear — and less like the generic "Why Choose Us" and "Meet the Team" blocks most dealer sites run by default.

Bar chart — AI Search Citation Depth by Industry. Technology leads at 0.127; Local services last at 0.092.
AI Search Citation Depth — By Industry. Local-services content is the hardest industry to get deeply absorbed by AI search engines.

Finding 2 — Q&A-Format Pages Are the Only Content Structure That Underperforms

This is where the data contradicts current practice most directly.

The study tagged every cited page for five structural features: whether the page contains numbers or statistics, whether it contains definitions, whether it contains comparisons, whether it contains how-to or procedural content, and whether it uses a Q&A format. Then it calculated the influence lift for pages that had each feature versus those that didn't.

Here's the local-services vertical:

  • Pages with numbers or statistics: +32.4% influence
  • Pages with definitions: +41.1%
  • Pages with comparisons: +30.5%
  • Pages with how-to or step content: +8.7%
  • Pages in Q&A format: −11.4%

The Q&A format is the only structure in the dataset that correlates with less citation depth, not more.

Commerce shows a weaker version of the same pattern: Q&A gives only a +5.7% lift, compared to +73.9% for numbers and +48.5% for definitions.
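The lift metric itself is straightforward to reproduce if you have page-level data. Here's a minimal sketch of the calculation — the page records and feature names are hypothetical, but the math matches the study's described method: mean influence for pages with a feature versus pages without it, expressed as a percent change.

```python
# Sketch of the study's "influence lift" calculation.
# Page records and field names below are illustrative, not the
# actual dataset schema.
def feature_lift(pages, feature):
    """Percent change in mean influence score for pages that have
    a structural feature versus pages that don't."""
    with_f = [p["influence"] for p in pages if p[feature]]
    without_f = [p["influence"] for p in pages if not p[feature]]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(with_f) / mean(without_f) - 1) * 100

pages = [
    {"influence": 0.12, "has_numbers": True,  "qa_format": False},
    {"influence": 0.10, "has_numbers": True,  "qa_format": False},
    {"influence": 0.08, "has_numbers": False, "qa_format": True},
    {"influence": 0.09, "has_numbers": False, "qa_format": False},
]

print(round(feature_lift(pages, "has_numbers"), 1))  # positive lift
print(round(feature_lift(pages, "qa_format"), 1))    # negative lift
```

Run against the open-sourced dataset, this is the calculation behind the −11.4% Q&A figure above.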

The reason shows up in a separate tag the researchers added: what role each citation plays in the final answer. Somewhere between 46% and 60% of cited pages across industries get tagged as "factual basis," meaning the AI uses the page as raw evidence it then assembles into its own answer. Another 30–35% get tagged as "supplementary" or "paraphrase." Very few citations come from pages that look like pre-packaged answers. If a page already reads as a list of Q&A pairs, it looks less like a source the engine can draw from and more like a competing summary.

For a dealer, this means rethinking the editorial unit. A page that defines what a plug-in hybrid is and cites EPA range numbers will get absorbed. A page that compares three trims of a specific model with towing capacity, ground clearance, and MSRP differences will get absorbed. A page that walks a shopper through the trade-in appraisal steps will get absorbed. A page titled "Frequently Asked Questions About Financing" with twelve short Q&A entries will not.

Bar chart — Influence score lift by content structure. Numbers, definitions, and comparisons lift citation depth; Q&A format is the only feature with a negative lift in local services.
Which Content Structures Actually Get Absorbed By AI. Q&A format is the only structure in the dataset with a negative lift in the local-services vertical.

Finding 3 — Optimal Content Length Differs Drastically by Engine

If there's a single practical finding that should change how dealer content gets produced, it's this one.

The three major AI answer engines behave differently enough that optimizing one page for all of them leaves measurable absorption on the table.

Within local-services queries, each engine has a different sweet spot:

  • ChatGPT rewards length up to a point. Influence climbs steadily from short pages through 1,200–2,000 words, then plateaus.
  • Google AI Overview is the most length-hungry of the three. The curve keeps climbing past 2,000 words.
  • Perplexity inverts the pattern. Influence peaks at 301–700 words, then declines. Pages over 2,000 words actually underperform shorter ones.

A single 2,500-word dealership pillar page optimized for ChatGPT and Google will be materially less effective on Perplexity than a tight 600-word focused page would be. For dealers where Perplexity is a meaningful traffic source, longer isn't better — it's worse.

Line chart — Optimal content length by engine. Perplexity peaks at 301-700 words; ChatGPT and Google AIO keep climbing with length.
Optimal Content Length Differs Drastically by Engine. Perplexity peaks early (301–700 words) while ChatGPT and Google AIO keep rewarding depth.

Finding 4 — Being Cited First Is Disproportionately Valuable on Some Engines

How much does being the first source cited matter, versus being the twelfth?

  • On ChatGPT, a first-cited source has an average influence of 0.385. The twelfth-cited source: 0.166. A 2.3x falloff.
  • On Perplexity, first-cited: 0.172. Twelfth-cited: 0.032. A 5.3x falloff.
  • On Google AI Overview, first-cited: 0.066. Twelfth-cited: 0.057. Essentially flat.

What that means in practice:

On ChatGPT and especially Perplexity, the first citation does most of the work. The game is being the single most authoritative and relevant page for a given query. Everything after position three or four contributes progressively less.

On Google AI Overview, the position of the citation barely matters. Being included in the citation list at all is the win. AIO treats its ten to fifteen sources relatively democratically, which means breadth of relevant pages across many related queries matters more than winning position one on any single query.

Different engines favor different strategies. ChatGPT rewards a concentration approach: build the canonical page for a specific shopper intent and win it. Google AIO rewards a coverage approach: be present and well-structured across many related intents. Perplexity combines both, with the added constraint that shorter wins.

Line chart — Citation position decay by engine. ChatGPT and Perplexity show steep falloff from first-cited to twelfth; Google AIO stays nearly flat.
Being Cited First Is Disproportionately Valuable — On Some Engines. Google AIO stays nearly flat: being cited at all is the game. ChatGPT and Perplexity decay hard: position #1 is what you fight for.

Finding 5 — Hyperlocal News and Earned Coverage Beats Aggregator Listings

The conventional local SEO playbook for dealerships says: build out the aggregator stack.

Yelp, TripAdvisor, BBB, Cars.com, Nextdoor, Google Business Profile. More listings means more citations means more signal.

The dataset doesn't support that as the highest-leverage investment.

When we looked at the top 10% of cited pages in the local-services vertical — the ones the AI engines absorbed most heavily — the dominant pattern wasn't aggregators. It was hyperlocal news coverage and substantive local-business content.

The domain list at the top of the influence distribution includes patch.com (hyperlocal community news), journalreview.com (small-market daily papers), hobokengirl.com (hyperlocal lifestyle blogs), cbsnews.com, abc7.com, fox5ny.com (local TV affiliate digital properties), municipal .gov sites, and individual local business sites with real editorial depth.

Aggregators, across the whole local-vertical dataset, account for roughly 8% of citations. In the top-decile influence bucket, they barely appear.

For a dealership PR and content program, the implication is a reallocation, not an abandonment. Aggregator presence is table stakes. The differentiated investment is in the next layer up: earned coverage in hyperlocal outlets, chamber of commerce directories, and substantive owned content. A single feature in the local Patch or a story on the ABC affiliate's digital site does more for AI citation depth than another ten aggregator profile entries.

Finding 6 — Paragraph Coverage Is the Single Strongest Predictor

Of the 72 features the study extracted per page, the one that correlates most strongly with influence score isn't authority. It's paragraph coverage.

The researchers called it "paragraph coverage ratio" — the share of paragraphs in the AI's final answer that can be traced back to a single source page.

The correlation in local services is r = 0.76; in commerce, r = 0.72. Both are unusually strong for a single content feature.
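To make the metric concrete, here's a toy version of a coverage calculation. The study's attribution method is more sophisticated; this sketch uses a crude word-overlap heuristic, and the threshold value is our own assumption, purely for illustration.

```python
def paragraph_coverage(answer_paragraphs, source_paragraphs, threshold=0.5):
    """Illustrative paragraph-coverage ratio: the share of paragraphs
    in an AI answer traceable to one source page. Traceability here is
    a simple word-overlap heuristic, not the study's actual method."""
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa), 1)

    traced = sum(
        1 for ap in answer_paragraphs
        if any(overlap(ap, sp) >= threshold for sp in source_paragraphs)
    )
    return traced / len(answer_paragraphs)

answer = [
    "the tacoma trd pro has 326 horsepower",
    "financing rates vary by credit score",
]
source = ["tacoma trd pro delivers 326 horsepower and 33-inch tires"]
print(paragraph_coverage(answer, source))  # 0.5 — one of two paragraphs traced
```

A thin page traces to one answer paragraph at best; a dense page bundling a full shopper intent traces to several, which is exactly what the correlation rewards.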

The practical implication is simple and underappreciated in dealer content architecture. A thin page that answers one narrow slice of a question — like "Is the 2026 Tacoma TRD Pro in stock?" — contributes one paragraph to the AI's answer and gets treated as peripheral. A dense page that answers a full cluster of related sub-questions on a single URL — trim differences, financing terms, trade-in process, current incentives, inventory availability — contributes several paragraphs and gets treated as central.

For dealers running dealer-platform-generated model pages, most of those pages score badly on paragraph coverage. They cover one thing thinly. The content that wins on this metric bundles an entire shopper intent into one dense URL.

The Dealership GEO Playbook

Seven actions this data supports, ordered by how much they'd change what a typical dealer marketing operation is doing today.

1. Stop commissioning standalone FAQ pages.

FAQ schema on contextually relevant pages is still fine. The stand-alone "Top 10 Financing Questions" page with twelve Q&A pairs is the wrong unit of content. Replace it with pages that define, compare, and evidence.

2. Bundle model pages, don't unbundle them.

One 900-word page per trim covering spec, financing, trade-in, incentives, and availability outperforms five 200-word pages on the same topics. The goal isn't keyword coverage — it's paragraph coverage of a full shopper intent.

3. Put real numbers on every page.

EPA MPG, towing capacity, ground clearance, bed length, APR ranges, trade-in value thresholds, current incentive amounts. Pages with numbers get a 32% to 74% influence lift depending on the vertical. Generic marketing language has no pull.

4. Open each page with a definition.

Not "Welcome to our dealership." An actual category definition: "The 2026 [model] is a [class] built for [specific use case]." Definitional openings produced the largest single feature lift in the dataset — +41% to +70% depending on the engine.

5. Build platform-specific versions of your top ten highest-intent pages.

A longer comprehensive version aimed at ChatGPT and Google AIO, and a short focused version aimed at Perplexity. This doesn't need to scale across every page. It needs to scale across the queries the dealership most wants to own.

6. Reallocate budget from aggregator listings toward hyperlocal earned coverage.

Aggregator presence is maintenance work. The differentiated signal comes from features in community news outlets, local TV affiliate digital properties, and chamber directories.

7. Audit existing model pages for structural features.

The median winning page in local services has between three and six H2 sections, thirteen to twenty-four paragraphs, embedded numbers, a comparison element, and a clear definition. Most auto-generated dealer platform pages have one of those five things. The gap is the audit target.
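An audit like this can be roughed out in a few lines. The sketch below checks a page's HTML against the structural profile above; the thresholds come from the article, but the detection heuristics (regexes for numbers, a definitional opener, comparison language) are our own simplifications, not the study's extraction pipeline.

```python
import re

def audit_page(html):
    """Rough structural audit of one page against the profile of
    top-cited local-services pages. Heuristics are illustrative only."""
    paragraphs = re.findall(r"<p[^>]*>(.*?)</p>", html, re.S | re.I)
    h2_count = len(re.findall(r"<h2[\s>]", html, re.I))
    opener = paragraphs[0] if paragraphs else ""
    return {
        "h2_sections_3_to_6": 3 <= h2_count <= 6,
        "paragraphs_13_to_24": 13 <= len(paragraphs) <= 24,
        "has_numbers": bool(re.search(r"\d", " ".join(paragraphs))),
        "definition_opener": bool(re.search(r"\bis an?\b", opener, re.I)),
        "has_comparison": bool(
            re.search(r"\b(vs\.?|versus|compared? (to|with))\b", html, re.I)
        ),
    }

# Example: a page with a definitional, number-bearing opener
# but too few sections and paragraphs.
print(audit_page("<h2>Overview</h2><p>The 2026 RAV4 is a compact "
                 "SUV with 42 mpg combined.</p>"))
```

Run it across the top ten model pages, count how many of the five checks each page passes, and the audit target from the playbook item above falls out directly.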

For how content structure connects to earning specific AI citations for your store, the dedicated post goes deeper. For the broader GEO for dealerships strategy, see the pillar guide.

What We Don't Know Yet

A few caveats that matter.

No automotive industry slice.

The findings here are extrapolated from local-services and commerce proxies. The actual automotive citation landscape is more concentrated around Cars.com, Edmunds, KBB, CarGurus, and OEM sites than either proxy. If anything, that makes the "win the first-citation spot" effect stronger for dealers on ChatGPT and Perplexity, not weaker.

The data is a static snapshot.

Google AIO and ChatGPT Search both shifted their citation behavior meaningfully between mid-2025 and early 2026. The directional findings should hold; the specific percentages will drift.

Influence score is a constructed metric.

It's the best proxy available in the study, but it measures absorption depth, not conversions.

Sample sizes are modest for specific cuts.

The top-decile local profile on ChatGPT is built from 35 pages. On Perplexity, 104. The results are directional, not definitive.

None of this is settled science. The alternative, though, is running dealer GEO on vendor pitches and anecdote — which is where most of the industry currently sits.

Most of the AI-search advice circulating in the dealer world right now was written for a 2024 version of the engines: build FAQ pages, add Q&A schema, optimize one page for everything, chase aggregator listings. The data from Zhang and Yao's study argues that each of those assumptions is either wrong or badly misallocated.

What the data rewards instead is evidence. Pages that define concepts, cite specific numbers, compare alternatives, and bundle a full shopper intent under one URL. Content structured differently for different engines. Earned coverage in hyperlocal outlets rather than incremental aggregator entries.

That's a harder content operation to run than the default dealer playbook. Our read is that the dealers who build it will have a structural advantage in AI search for the next several years, because most of their competition will still be building FAQ pages.

Key Takeaways

  • FAQ-format pages are the only content structure in the dataset that correlates with less citation depth, not more — a −11.4% penalty in local-services queries and only a +5.7% lift in commerce.
  • Pages with numbers, definitions, and comparisons earn +32% to +74% more citation depth. Dealer content needs to look more like technology or healthcare content and less like standard marketing copy.
  • ChatGPT, Google AIO, and Perplexity reward different content. Perplexity peaks at 600 words; ChatGPT and Google AIO keep climbing past 2,000. One page cannot win on all three.
  • Citation position matters dramatically on ChatGPT and Perplexity (2–5x falloff from first to twelfth) but barely matters on Google AIO (essentially flat). Different engines demand different strategies.
  • Paragraph coverage — bundling a full shopper intent under one dense URL — is the single strongest predictor of citation depth (r = 0.76 in local services).

Tim Boyle

Founder & President, A3 Brands

Tim spent a decade distributing products to 3,000+ dealerships, ran the Internet Sales department at Baker Automotive Group, and served as Acura's Field Program Manager and Digital Strategist at Shift Digital before founding A3 Brands — the only SEO agency built exclusively for car dealerships.

Frequently Asked Questions

Does this mean dealerships should remove FAQ schema entirely?
No. FAQ schema on a contextually-relevant page — e.g., a model landing page that includes 3–4 buyer questions inline with the rest of the content — is still useful. The data argues against standalone FAQ pages built as twelve Q&A pairs with nothing else. Integrate FAQ signals into pages that also define, compare, and cite numbers.
The study doesn't cover automotive. How confident can we be this translates to dealers?
Medium-high confidence on direction, lower on exact magnitudes. Local-services is the closest proxy to 'dealer near me' queries and commerce is the closest proxy to model-comparison queries. The actual automotive landscape is likely harder because dealers compete with a dense aggregator stack (Cars.com, Edmunds, KBB, CarGurus) that doesn't exist at the same intensity in other verticals. That makes the 'win first position' and 'bundle paragraph coverage' findings stronger for dealers, not weaker.
What content format should replace standalone FAQ pages?
Bundle the answers into pages organized around a full shopper intent. A trim-specific model page covering spec, financing, trade-in process, and current incentives — with a clear definition of the vehicle in the opener and embedded numbers throughout — earns more citation depth than a separate FAQ page with the same information sliced into Q&A format.
How do we optimize one page for three engines with different length preferences?
For most pages, you don't — you pick the dominant engine for your market and optimize for it. For your top 5–10 highest-intent pages, it's worth building two versions: a longer comprehensive version for ChatGPT and Google AIO, and a tight focused version for Perplexity. Different URL structures or a dynamic template can handle both.
What's the fastest single change that would improve our AI citation rate?
Rewrite the opening paragraph of your top 10 model pages to lead with a definition plus one specific number. Example: 'The 2026 Tacoma TRD Pro is Toyota's mid-size off-road pickup, built around a 2.4-liter turbo hybrid with 326 horsepower and 33-inch tires.' Definitional openings produced the largest single feature lift in the dataset (+41% to +70%).

Sources & References

  • Zhang Kai — Original GEO Citation Research. Dataset of 602 prompts, 21,143 AI search citations, and 18,151 re-scraped source pages across ChatGPT, Google AI Overview, and Perplexity.
  • Yao Jingang — geo-citation-lab (open-sourced analysis). Secondary analysis and open-sourcing of the Zhang Kai dataset; primary source for the industry-cut statistics used in this post.

Want to See Where Your Pages Actually Rank in AI Search?

We benchmark your model pages against the structural features this dataset says win. You get a page-by-page report showing which ones have a definition, which ones have numbers, which ones are thin — and which competitor in your market is already getting cited.


DROP YOUR URL — WE'LL SHOW YOU WHO'S OUTRANKING YOU AND WHAT IT'S COSTING YOU.

See where you stand in AI search.