Most “Topical Authorities” Are Keyword Lists in a Trench Coat
Ask ten SEOs to define a topical map and nine of them will describe a spreadsheet: a list of keywords grouped by subtopic, maybe organized into a pillar-and-cluster structure, with a content calendar attached. They call it a topical map. What they’ve actually built is keyword research with extra steps.
The distinction matters because a keyword list and a topical map answer different questions. A keyword list answers: what terms should we rank for? A topical map answers: what does a search engine need to understand about this domain to recognize this website as the authoritative source on this subject?
Those are not the same question. The first optimizes for individual queries. The second optimizes for how Google and AI systems build a mental model of your entire website’s expertise.
Koray Tugberk Gubur, whose Holistic SEO framework is the most rigorously developed semantic SEO methodology in the industry, defines topical authority as “a semantic SEO methodology to rank higher on search engine result pages by processing connected topics and entailed search queries with accurate, unique, and expert information.” The operative word is connected. Authority doesn’t come from individual pages. It comes from the semantic relationships between pages and how completely those pages cover a domain.
The formula he established is: Topical Authority = Topical Coverage × Historical Data. Coverage is how completely you address a subject. Historical data is the accumulated user behavior, engagement signals, and ranking stability that search engines record over time. You can accelerate coverage through systematic architecture. You can’t shortcut historical data. Building the right architecture now, before your competitors do, creates a compounding advantage that becomes harder to close every month.
This guide covers the methodology I’ve implemented across client engagements, built directly from Koray’s framework and extended with Claude Code workflows that use live Ahrefs and GSC MCP connections. The entity SEO foundation from the previous post in this series is the prerequisite. Topical authority determines whether your entity’s semantic neighborhood is credible. Entity structure determines whether machines can read and cite what you’ve built.
The Five Structural Components of a Topical Map
A properly structured topical map has five components. Most practitioners know the last two (the core and outer sections) and skip the first three, which is why their topical maps produce keyword coverage without authority signals.
Source Context is the foundational definition of why your website exists in search. It’s not your tagline or your value proposition. It’s the lens through which every content decision gets filtered. A mortgage broker’s source context might be: “An Arizona-licensed mortgage originator serving first-time homebuyers in Maricopa County with FHA, VA, and USDA programs.” Every topic that belongs in the topical map must connect logically to this source context. Every topic that doesn’t connect should not exist on the site, regardless of search volume.
Source context also tells you what your website is not. A mortgage broker writing about home renovation trends or neighborhood lifestyle guides is diluting their source context, distributing semantic signals across topics that don’t reinforce their authority domain. That content might generate traffic. It won’t build topical authority.
Central Entity is the concept that appears across every section of the topical map. For a business, it’s usually the organization itself or the primary service category. For a personal brand, it’s the author. The central entity appears in the H1 of the homepage, in the entity schema, in the About page entity home, and as an explicit reference point in every content cluster. This connects directly to the entity graph work covered in the entity SEO post. Your central entity is the same entity you’re building the knowledge graph around.
Central Search Intent is the unification of source context and central entity into a single primary query pattern. For the mortgage broker example, the central search intent is something like “FHA mortgage broker Arizona.” This is the query your homepage should rank for, and it defines the gravitational center of your entire content network. Every piece of content either supports this central intent or extends its authority into a related subtopic.
Core Section is where your ranking power concentrates and where your business monetization happens. For a mortgage broker, the core section includes service pages for each loan type, location pages for each service area, and the key comparison and decision-stage content buyers need. These are your quality nodes: the pages closest to the homepage in crawl depth, built with complete EAV coverage, full schema, and intent-progressive internal links pointing back to the central entity.
Outer Section is where topical coverage breadth lives. These are the informational, educational, and question-answering pages that demonstrate deep subject matter expertise. For a mortgage broker, the outer section includes guides to loan requirements, credit score explanations, first-time homebuyer programs, down payment assistance resources, and FAQ-style content. Outer section pages serve two functions: they capture long-tail informational queries, and they build the historical data layer that search engines use to validate the core section’s authority.
The outer section is not where you put your best content. It’s where you build the topical surface area that makes your core section credible.
Coverage Sequencing: Build the Outer Section Before the Core
This is the rule that most practitioners get backward, and it has significant consequences for how quickly topical authority builds.
The conventional approach is to start with the pillar page (the high-competition, high-value keyword) and then fill in the supporting cluster content around it. Koray’s framework reverses this order, and the logic is straightforward once you understand how crawl priority and authority accumulation work.
Quality nodes (your core section pages targeting competitive queries) need to be close to the homepage in crawl depth so search engines find and prioritize them. But they also need topical support from the outer section to rank for competitive queries. If you publish a pillar page about FHA loans in Arizona before you’ve established any topical authority on FHA loans at all, that page competes on thin ground. It has content but no semantic network supporting it.
If you publish 15 to 20 outer section pages first, covering credit score requirements, down payment minimums, FHA inspection standards, first-time buyer programs by county, and common FHA misconceptions, you build a semantic neighborhood around FHA lending before the pillar page ever goes live. When the pillar page publishes, it enters a domain that already has topical signals on the subject. The historical data has a head start.
Koray’s own publishing cadence reflects this logic. He documented publishing one article every three days initially, accelerating to one per day, then three per day as the site built authority. The acceleration mirrors how crawl demand responds to consistent content publication: as search engines allocate more crawl budget to the domain, you publish more to fill that budget with relevant content.
For practitioners working on existing sites rather than new builds, the sequencing rule applies to expansion decisions. Before targeting a new primary keyword cluster, build three to five outer section pieces in that cluster first. Establish the topical neighborhood before committing to the pillar.
Vertical Depth Beats Horizontal Breadth by 340%
The instinct when building topical authority is to spread wide, covering as many relevant topics as possible to maximize topical coverage breadth. Data from a 2026 analysis of high-authority sites suggests this is the wrong priority order.
An analysis of 247 high-authority sites across competitive niches found that vertical scaling (going deeper on fewer topics) outperformed horizontal expansion (broader coverage of more topics) by 340% in organic traffic growth. The specific configuration that produced the best results was 3 to 5 core topic clusters, each containing 25 to 40 interconnected pieces of content. Sites with 15 to 20 shallow clusters covering the same total keyword volume consistently underperformed.
This finding maps directly to how AI systems evaluate topical authority for citation purposes. Claude, Perplexity, and Google AI Overviews all favor sources that demonstrate deep, interconnected expertise in a specific domain over sources that demonstrate shallow coverage of many domains. A site with 35 tightly integrated pieces on FHA lending in Arizona will get cited in response to FHA lending queries more reliably than a site with 200 pieces loosely covering all mortgage types nationwide.
The practical implication for a new or restructuring site: define your 3 to 5 core clusters first and go deep before going wide. Map every question a buyer could have within those clusters. Cover the entity’s attributes at every level of specificity. Build the inner semantic neighborhood completely before expanding the topical map into adjacent clusters.
The timeline for vertical depth to produce results is also faster. Sites using deep vertical cluster strategies see ranking improvements in 90 to 120 days. Horizontal expansion attempts typically take 6 to 9 months for the same improvements. Depth compounds quickly because search engines can resolve your authority on a topic conclusively. Breadth takes longer because authority on any individual topic remains ambiguous.
The Semantic Content Brief: The Bridge Between Map and Page
The topical map tells you what to create. The semantic content brief tells you how to create each piece within the map. Most content operations skip the brief entirely and go straight from keyword to draft. The result is content that exists within the topical map by keyword but contributes nothing to the semantic content network by structure.
A semantic content brief is a pre-writing specification document that operationalizes the topical map into page-level instructions. Every page in the topical map gets a brief before a word of content is written. The brief contains:
Macro context: The single primary topic this page covers. One sentence maximum. This maps directly to the page’s H1 and title. If the macro context can’t be stated in one sentence, the page is trying to cover too much and needs to be split.
Target entities: Every real-world entity the page must reference explicitly. For an FHA loan page, these include: FHA 203(b) loan program, HUD, NMLS, Arizona Department of Financial Institutions, Maricopa County, and the specific organization providing the loan. These entities are written as EAV triples in the content, not mentioned incidentally.
Required Question H2s: The headings for the page, formatted as direct questions. “What credit score do you need for an FHA loan in Arizona?” “How long does FHA loan approval take?” “What is the FHA loan limit for Maricopa County in 2026?” These headings define the page’s question coverage and serve as extractive answer targets for featured snippets and AI citations.
40-word extractive answers: A pre-written 40-word answer for each Question H2 that stands completely alone without surrounding context. This is the content that AI systems extract as citation-worthy passage content. Writing it in the brief rather than during content creation forces the right level of specificity before the writer starts. The answer engine optimization guide covers exactly how AI systems select and extract these passage-level answers across Google, ChatGPT, and Perplexity.
Rare and unique attributes: The specific EAV data points competitors have not covered. For the FHA page, this might be: exact FHA loan limits by Arizona county for the current year, specific FICO score thresholds used by this particular lender, or the processing timeline breakdown specific to this office. These differentiate the page semantically from competing pages covering the same core topic.
Internal linking spec: Which pages this page links to (with anchor text), which pages link to this page (with anchor text), and which page is the hub for this content cluster. Not a general directive to “add internal links”: each link is specified as a source page, a target page, and the exact anchor text that reflects the target’s macro context.
The brief structure follows Koray’s EAV model and the 41 authorship rules from his Entity-Attribute-Value methodology. Building the brief in Claude Code means the entity context from CLAUDE.md feeds directly into every brief without re-explaining the entity architecture.
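To make the EAV requirement concrete, here is a minimal Python sketch of how a brief’s target entities might be held as explicit entity-attribute-value triples and turned into a coverage checklist for the writer. The entities, attributes, and values below are illustrative examples, not data from a real brief:

```python
# Minimal sketch: a brief's target entities expressed as explicit EAV triples.
# Entity names, attributes, and values are illustrative examples only.

eav_triples = [
    # (entity, attribute, value)
    ("FHA 203(b) loan program", "minimum credit score", "580 with a 3.5% down payment"),
    ("FHA 203(b) loan program", "minimum down payment", "3.5% of the purchase price"),
    ("Maricopa County FHA lending", "2026 loan limit", "placeholder: insert the current HUD figure"),
]

def as_checklist_line(entity: str, attribute: str, value: str) -> str:
    """One line the writer must express explicitly somewhere in the body copy."""
    return f"{entity} | {attribute} | {value}"

for entity, attribute, value in eav_triples:
    print(as_checklist_line(entity, attribute, value))
```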
Internal Linking Architecture: I-Nodes and Intent Progression
Koray’s internal linking framework distinguishes three types of links by where they appear and how much semantic weight they carry:
S-nodes are site-wide navigation links: the links in your header, footer, and sidebar that appear on every page. Search engines treat these as structural signals, not topical authority signals. They communicate site architecture, not subject expertise.
C-nodes are block-level content links: links placed in content sections, callout boxes, or related-content modules. They carry moderate topical signal.
I-nodes are individual contextual links placed within sentences in the main content body. A specific sentence making a specific claim links to the page that expands on that claim, with anchor text that directly reflects the target page’s macro context. Koray identified 3 contextual I-node links per article as the target, and he uses I-nodes exclusively in his implementations because they carry the highest topical authority signal of any internal link type.
The reason I-nodes outperform other link types is that they appear within the semantic context of a specific claim. The surrounding sentence tells the search engine exactly what the linked page is about and why this page considers that page relevant. S-node and C-node links lack that contextual grounding.
Intent progression is the directional principle for how internal links flow across the content network. Informational outer section content links toward commercial core section content. Comparison and decision-stage content links toward conversion pages. The flow always moves toward commercial intent, distributing ranking signal from high-traffic informational pages toward the pages that generate business value.
The 3 contextual bridges rule applies when expanding the topical map into a new cluster. Before creating a new cluster of content, identify at least 3 existing pages where a natural I-node link can point toward the new cluster. Those 3 bridges establish the semantic connection between your existing content network and the new cluster before the new cluster pages exist. When the new pages go live, they enter a network that already references them.
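As a rough illustration of how the bridge shortlist might be assembled, here is a minimal Python sketch that scores existing pages by how many of the planned cluster’s entities they already mention. The URLs, body excerpts, and entity terms are hypothetical, and the final judgment about whether an I-node link reads naturally in context stays with the editor:

```python
# Rough heuristic sketch: shortlist existing pages that could host a contextual
# bridge (an in-body I-node link) toward a planned new cluster.
# All page URLs, body excerpts, and entity terms below are hypothetical examples.

existing_pages = {
    "/services/fha-loans/": "FHA loans require mortgage insurance; rural buyers sometimes compare USDA options.",
    "/blog/fha-credit-score-requirements/": "Minimum FICO thresholds for FHA approval and how lenders apply them.",
    "/blog/first-time-homebuyer-checklist/": "Down payment assistance, USDA eligibility maps, and closing cost planning.",
}

new_cluster_entities = ["USDA", "rural development loan", "eligibility map"]

def bridge_score(body: str, entities: list[str]) -> int:
    """Count how many of the new cluster's entities a page already mentions."""
    text = body.lower()
    return sum(1 for e in entities if e.lower() in text)

candidates = sorted(
    ((url, bridge_score(body, new_cluster_entities)) for url, body in existing_pages.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

shortlist = [url for url, score in candidates if score > 0][:3]
if len(shortlist) < 3:
    print("Fewer than 3 natural bridge pages found; plan bridge content before launching the cluster")
print(shortlist)
```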
The 6-Month Configuration Audit
Topical maps aren’t static documents. Query patterns shift as language evolves, new entities emerge, and search engine NLP models update their understanding of semantic relationships. A topical map built in Q4 2025 and left unchanged through Q2 2026 will have accumulated drift between its architecture and the current query landscape.
Koray specifies a minimum 6-month content configuration audit cycle. The audit has three components:
First, identify pages where the primary entity’s salience has dropped or where new competing entities have displaced it in Google’s NLP scoring. Pull GSC impression data for these pages and compare it against the query clusters they were built to serve. Declining impressions on specific queries signal that the page’s topical alignment has drifted from those queries.
Second, identify new query patterns in your semantic neighborhood that didn’t exist when the topical map was built. GSC’s query data is the primary source for this. Queries generating impressions but no clicks typically indicate pages with weak or missing coverage for the exact question being asked. These become additions to the outer section of the topical map.
Third, identify internal linking decay: pages that were quality nodes in the original architecture but whose link equity has eroded as the site expanded around them without adding new I-node links pointing to them. These pages need additional contextual bridges from newer content.
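As an illustration of the first check, here is a minimal Python sketch that compares per-page impressions across two reporting windows and flags likely drift. The rows are hypothetical stand-ins for whatever your GSC export or MCP call returns, and the 25% threshold is an assumption, not a figure from Koray’s framework:

```python
# Minimal sketch of the first audit check: flag pages whose impressions have
# dropped sharply between two reporting windows (a topical-drift signal).
# The rows are hypothetical; substitute real per-page GSC data.

gsc_rows = [
    # (page, impressions_previous_period, impressions_current_period)
    ("/services/fha-loans/",                 4200, 3900),
    ("/blog/fha-credit-score-requirements/", 1800,  950),
    ("/blog/fha-loan-limits-arizona-2026/",   600,  640),
]

DRIFT_THRESHOLD = 0.25  # flag pages losing more than 25% of impressions (assumed cutoff)

for page, prev, curr in gsc_rows:
    if prev == 0:
        continue
    change = (curr - prev) / prev
    if change <= -DRIFT_THRESHOLD:
        print(f"DRIFT: {page} impressions down {abs(change):.0%}; re-check topical alignment")
```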
The configuration audit is where the Claude Code workflow earns its keep in ongoing site management. Running the audit manually across a 100-page site takes the better part of a day. A properly built Claude Code skill running against live GSC and Ahrefs data completes it in minutes and produces a prioritized action list.
Why This Workflow Cannot Be Replicated in ChatGPT
Several tools exist specifically to generate topical maps: Topical Map AI, MarketMuse, and a handful of others. They’re useful for initial keyword discovery, but they have a fundamental limitation for practitioner-level work: they operate on generic keyword data, not your specific site’s performance data.
A generic topical map generator doesn’t know which clusters your site already has partial authority in. It doesn’t know which of your pages Google already associates with specific entities. It doesn’t know which competitor pages are outranking yours and exactly which entity attributes they cover that you don’t. It generates a keyword-based content outline based on industry norms, not a semantic architecture based on your actual competitive position.
But the deeper limitation isn’t the tool category. It’s the underlying paradigm. A ChatGPT workflow for topical mapping looks like this: export a keyword CSV from Ahrefs, format it for the prompt, paste it into the conversation, get output, copy that output into a spreadsheet, then start the next session from scratch with no memory of prior decisions. Every session requires the same manual data transfer. Every output is disconnected from what came before. The topical map lives in a document you maintain by hand.
The Claude Code workflow described in this post is architecturally different in six specific ways:
Live MCP data, not exports. When /topical-map-builder runs, it calls the Ahrefs API directly via MCP and the GSC API directly via MCP. The data is live at the moment of execution. No CSV export, no formatting, no paste. A ChatGPT prompt requires fresh manual exports every time you run the analysis.
Persistent agent memory across sessions. CLAUDE.md loads automatically at the start of every Claude Code session. The agent knows your complete topical architecture, your entity graph, your active clusters, and the gaps from the last audit without you providing any context. ChatGPT has no memory between sessions. Every conversation starts blank, and you re-explain the same architecture every time or paste in context you’ve maintained elsewhere.
Autonomous scheduled execution. /loop 7d /topical-gap-monitor tells Claude Code to re-execute the full gap audit every seven days without you opening the tool. The topical map in CLAUDE.md updates while you’re working on other things. ChatGPT cannot execute on a schedule. It runs when you prompt it and stops when you close the window.
Skills as permanent saved workflows. The three skills in this post are saved once to .claude/skills/ and become permanent slash commands. /topical-map-builder runs the same multi-step, multi-source workflow every time from a single command. In a ChatGPT workflow, the prompt is re-written or re-pasted each session. There is no equivalent to a saved skill that persists and executes on demand.
Parallel subagent execution. When /topical-map-builder pulls organic keywords, competitor top pages, and GSC data, it can spawn subagents to call all three simultaneously. The analysis that would require 15 sequential ChatGPT tool calls completes in the time it takes to run two or three. For sites with large content inventories, this difference is not marginal.
Direct file system writes. When the gap monitor identifies new coverage gaps, it writes them to CLAUDE.md immediately. The findings persist automatically. The topical map is always current. In a ChatGPT workflow, you copy the output into wherever you store your topical map. That manual step is where architectural drift begins, because the copy doesn’t always happen, the storage isn’t always updated, and the next session doesn’t have the latest state.
The result is a topical authority system that runs as infrastructure rather than as a project. The map maintains itself. Gaps surface automatically. Every content decision the agent makes references the live state of the architecture, not a snapshot from the last time you remembered to update a spreadsheet.
The CLAUDE.md Topical Map Format
Add this section to the CLAUDE.md in every client SEO project. It’s the living topical map that every downstream skill reads before executing:
# Topical Map
## Source Context
[One sentence defining why this website exists in search — not marketing language,
the actual semantic purpose. Example: "An FHA-approved mortgage originator serving
first-time homebuyers in Maricopa and Pima counties with government-backed loan programs."]
## Central Entity
[The primary entity this site is built around, matching the @id in the entity schema.
Example: "Arnaiz Mortgage — FHA-approved mortgage broker, Surprise AZ"]
## Central Search Intent
[The primary query this site aims to own.
Example: "FHA mortgage broker Maricopa County Arizona"]
## Core Section (Quality Nodes)
Pages closest to homepage in crawl depth. Ranked by priority.
- /services/fha-loans/ — [brief macro context]
- /services/va-loans/ — [brief macro context]
- /services/usda-loans/ — [brief macro context]
- /locations/maricopa-county/ — [brief macro context]
## Outer Section (Coverage Nodes)
Informational and educational content supporting core section authority.
Published: (list completed URLs)
- /blog/fha-credit-score-requirements/
- /blog/fha-loan-limits-arizona-2026/
Planned: (list topics not yet published)
- FHA down payment assistance programs Arizona
- First-time homebuyer checklist Arizona
- FHA vs conventional loan comparison
## Active Topic Clusters
[List each cluster with status]
Cluster 1: FHA Loans Arizona — 14 pages, 8 published, 6 planned
Cluster 2: VA Loans Arizona — 6 pages, 3 published, 3 planned
Cluster 3: First-Time Homebuyer Programs — 10 pages, 4 published, 6 planned
## Topical Gaps (from last audit)
[Date of last audit]
- Missing: FHA loan limits by Arizona county (current year)
- Missing: FHA inspection requirements Arizona
- Thin: /blog/va-loan-eligibility/ — covers topic but no EAV structure
## Internal Linking Priorities
Pages needing additional I-node links pointing to them:
- /services/fha-loans/ — needs 2 more I-node links from outer section
- /locations/maricopa-county/ — needs I-node links from FHA content cluster
## 3 Contextual Bridges (planned expansion)
Before adding USDA cluster, establish bridges from:
- /blog/fha-vs-usda-comparison/ (planned)
- /services/fha-loans/ (existing, add USDA mention)
- /locations/rural-arizona/ (planned)
This file is the single source of truth for the site’s topical architecture. Every skill reads it. Every brief references it. Every gap audit updates it.
The Claude Code Workflow: Three Skills
Skill 1: Topical Map Builder
Save as .claude/skills/topical-map-builder.md:
# /topical-map-builder
Build or update the topical map for this site using live Ahrefs and GSC data.
Reads and updates CLAUDE.md. Run this before any content planning session.
## Steps 1-3: Parallel Data Pull (spawn three subagents simultaneously)
These three sources are fully independent. Do not run them sequentially.
Spawn three subagents in parallel and wait for all three to complete before Step 4.
On a 100-keyword site, sequential execution takes ~6 minutes. Parallel takes ~2 minutes.
ChatGPT executes tool calls one at a time. This is not possible there.
Subagent A — Current Site Coverage:
Use the ahrefs site-explorer-organic-keywords MCP tool to retrieve all keywords
this domain ranks for in positions 1-20. Group by parent topic.
This shows what topical clusters already exist with ranking signals.
Subagent B — Competitor Coverage:
Using site context from CLAUDE.md, use ahrefs site-explorer-organic-competitors MCP
to identify the top 5 organic competitors.
For each competitor, use ahrefs site-explorer-top-pages MCP to pull their top 20 pages
by organic traffic. Use WebFetch MCP to retrieve each page.
From each page, extract:
- Primary topic (macro context)
- Secondary entities covered
- H2 headings (these reveal the subtopic coverage depth)
- Whether content is informational, commercial, or transactional
Subagent C — GSC Query Gap Analysis:
Use GSC MCP (gsc-keywords or gsc-pages tool) to pull:
- Pages with impressions but CTR under 3% (topical coverage exists, content not extractable enough)
- Queries generating impressions with no ranking page (topical gaps)
- Queries where you rank positions 6-20 (partial coverage needing depth work)
## Step 4: Merge subagent results
## Step 5: Three-Type Gap Classification
Classify every identified gap into one of three categories:
Coverage Gap: Topic exists on competitor sites but not on this site at all.
Action: Add to Outer Section planned list in CLAUDE.md.
Depth Gap: Topic exists on this site but competitor covers more entity attributes, more H2 questions,
or more specific values.
Action: Add to Thin coverage list in CLAUDE.md with specific missing attributes noted.
Opportunity Gap: No competitor has fully covered this topic.
Action: Flag as priority — these are the rare and unique attribute opportunities from Koray's framework.
## Step 6: Update CLAUDE.md
Update the Topical Map section in CLAUDE.md with:
- Any new coverage gaps identified (add to planned list with gap type)
- Any thin coverage pages identified (add to gaps section)
- Revised cluster status counts
- New contextual bridge opportunities
## Step 7: Output Summary
Print a prioritized content roadmap:
- Top 5 coverage gaps (competitor coverage depth + estimated traffic opportunity)
- Top 3 depth gaps (specific missing attributes that would most improve entity salience)
- Top 2 opportunity gaps (topics with no strong competition)
Ordered by: [opportunity gaps first] [depth gaps] [high-traffic coverage gaps]
Run with: /topical-map-builder
This skill does in under 10 minutes what a manual content gap session takes a full day to do. The three data pulls run as parallel subagents simultaneously — Ahrefs site coverage, competitor pages, and GSC queries all execute at once and converge at the gap classification step. That’s structurally impossible in ChatGPT, which executes tool calls one at a time. Results get classified by gap type, written directly to CLAUDE.md, and are available in every future session without you touching a spreadsheet.
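To make that parallel-then-classify shape concrete, here is a minimal Python sketch. The three async functions are hypothetical stand-ins for the Ahrefs and GSC MCP calls named in the skill, and the classification and ordering mirror the three gap types from Step 5:

```python
import asyncio

# Sketch of the skill's shape: three independent data pulls run concurrently,
# then every surfaced topic is classified as opportunity / depth / coverage gap.
# The fetch functions are hypothetical stand-ins for the MCP tools named above.

async def fetch_site_keywords() -> set[str]:
    await asyncio.sleep(0.1)  # placeholder for the Ahrefs organic-keywords pull
    return {"fha credit score requirements", "fha loan limits arizona"}

async def fetch_competitor_topics() -> set[str]:
    await asyncio.sleep(0.1)  # placeholder for the competitor top-pages analysis
    return {"fha credit score requirements", "fha inspection requirements", "fha vs conventional"}

async def fetch_gsc_gap_queries() -> set[str]:
    await asyncio.sleep(0.1)  # placeholder for the GSC query-gap pull
    return {"fha down payment assistance arizona"}

def classify(topic: str, on_site: set[str], on_competitors: set[str]) -> str:
    if topic in on_site and topic in on_competitors:
        return "depth gap"        # both cover it; compare attribute depth
    if topic in on_competitors:
        return "coverage gap"     # competitors cover it, this site does not
    return "opportunity gap"      # nobody has fully covered it yet

async def main() -> None:
    # The three pulls are independent, so gather() runs them concurrently.
    site, competitors, gsc_gaps = await asyncio.gather(
        fetch_site_keywords(), fetch_competitor_topics(), fetch_gsc_gap_queries()
    )
    candidates = competitors | gsc_gaps
    priority = {"opportunity gap": 0, "depth gap": 1, "coverage gap": 2}
    gaps = sorted(
        ((t, classify(t, site, competitors)) for t in candidates),
        key=lambda pair: priority[pair[1]],
    )
    for topic, gap_type in gaps:
        print(f"{gap_type:<16} {topic}")

asyncio.run(main())
```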
Skill 2: Semantic Content Brief Generator
Save as .claude/skills/content-brief.md:
# /content-brief
Generate a semantic content brief for a specific page in the topical map.
Usage: /content-brief [topic or URL]
Reads topical map and entity architecture from CLAUDE.md before generating.
## Step 1: Confirm Topic Placement
Identify whether this topic belongs in the Core Section or Outer Section based on CLAUDE.md.
Identify which cluster it belongs to.
Identify which quality node it supports (for outer section) or whether it is itself a quality node.
## Step 2: Competitor Content Analysis
Use WebFetch MCP to retrieve the top 3 ranking pages for the target query.
For each, extract:
- H1 and all H2/H3 headings
- Word count estimate
- Schema types present (from JSON-LD if visible)
- Entity coverage (what entities are explicitly named)
- Any attributes our CLAUDE.md entity does not currently cover
## Step 3: Keyword and Entity Data
Use ahrefs keywords-explorer-matching-terms MCP for the primary topic keyword.
Pull: search volume, keyword difficulty, top-ranking URL, related terms.
Use ahrefs keywords-explorer-related-terms for semantic expansion.
Extract any entities or attributes from related terms not yet in CLAUDE.md entity architecture.
## Step 4: Build the Brief
Output a semantic content brief containing:
Page Type: [Core / Outer]
Cluster: [cluster name from CLAUDE.md]
Target URL Slug: [suggested]
Primary Keyword: [with volume and KD]
Macro Context (one sentence):
[Single declarative sentence stating exactly what this page is about]
Target Entities (explicit EAV required for each):
- [Entity 1]: attributes to cover = [list]
- [Entity 2]: attributes to cover = [list]
Rare/Unique Attributes (from competitor gap analysis):
- [Attribute competitor A misses]
- [Attribute competitor B misses]
Required H2 Questions (with 40-word pre-written answers):
H2: [Question 1]
Answer: [Exactly 40 words. Standalone. Direct answer. No preamble.]
H2: [Question 2]
Answer: [Exactly 40 words. Standalone. Direct answer. No preamble.]
[Continue for all required H2s — minimum 4, maximum 8]
Schema Required:
- [Schema types needed for this page type]
Internal Links FROM this page (I-nodes):
- Link to [target URL] with anchor text "[anchor]" in context: "[sentence where link goes]"
- [repeat for each — minimum 3]
Internal Links TO this page (I-nodes, from existing pages):
- [source URL] should link here with anchor text "[anchor]"
- [repeat for each — minimum 2]
Competitive Differentiation Note:
[One paragraph on what this page must do that the top 3 competitors don't — specific to the
rare/unique attributes identified and the opportunity gaps in CLAUDE.md]
Run with: /content-brief FHA loan limits Arizona 2026
The brief comes out pre-loaded with your entity architecture from CLAUDE.md. The agent already knows your central entity, your active clusters, your existing internal link targets, and the gap types identified in the last map-builder run. A writer receives a brief that a generalist AI tool couldn’t produce without access to all of that context simultaneously.
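One way to keep briefs consistent before they reach a writer is to treat each brief as structured data and validate it against the spec. The sketch below is an assumption about how you might do that in Python; the field names mirror the brief sections above, and the thresholds (roughly 40-word answers, at least 3 outbound and 2 inbound I-nodes, 4 to 8 question H2s) come directly from the spec in this post:

```python
from dataclasses import dataclass, field

# Minimal sketch: the semantic content brief as structured data, with checks
# that mirror the spec above (4-8 question H2s, ~40-word answers, I-node minimums).

@dataclass
class QuestionBlock:
    h2: str
    answer: str  # pre-written extractive answer, roughly 40 words

@dataclass
class ContentBrief:
    page_type: str                          # "Core" or "Outer"
    cluster: str
    macro_context: str                      # one declarative sentence
    target_entities: dict[str, list[str]]   # entity -> attributes to cover
    questions: list[QuestionBlock] = field(default_factory=list)
    internal_links_from: list[str] = field(default_factory=list)  # outbound I-node targets
    internal_links_to: list[str] = field(default_factory=list)    # pages that should link here

    def validate(self) -> list[str]:
        problems = []
        if not (4 <= len(self.questions) <= 8):
            problems.append("expected 4 to 8 question H2s")
        for q in self.questions:
            words = len(q.answer.split())
            if not (35 <= words <= 45):  # allow a little slack around 40 words
                problems.append(f"answer for '{q.h2}' is {words} words")
        if len(self.internal_links_from) < 3:
            problems.append("needs at least 3 outbound I-node links")
        if len(self.internal_links_to) < 2:
            problems.append("needs at least 2 inbound I-node links")
        return problems
```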
Skill 3: Topical Gap Monitor with Weekly Loop
Save as .claude/skills/topical-gap-monitor.md:
# /topical-gap-monitor
Weekly topical coverage audit. Reads CLAUDE.md, pulls fresh GSC and Ahrefs data,
identifies drift and new gaps since last audit. Updates CLAUDE.md gap section.
## Steps 1-3: Parallel Audit (spawn three subagents simultaneously)
These three checks are independent. Run them in parallel and merge at Step 4.
Subagent A — GSC Performance Delta:
Use GSC MCP to pull the last 28 days vs. prior 28 days for:
- Impressions by page (identify pages losing impressions — topical drift signal)
- New queries generating impressions (new gap opportunities)
- CTR by page (below 3% = content not extractable enough)
Subagent B — Competitor Movement Check:
Use ahrefs site-explorer-organic-competitors MCP to check if competitor ranking positions
have shifted significantly for the clusters in CLAUDE.md.
Flag any cluster where a competitor has gained 20+ positions on core queries this period.
Use ahrefs site-explorer-top-pages for those competitors to identify new pages they've published.
Fetch those pages via WebFetch to identify what new entity attributes or topics they've added.
Subagent C — CLAUDE.md Drift Check:
Review each page in the CLAUDE.md Core Section list.
Use WebFetch to retrieve the current live version of each page.
Check: does the page still reflect the macro context defined in CLAUDE.md?
Flag any pages where content has been edited in ways that weaken the primary entity's salience
or that dilute the macro context.
## Step 4: Merge subagent results and update CLAUDE.md
Update the Topical Gaps section with any new findings.
Update the date stamp on the last audit field.
## Step 5: Weekly Report
Output:
- Pages losing impressions (with likely cause: drift, competitor gain, or algorithm)
- New query opportunities identified this week
- Competitor new content that represents a threat to any cluster
- One priority action: the single highest-impact fix identified this week
Keep the report to one screen. Flag critical issues only.
Set on a weekly schedule:
/loop 7d /topical-gap-monitor
The loop runs every seven days and updates CLAUDE.md automatically. If this is your first time using the /loop command, the full guide covers how to build monitoring workflows and convert them to scheduled triggers. When you open a new session mid-week, the topical map in CLAUDE.md already reflects the most recent gap analysis. You never start a content planning session from stale data.
To keep the internal link architecture in your topical map structurally sound as you publish new content, the AI internal linking guide covers the Ahrefs MCP audit workflows that cross-reference your topical map’s links against the live site structure.
The Compound Effect: Why This Takes 12 Months and Why That’s Good News
The full timeline for topical authority to produce measurable results runs 12 to 18 months from a clean start. Initial impression growth typically appears in 4 to 8 weeks. Meaningful ranking improvements on competitive queries take 3 to 6 months. Full topical authority recognition by Google and AI systems takes 12 to 18 months. Historical data cannot be rushed.
This is strategically valuable, not a limitation. The 12 to 18 month build time means topical authority creates a moat that competitors cannot replicate quickly. A competitor who starts building topical authority today is still 12 to 18 months from full recognition, and if you started six months ago, that gap doesn’t close while your historical data keeps accumulating. A competitor who hasn’t started yet is two years out. The barrier to entry rises continuously as your historical data layer compounds.
The practitioners who fail at topical authority do so at month three, when the results aren’t yet visible and the content investment feels disproportionate to the traffic returns. The ones who succeed treat it as infrastructure, not as a content campaign. Infrastructure compounds. Campaigns end.
The /loop 7d /topical-gap-monitor running in the background means you maintain and expand the architecture without manual oversight. The topical map in CLAUDE.md stays current. Gaps get flagged when they appear, not after they’ve cost you months of traffic. The compound effect runs continuously, not just when you remember to check.
Frequently Asked Questions
What is the difference between a topical map and a content calendar?
A topical map is the strategic architecture: the complete inventory of topics, their relationships, their cluster assignments, and their internal linking structure. It defines what the content ecosystem needs to look like when complete. A content calendar is the execution schedule for building that architecture. The topical map answers “what do we need?” The calendar answers “when do we publish it?” Most content operations have calendars but no topical map, which means their calendars are scheduling activity rather than building architecture. You need the map before the calendar. Publishing content on a schedule without a topical map is like laying bricks without blueprints.
How many pages does a topical map need before it starts producing ranking results?
There is no universal threshold, but the vertical depth research suggests 25 to 40 interconnected pieces per topic cluster as the range where authority signals become strong enough to produce consistent ranking improvements. The more useful question is coverage completeness within your defined clusters, not total page count. A 25-page cluster that covers every significant entity attribute and question in the domain will outperform a 60-page cluster with shallow, overlapping coverage. Before measuring page count, measure how many of the required Question H2s and EAV triples in your semantic content briefs are actually covered across your existing pages.
What is source context and how does it affect what topics belong in a topical map?
Source context, as defined in Koray Tugberk’s framework, is the fundamental purpose your website serves in search, the lens through which every content decision gets filtered. Every topic in your topical map should connect logically to your source context. If a topic doesn’t reinforce or extend the semantic neighborhood defined by your source context, it creates dilution rather than authority. The practical test: can you write a one-sentence source context definition for your site, and does the proposed topic belong inside that sentence’s scope? If it requires a stretch to justify the connection, it probably dilutes rather than builds.
How does topical authority interact with domain authority and backlinks?
They are partially independent signals. Topical authority comes from content architecture, semantic coverage, and historical engagement data. Domain authority and backlinks come from external endorsement signals. Koray’s framework demonstrates that topical authority can produce competitive rankings without backlink dependence when the semantic content network is sufficiently complete. This is documented in his own case studies. The two do compound together. A site with strong topical authority and strong external links outperforms a site with strong topical authority alone. The strategic sequencing: build topical authority first because it’s fully within your control, then build external signals around the authority you’ve established. Backlinks to pages with strong topical authority produce more ranking impact than backlinks to pages with weak topical context.
Can you build topical authority on an existing site with legacy content, or does it require starting fresh?
Existing sites can build topical authority, but the process requires a configuration audit before any new content is created. Legacy content often has structural problems that actively prevent topical authority: pages covering multiple macro contexts (topical dilution), no internal link architecture, weak entity salience, and thin EAV coverage. Publishing new topical content on top of these structural issues adds pages without improving the semantic network. Run the /topical-map-builder skill to inventory what you have, identify which existing pages can be reconfigured rather than replaced, and build the CLAUDE.md topical map before any new publishing. For most existing sites, reconfiguring 10 to 15 existing pages is higher ROI than publishing 10 to 15 new ones.
How do you identify which of the three gap types (coverage, depth, opportunity) to prioritize?
Opportunity gaps first, then depth gaps, then coverage gaps. Opportunity gaps are topics where neither you nor your top competitors have fully built authoritative coverage; they are rare, and they represent a window that closes as competitors discover the same gap, which is why they come first. Depth gaps (topics where you have coverage but competitors go deeper on entity attributes) produce faster improvements because you’re improving existing pages, not waiting for new pages to accumulate historical data. Coverage gaps (topics competitors have that you don’t) require publishing new pages and waiting for authority to build. The /topical-map-builder skill classifies every identified gap by type automatically, so prioritization is built into the output rather than requiring separate analysis.

