The stack I actually use.
Actual client work stack. Not sponsored. Not a wish list — every tool here has a specific role in how I run GEO, AEO, and SEO automation engagements.
Research & Keywords
Primary tool for keyword research, backlink analysis, and topical gap mapping across client sites. Content Explorer and Site Explorer are the two most-used modules — keyword discovery and authority benchmarking happen in Ahrefs before any content strategy is built.
Performance tracking for every client property — impressions, clicks, position, and Core Web Vitals from real user data. The indexing report and URL inspection tool are the first stop when a page isn't ranking where it should.
Competitor analysis and keyword difficulty benchmarking, particularly for domain overview and competitive positioning work. Used alongside Ahrefs rather than as a replacement — the two tools surface different data points.
SERP data API for pulling live search results at scale — useful for keyword clustering validation and competitor ranking snapshots without manual searching. Integrated directly into n8n automation workflows for scheduled rank tracking.
Technical SEO
Full-site crawls are the starting point for every technical SEO engagement — redirect chains, broken internal links, duplicate title tags, missing schema, and thin content all surface in the first crawl. Redirect audit and custom extraction modes are particularly useful for Shopify sites with complex URL structures.
Structured data generation and JSON-LD validation for pages that need Schema.org markup beyond what CMS plugins provide. Used for custom schema types — DefinedTermSet, Speakable, SpeakableSpecification — that most generators don't support.
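For the schema types named above, the markup is plain JSON-LD and can be generated programmatically. A minimal sketch — the selectors and glossary entries are illustrative, and the output should still be run through a validator before deployment:

```python
import json

def speakable_jsonld(url: str, css_selectors: list[str]) -> str:
    """WebPage carrying a SpeakableSpecification for key sections."""
    data = {
        "@context": "https://schema.org",
        "@type": "WebPage",
        "url": url,
        "speakable": {
            "@type": "SpeakableSpecification",
            "cssSelector": css_selectors,
        },
    }
    return json.dumps(data, indent=2)

def defined_term_set_jsonld(name: str, terms: dict[str, str]) -> str:
    """DefinedTermSet (a glossary) with one DefinedTerm per entry."""
    data = {
        "@context": "https://schema.org",
        "@type": "DefinedTermSet",
        "name": name,
        "hasDefinedTerm": [
            {"@type": "DefinedTerm", "name": term, "description": desc}
            for term, desc in terms.items()
        ],
    }
    return json.dumps(data, indent=2)
```

The resulting string goes into a `<script type="application/ld+json">` tag in the page head.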
The crawl stats report and manual actions panel are the two most-used technical features — crawl budget consumption and any manual penalties are visible here before they show up anywhere else. Core Web Vitals report gives real-user field data that Lighthouse alone can't provide.
AI Search Tracking
Automated monitoring for AI Overview and Perplexity citation appearances across a defined query set — runs daily checks so citation share changes are caught without manual prompt testing. The primary dashboard for reporting AI search visibility to clients in a reproducible way.
Prompt-by-prompt citation audits for target commercial queries — run systematically at the start of each engagement to establish baseline citation share, and monthly to track changes. Manual testing surfaces nuances that automated tools miss, particularly whether an AI model paraphrases a source or cites it directly.
UTM-parameterised links embedded in indexed content allow ChatGPT and Perplexity referral sessions to be tracked in GA4 — making AI-referred traffic measurable and attributable. This is how #1 ChatGPT citations are verified with data rather than screenshot evidence.
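Tagging those links is mechanical enough to automate. A minimal sketch of appending UTM parameters without clobbering an existing query string — the parameter values here are illustrative, not a prescribed naming convention:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_utm(url: str, source: str, medium: str = "referral",
             campaign: str = "geo-citation") -> str:
    """Return url with utm_source/utm_medium/utm_campaign appended,
    preserving any query parameters already present."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))
```

Sessions arriving through these links then show up in GA4 under the chosen source/medium instead of being lumped into generic referral traffic.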
Content & Briefs
Primary AI tool for content brief generation, metadata writing, GEO strategy drafting, and n8n workflow logic design. System prompts are calibrated per client brand voice — the standard across every client workflow is output that reliably matches the brand without heavy editing.
Citation testing for GEO work and entity research cross-checking — used as a secondary AI tool rather than a content production tool. ChatGPT's Bing-backed web search makes it useful for testing which sources it currently cites for target queries.
NLP-based on-page scoring for competitive content benchmarking — used to validate that a piece of content covers the right topical terms at the right density before publishing. Useful for clients who want a quantified content score to track over time alongside qualitative GEO signals.
Automation
Open-source workflow automation platform that orchestrates all automation work — Claude API calls, Ahrefs data pulls, Google Sheets output, and scheduled reporting are all routed through n8n. Self-hosted setup gives full control over data routing and avoids API rate limit issues that plague SaaS automation tools.
Programmatic access to Claude for tasks at volume — keyword clustering across hundreds of terms, metadata generation for entire product catalogues, and content brief creation from structured keyword data. The API is what makes the automation system scale: one workflow can process hundreds of inputs with consistent quality.
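The pattern that makes volume work tractable is batching: hundreds of keywords become a handful of prompt-sized API calls. A minimal sketch of that batching layer — the batch size and prompt wording are assumptions, and the actual API call is omitted:

```python
from typing import Iterator

def batches(items: list[str], size: int = 50) -> Iterator[list[str]]:
    """Yield fixed-size chunks of items; the last chunk may be short."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def clustering_prompt(keywords: list[str]) -> str:
    """Build one clustering prompt per batch (illustrative wording)."""
    joined = "\n".join(f"- {kw}" for kw in keywords)
    return ("Group these keywords into topical clusters and give each "
            f"cluster a short name:\n{joined}")
```

Each batch's prompt is then sent through the API node in the workflow, and the responses are merged back into a single output sheet.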
Structured output destination for automation workflows — keyword clusters, briefs, and metadata outputs all land in Sheets so clients can review, approve, and annotate without needing access to the automation system itself. Acts as the human review layer between automated output and implementation.
Analytics & Reporting
Client-facing SEO dashboards pulling from Google Search Console, GA4, and custom data sources — built once per client and set to auto-refresh so reporting requires no manual data compilation. Dashboards are designed around the KPIs that matter for the engagement type: AI citation share, organic traffic, and conversion for GEO clients; rankings and topical coverage for traditional SEO clients.
Traffic analysis, AI referral tracking, and conversion measurement across all client properties. The acquisition report segmented by source/medium is the primary view for measuring AI search traffic growth — chatgpt.com and perplexity.ai referral sessions are tracked here alongside traditional organic channel data.
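The source/medium segmentation above amounts to a small classification rule. A minimal sketch — the domain list is an assumption and will need extending as new assistants send referral traffic:

```python
# Assumed referral sources for AI assistants; extend as needed.
AI_REFERRAL_SOURCES = {"chatgpt.com", "chat.openai.com", "perplexity.ai"}

def is_ai_referral(source: str, medium: str) -> bool:
    """True when a GA4 session source/medium pair is an AI referral."""
    return medium == "referral" and source.lower() in AI_REFERRAL_SOURCES
```

Applied to exported GA4 session rows, this splits AI-referred sessions from the rest of the referral channel so their growth can be reported separately.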
Want to know which tools apply to your problem?
The right stack depends on what you're trying to accomplish — GEO visibility, technical cleanup, or automation at scale. Start with a conversation.