Build a Brand Visibility Dashboard: Track Yourself in Google + ChatGPT + Gemini (Full Code)
Tracking your brand in 2026 is a two-surface job. Half of your money keywords show up in Google AI Overviews. The other half live inside ChatGPT, Gemini, Claude, and Perplexity. Most existing brand monitoring tools track one surface or the other — not both, and not in one screen.
This tutorial builds a brand visibility dashboard from scratch. It pulls Google SERP data plus AI Overview source lists, queries the four major LLMs for citation matches, stores everything in SQLite, and renders a Next.js dashboard that updates daily. You can run it on a $5 VPS or a free Vercel project.
Why Build This?
Off-the-shelf brand monitors (Brand24, Brandwatch, etc.) cost from $99 to $1,500 per month and almost none of them touch the AI surfaces in 2026. The custom dashboard takes an afternoon to build, costs around $0.50 per month to run, and gives you exact citation data the SaaS tools do not surface.
You can also do things commercial dashboards refuse to: track your competitors, run unlimited keywords, and dump the raw data into your data warehouse for joins.
The Stack
- SERP API: Serpent API for Google organic + AI Overview source list. Single endpoint returns both.
- AI Ranking API: Serpent's AI Ranking endpoint hits ChatGPT, Gemini, Claude, and Perplexity in parallel.
- Storage: SQLite for the prototype. Postgres if you outgrow a single file (you probably will not).
- Schedule: Plain Linux cron, or Vercel cron if you are deploying serverless.
- Frontend: Next.js 15 App Router. Tailwind for styling. ECharts or Recharts for the time-series.
- Email: Resend or Postmark for weekly digest emails.
Data Model
Two tables get you 80 percent of the way there:
CREATE TABLE brand_keywords (
  id INTEGER PRIMARY KEY,
  keyword TEXT NOT NULL,
  brand_domain TEXT NOT NULL,
  country TEXT DEFAULT 'us',
  added_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE visibility_snapshots (
  id INTEGER PRIMARY KEY,
  keyword_id INTEGER REFERENCES brand_keywords(id),
  snap_date TEXT NOT NULL,
  google_position INTEGER,      -- our rank, NULL if not top 100
  in_aio_sources INTEGER,       -- 1 if our domain is in the AIO sources
  aio_source_position INTEGER,  -- position inside the AIO source list
  chatgpt_cited INTEGER,        -- 1 if our domain is cited
  gemini_cited INTEGER,
  claude_cited INTEGER,
  perplexity_cited INTEGER,
  raw_aio_text TEXT,
  raw_aio_sources TEXT,         -- JSON array of source domains
  UNIQUE (keyword_id, snap_date)
);
Step 1: SERP Collector
One function takes a keyword and a brand domain and returns the Google snapshot.
// lib/collectors/serp.ts
const SERP_KEY = process.env.SERPENT_API_KEY!;

export async function collectGoogleSnapshot(
  keywordId: number, // not used here; the cron job writes the row
  keyword: string,
  brandDomain: string,
  country = "us"
) {
  const url = new URL("https://apiserpent.com/api/search");
  url.searchParams.set("q", keyword);
  url.searchParams.set("engine", "google");
  url.searchParams.set("country", country);
  url.searchParams.set("api_key", SERP_KEY);

  const r = await fetch(url, { cache: "no-store" });
  if (!r.ok) throw new Error(`SERP request failed: ${r.status}`);
  const data = await r.json();

  // findIndex returns -1 when absent, so +1 yields 0 ("not found") or a 1-based rank.
  const organic = data.organic_results ?? [];
  const ourPosition =
    organic.findIndex((o: any) => (o.domain ?? "").endsWith(brandDomain)) + 1;

  const aio = data.ai_overview;
  const aioSources: any[] = aio?.sources ?? [];
  const aioMatch =
    aioSources.findIndex((s) => (s.domain ?? "").endsWith(brandDomain)) + 1;

  return {
    google_position: ourPosition || null,
    in_aio_sources: aioMatch ? 1 : 0,
    aio_source_position: aioMatch || null,
    raw_aio_text: aio?.text ?? null,
    raw_aio_sources: JSON.stringify(aioSources.map((s) => s.domain)),
  };
}
That gives you everything from the Google side: position in organic results, presence in the AIO source list, and the raw AIO text for downstream NLP.
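One caveat before production use: a bare `endsWith` check counts `notmybrand.com` as a hit for `mybrand.com`. A stricter comparison (a hypothetical helper, not something the API returns) accepts the apex domain and its subdomains only:

```typescript
// lib/match.ts — hypothetical helper; matches "mybrand.com" and
// "blog.mybrand.com" but rejects suffix collisions like "notmybrand.com".
export function matchesBrandDomain(candidate: string, brandDomain: string): boolean {
  const c = candidate.toLowerCase();
  const b = brandDomain.toLowerCase();
  return c === b || c.endsWith("." + b);
}
```

Swap it in wherever the collectors compare domains with `endsWith`.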
Step 2: AI Citation Collector
The AI Ranking endpoint takes a query and returns citation URLs from each of the four LLMs.
// lib/collectors/ai.ts
const AI_KEY = process.env.SERPENT_API_KEY!;
const ENGINES = ["chatgpt", "gemini", "claude", "perplexity"] as const;

export async function collectAiSnapshot(
  keyword: string,
  brandDomain: string
) {
  const results: Record<string, boolean> = {};

  // Query all four engines in parallel; each records whether our domain is cited.
  await Promise.all(
    ENGINES.map(async (engine) => {
      const url = new URL(`https://apiserpent.com/api/ai/rank/${engine}`);
      url.searchParams.set("q", keyword);
      url.searchParams.set("api_key", AI_KEY);
      const r = await fetch(url, { cache: "no-store" });
      if (!r.ok) throw new Error(`${engine} request failed: ${r.status}`);
      const data = await r.json();
      const citations: any[] = data.citations ?? [];
      results[engine] = citations.some((c) =>
        (c.domain ?? "").endsWith(brandDomain)
      );
    })
  );

  return {
    chatgpt_cited: results.chatgpt ? 1 : 0,
    gemini_cited: results.gemini ? 1 : 0,
    claude_cited: results.claude ? 1 : 0,
    perplexity_cited: results.perplexity ? 1 : 0,
  };
}
Note the Promise.all — the four LLMs are queried in parallel, so the whole snapshot for a keyword takes around three to five seconds wall-clock.
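`Promise.all` has one fragility, though: a single failing engine rejects the whole snapshot. If you would rather keep partial results, `Promise.allSettled` with a per-engine default is a drop-in change. A sketch of that variant (the function name and shape are mine, not the tutorial's):

```typescript
// Hypothetical variant: a failed engine records "not cited" instead of
// throwing away the whole keyword snapshot.
type Engine = "chatgpt" | "gemini" | "claude" | "perplexity";

export async function settleEngines(
  engines: readonly Engine[],
  check: (engine: Engine) => Promise<boolean>
): Promise<Record<string, boolean>> {
  // allSettled never rejects; each slot is either {status:"fulfilled", value}
  // or {status:"rejected", reason}.
  const settled = await Promise.allSettled(engines.map((e) => check(e)));
  const results: Record<string, boolean> = {};
  settled.forEach((outcome, i) => {
    results[engines[i]] = outcome.status === "fulfilled" ? outcome.value : false;
  });
  return results;
}
```

The trade-off: a transient timeout now looks identical to a lost citation in that day's row, so consider logging rejections before defaulting them.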
Step 3: Cron Job
One function ties them together and writes a row.
// scripts/run-snapshot.ts
import { db } from "@/lib/db";
import { collectGoogleSnapshot } from "@/lib/collectors/serp";
import { collectAiSnapshot } from "@/lib/collectors/ai";

async function main() {
  const today = new Date().toISOString().slice(0, 10);
  const keywords = db.prepare("SELECT * FROM brand_keywords").all() as any[];

  for (const kw of keywords) {
    try {
      const serp = await collectGoogleSnapshot(
        kw.id, kw.keyword, kw.brand_domain, kw.country
      );
      const ai = await collectAiSnapshot(kw.keyword, kw.brand_domain);

      // INSERT OR REPLACE keeps one row per keyword per day
      // (enforced by the UNIQUE (keyword_id, snap_date) constraint).
      db.prepare(`
        INSERT OR REPLACE INTO visibility_snapshots (
          keyword_id, snap_date,
          google_position, in_aio_sources, aio_source_position,
          chatgpt_cited, gemini_cited, claude_cited, perplexity_cited,
          raw_aio_text, raw_aio_sources
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      `).run(
        kw.id, today,
        serp.google_position, serp.in_aio_sources, serp.aio_source_position,
        ai.chatgpt_cited, ai.gemini_cited, ai.claude_cited, ai.perplexity_cited,
        serp.raw_aio_text, serp.raw_aio_sources
      );
      console.log(`OK ${kw.keyword}`);
    } catch (e) {
      console.error(`FAIL ${kw.keyword}`, e);
    }
  }
}

main().catch((e) => {
  console.error(e);
  process.exit(1);
});
Schedule it with cron:
# /etc/crontab
0 4 * * * cd /opt/brand-dashboard && /usr/bin/node dist/scripts/run-snapshot.js
Or with Vercel cron in vercel.json:
{ "crons": [{ "path": "/api/snapshot", "schedule": "0 4 * * *" }] }
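The `/api/snapshot` path in that config needs a route handler. A minimal sketch, assuming you set a `CRON_SECRET` environment variable (when it is set, Vercel sends it as a bearer token on cron invocations, which lets the handler reject everyone else):

```typescript
// app/api/snapshot/route.ts — hypothetical handler behind the Vercel cron entry.
export async function GET(req: Request): Promise<Response> {
  // Only the Vercel cron scheduler knows CRON_SECRET; reject everything else.
  const auth = req.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }
  // Run the same collection loop as scripts/run-snapshot.ts here,
  // then report success to the scheduler.
  return Response.json({ ok: true });
}
```

Keep the handler thin: it should call the same function the CLI script uses, so the VPS and serverless deployments share one code path.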
Step 4: Next.js Dashboard
The minimum viable dashboard has three views: a per-keyword scoreboard, a time-series chart, and a citation matrix. Here is the scoreboard query and component.
// app/page.tsx
import { db } from "@/lib/db";

export default async function HomePage() {
  const rows = db.prepare(`
    SELECT k.keyword, k.brand_domain,
           v.google_position, v.in_aio_sources,
           v.chatgpt_cited, v.gemini_cited,
           v.claude_cited, v.perplexity_cited
    FROM brand_keywords k
    JOIN visibility_snapshots v
      ON v.keyword_id = k.id
     AND v.snap_date = (SELECT MAX(snap_date) FROM visibility_snapshots)
  `).all();

  return (
    <table className="w-full">
      <thead>
        <tr>
          <th>Keyword</th><th>Google</th><th>AIO</th>
          <th>ChatGPT</th><th>Gemini</th><th>Claude</th><th>Perplexity</th>
        </tr>
      </thead>
      <tbody>
        {rows.map((r: any) => (
          <tr key={r.keyword}>
            <td>{r.keyword}</td>
            <td>{r.google_position ?? "—"}</td>
            <td>{r.in_aio_sources ? "✓" : "×"}</td>
            <td>{r.chatgpt_cited ? "✓" : "×"}</td>
            <td>{r.gemini_cited ? "✓" : "×"}</td>
            <td>{r.claude_cited ? "✓" : "×"}</td>
            <td>{r.perplexity_cited ? "✓" : "×"}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
That is the scoreboard. For the time-series chart, query the last 90 days of snapshots for a single keyword and feed the result to Recharts. For the citation matrix, group by date and engine and stacked-bar the totals.
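The shaping step between SQL rows and the chart is a pure function, so it is worth isolating. A sketch (field names like `aiCitations` are my choice, not a Recharts requirement; Recharts just wants an array of flat objects keyed by the fields you plot):

```typescript
// Hypothetical shaping helper: raw snapshot rows in, chart-ready points out.
type SnapshotRow = {
  snap_date: string;
  google_position: number | null;
  in_aio_sources: number;
  chatgpt_cited: number;
  gemini_cited: number;
  claude_cited: number;
  perplexity_cited: number;
};

export function toChartPoints(rows: SnapshotRow[]) {
  return rows.map((r) => ({
    date: r.snap_date,
    googlePosition: r.google_position, // null renders as a gap in the line
    // Collapse the four per-engine flags into one 0-4 series.
    aiCitations:
      r.chatgpt_cited + r.gemini_cited + r.claude_cited + r.perplexity_cited,
    inAio: r.in_aio_sources,
  }));
}
```

Feed the result to `<LineChart data={points}>` with one `<Line dataKey>` per series; the stacked citation matrix reuses the same rows grouped by engine.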
Step 5: Email Alerts
Most teams want one alert: "we lost a citation we previously had." That is a one-query check at the end of the cron job.
// scripts/check-regressions.ts
import { db } from "@/lib/db";

const lossesQuery = `
  SELECT k.keyword,
         CASE WHEN today.in_aio_sources = 0 AND yesterday.in_aio_sources = 1
              THEN 'AIO' END AS aio_loss,
         CASE WHEN today.chatgpt_cited = 0 AND yesterday.chatgpt_cited = 1
              THEN 'ChatGPT' END AS chatgpt_loss
  FROM brand_keywords k
  JOIN visibility_snapshots today ON today.keyword_id = k.id
  JOIN visibility_snapshots yesterday ON yesterday.keyword_id = k.id
  WHERE today.snap_date = date('now')
    AND yesterday.snap_date = date('now', '-1 day')
    -- keep only rows where something was actually lost
    AND ((today.in_aio_sources = 0 AND yesterday.in_aio_sources = 1)
      OR (today.chatgpt_cited = 0 AND yesterday.chatgpt_cited = 1))
`;

const losses = db.prepare(lossesQuery).all() as any[];

if (losses.length) {
  await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.RESEND_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "alerts@yourdomain.com",
      to: "you@yourdomain.com",
      subject: `Brand visibility regressions (${losses.length})`,
      text: losses
        .map((l: any) =>
          `${l.keyword}: lost ${[l.aio_loss, l.chatgpt_loss].filter(Boolean).join(", ")}`
        )
        .join("\n"),
    }),
  });
}
Wire that into the same cron and you have a working brand-monitoring system that emails you within a day of your domain falling out of an AI Overview or an LLM citation list.
Cost
- 100 keywords × 1 snapshot per week × 4 weeks = 400 SERP calls + 1,600 AI Ranking calls per month.
- Serpent API SERP at Scale tier ($0.30 per 1,000 quick searches): $0.12
- Serpent AI Ranking (custom pricing — check pricing page): roughly $0.40
- Total monthly: under $1
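The arithmetic above generalizes to any keyword count and cadence. A two-line estimator (the SERP rate is the article's; the $0.25-per-1,000 AI rate is back-calculated from the rough $0.40 figure, so treat it as an assumption until you check the pricing page):

```typescript
// Hypothetical cost estimator using the article's example prices.
export function monthlyCost(keywords: number, snapshotsPerMonth: number) {
  const serpCalls = keywords * snapshotsPerMonth; // one SERP call per snapshot
  const aiCalls = serpCalls * 4;                  // four LLM engines per snapshot
  const serpUsd = (serpCalls / 1000) * 0.3;       // $0.30 per 1,000 quick searches
  const aiUsd = (aiCalls / 1000) * 0.25;          // assumed AI Ranking rate
  return { serpCalls, aiCalls, totalUsd: serpUsd + aiUsd };
}
```

Doubling the cadence to twice-weekly doubles every line, which still lands well under the cheapest SaaS tier.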
For comparison, Brand24's smallest plan is $99/month and does not track AI Overview citations at all. Brandwatch starts at $1,500/month. The custom dashboard is two orders of magnitude cheaper than the cheapest commercial option that does less.
Build It This Weekend
Both APIs you need (SERP and AI Ranking) are on Serpent's free tier — 10 free Google searches with every account, no credit card required. The full 100-keyword setup will not cost more than $1 a month at production volume.
Get Your Free API Key · Explore: SERP API · AI Ranking API · Playground · AI Overview extraction guide
FAQ
What is a brand visibility dashboard?
A single screen that shows where your brand currently appears across Google search results and AI-generated answers from ChatGPT, Gemini, Claude, and Perplexity. It tracks citation rate, AIO source-list inclusion, organic ranking, and changes over time.
What APIs do I need to build one?
Two APIs cover the full picture: a SERP API like Serpent that returns Google organic results plus AI Overview text and source citations, and an AI Ranking API that queries the four major LLMs in parallel.
How often should I refresh the data?
Daily for high-priority money keywords, weekly for the long tail. AIO source lists drift roughly 22 percent week-over-week and LLM citation patterns drift faster. Daily is overkill for most teams; weekly is the sweet spot.
Can I use this dashboard for a competitor brand?
Yes. The same code works for any domain. Many SEO teams run a parallel dashboard for the top 3 competitors and diff against their own.
How much will running this dashboard cost?
At Serpent API Scale tier, tracking 100 brand keywords across Google plus 4 LLMs once a week costs roughly $0.50 a month. Most teams spend more on database hosting than on the API calls.
Can I deploy this serverless?
Yes. Vercel cron + a hosted Postgres (Neon, Supabase) replaces the VPS-and-SQLite combo cleanly. The whole stack fits on a free-tier Vercel project for small-to-medium keyword lists.