How to Build a Keyword Rank Tracker with a SERP API (Full Python Tutorial)
Commercial rank tracking tools like Ahrefs, SEMrush, and AccuRanker charge $99–$499 per month for keyword position monitoring. These tools are excellent, but if your needs are straightforward—track N keywords on a daily schedule and view trends over time—you can build the same thing yourself for a fraction of a dollar per day using a SERP API and about 200 lines of Python.
This tutorial walks through the complete process: setting up the API, writing the tracking script, storing results in a database, scheduling daily runs, and building a basic dashboard to visualize ranking trends. By the end, you will have a functional rank tracker that costs a fraction of any commercial alternative.
Why Build vs Buy a Rank Tracker
Reasons to Build
- Cost savings at moderate scale. Tracking 1,000 keywords daily with Serpent API's DuckDuckGo engine costs approximately $0.01 per day ($0.30/month) at Scale tier. The cheapest Ahrefs plan that includes rank tracking is $129/month, and it limits you to 750 keywords.
- Full data ownership. Your ranking data lives in your own database. You can query it however you want, join it with other data sources, and never worry about export limits or data portability.
- Custom logic. You can add competitor tracking, SERP feature detection, custom alerting rules, and multi-engine tracking that commercial tools either do not support or gate behind expensive plans.
- No vendor lock-in. If you switch SERP API providers, your database, scripts, and dashboards all remain the same. Only the API call changes.
Reasons to Buy
- You need Google-specific rankings. If tracking Google organic positions specifically is a requirement, commercial tools with established Google scraping infrastructure are more reliable.
- You need polished reporting. If you send rank tracking reports to clients, commercial tools offer presentation-ready PDFs and white-label options out of the box.
- You have limited engineering resources. Building and maintaining a custom tool requires ongoing development time.
Architecture Overview
The rank tracker has four components that work together in a simple pipeline:
- Keyword list — A text file or database table containing the keywords you want to track and the domain you are monitoring.
- Tracking script — A Python script that queries Serpent API for each keyword, finds your domain's position in the results, and stores the data.
- Database — SQLite for simplicity (or PostgreSQL for production). Stores every ranking check with a timestamp, enabling historical trend analysis.
- Scheduler — A cron job that runs the tracking script once per day at a consistent time.
Optionally, you can add a fifth component: a dashboard built with Flask or Streamlit that visualizes ranking trends over time.
Step 1: Set Up Your API Key
Sign up at apiserpent.com to get your API key. New accounts receive 100 free searches, which is enough to test the rank tracker with 100 keywords before spending anything.
Install the required Python packages:
```bash
pip install requests python-dotenv
```
Create a .env file in your project directory with your API key:
```
SERPENT_API_KEY=your_api_key_here
TARGET_DOMAIN=yourdomain.com
```
Step 2: Write the Tracking Script
Create a file called tracker.py. This script reads your keyword list, queries Serpent API for each one, and returns the position of your target domain in the results:
```python
import os
import time

import requests
from dotenv import load_dotenv

load_dotenv()

API_KEY = os.getenv("SERPENT_API_KEY")
TARGET_DOMAIN = os.getenv("TARGET_DOMAIN")
API_URL = "https://apiserpent.com/api/search"


def check_ranking(keyword, engine="ddg", num_results=30):
    """
    Search for a keyword and return the target domain's position.
    Returns None if the domain is not found in the top results.
    """
    try:
        response = requests.get(API_URL, params={
            "q": keyword,
            "engine": engine,
            "num": num_results,
            "apiKey": API_KEY
        }, timeout=30)
        response.raise_for_status()
        data = response.json()

        organic = data.get("results", {}).get("organic", [])
        for result in organic:
            url = result.get("url", "")
            if TARGET_DOMAIN.lower() in url.lower():
                return {
                    "position": result["position"],
                    "url": url,
                    "title": result.get("title", ""),
                    "snippet": result.get("snippet", "")
                }
        return None  # Not found in results
    except requests.exceptions.RequestException as e:
        print(f"Error searching '{keyword}': {e}")
        return None


def load_keywords(filepath="keywords.txt"):
    """Load keywords from a text file, one per line."""
    with open(filepath, "r") as f:
        return [line.strip() for line in f if line.strip()]


def run_tracking():
    """Run a full tracking cycle for all keywords."""
    keywords = load_keywords()
    print(f"Tracking {len(keywords)} keywords for {TARGET_DOMAIN}")
    print("=" * 60)

    results = []
    for i, keyword in enumerate(keywords, 1):
        print(f"[{i}/{len(keywords)}] Checking: {keyword}...", end=" ")
        ranking = check_ranking(keyword)

        if ranking:
            print(f"Position {ranking['position']}")
        else:
            print("Not found in top 30")

        results.append({
            "keyword": keyword,
            "ranking": ranking
        })

        # Rate limit: stay well within API limits
        time.sleep(0.5)

    # Summary
    found = sum(1 for r in results if r["ranking"])
    print("\n" + "=" * 60)
    print(f"Results: {found}/{len(keywords)} keywords ranked in top 30")
    return results


if __name__ == "__main__":
    run_tracking()
```
Create a keywords.txt file with one keyword per line:
```
serp api pricing
cheapest serp api
rank tracking api
keyword position tracker
serp api comparison
```
Step 3: Store Results in SQLite
Create a file called database.py that handles all database operations. SQLite requires no setup—the database file is created automatically:
```python
import sqlite3

DB_PATH = "rankings.db"


def init_db():
    """Create the rankings table if it does not exist."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS rankings (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            keyword TEXT NOT NULL,
            position INTEGER,
            url TEXT,
            title TEXT,
            checked_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    conn.execute("""
        CREATE INDEX IF NOT EXISTS idx_keyword_date
        ON rankings(keyword, checked_at)
    """)
    conn.commit()
    conn.close()


def save_ranking(keyword, position, url=None, title=None):
    """Save a single ranking result to the database."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "INSERT INTO rankings (keyword, position, url, title) VALUES (?, ?, ?, ?)",
        (keyword, position, url, title)
    )
    conn.commit()
    conn.close()


def get_ranking_history(keyword, days=30):
    """Get ranking history for a keyword over the last N days."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("""
        SELECT position, checked_at
        FROM rankings
        WHERE keyword = ?
          AND checked_at >= datetime('now', ?)
        ORDER BY checked_at ASC
    """, (keyword, f"-{days} days")).fetchall()
    conn.close()
    return [dict(row) for row in rows]


def get_latest_rankings():
    """Get the most recent ranking for each keyword."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("""
        SELECT keyword, position, url, checked_at
        FROM rankings r1
        WHERE checked_at = (
            SELECT MAX(checked_at) FROM rankings r2
            WHERE r2.keyword = r1.keyword
        )
        ORDER BY position ASC NULLS LAST  -- NULLS LAST requires SQLite 3.30+
    """).fetchall()
    conn.close()
    return [dict(row) for row in rows]


# Initialize the database on import
init_db()
```
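The latest-per-keyword query uses a correlated subquery, which is easy to misread. Here is a self-contained sketch of the same pattern against an in-memory database, so you can verify the behavior without touching rankings.db:

```python
import sqlite3

# Two checks for the same keyword; only the newer one should survive the query.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rankings (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        keyword TEXT NOT NULL,
        position INTEGER,
        checked_at TIMESTAMP
    )
""")
conn.executemany(
    "INSERT INTO rankings (keyword, position, checked_at) VALUES (?, ?, ?)",
    [
        ("serp api pricing", 12, "2026-01-01 06:00:00"),
        ("serp api pricing", 8, "2026-01-02 06:00:00"),  # newer check wins
    ],
)

# Same correlated subquery as get_latest_rankings(): for each row, keep it
# only if its timestamp equals the max timestamp for that keyword.
row = conn.execute("""
    SELECT keyword, position FROM rankings r1
    WHERE checked_at = (
        SELECT MAX(checked_at) FROM rankings r2
        WHERE r2.keyword = r1.keyword
    )
""").fetchone()
print(row)  # ('serp api pricing', 8)
```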
Now update tracker.py to save results to the database. Add this import and modify the run_tracking function:
```python
from database import save_ranking

# Inside run_tracking(), after getting the ranking:
if ranking:
    save_ranking(keyword, ranking["position"], ranking["url"], ranking["title"])
else:
    save_ranking(keyword, None)  # Record that we checked but found nothing
```
Step 4: Schedule Daily Runs with Cron
On Linux or macOS, use cron to run the tracker daily. Open your crontab:
```bash
crontab -e
```
Add this line to run the tracker every day at 6:00 AM:
```bash
0 6 * * * cd /path/to/rank-tracker && /usr/bin/python3 tracker.py >> tracker.log 2>&1
```
For cloud-hosted setups, you can use a scheduled Cloud Function, an AWS Lambda with EventBridge, or a simple VPS with cron. The script is lightweight enough to run on a $5/month VPS.
On Windows, use Task Scheduler to create a daily task that executes the Python script.
Step 5: Build a Simple Dashboard
Create a file called dashboard.py using Flask to serve a simple web dashboard that displays current rankings and trends:
```python
import os

from flask import Flask, jsonify, render_template_string
from dotenv import load_dotenv

from database import get_latest_rankings, get_ranking_history

load_dotenv()

app = Flask(__name__)

DASHBOARD_HTML = """
<!DOCTYPE html>
<html>
<head>
    <title>Rank Tracker Dashboard</title>
    <style>
        body { font-family: system-ui, sans-serif; max-width: 900px;
               margin: 2rem auto; padding: 0 1rem; }
        table { width: 100%; border-collapse: collapse; margin-top: 1rem; }
        th, td { padding: 10px 14px; text-align: left; border-bottom: 1px solid #e5e7eb; }
        th { background: #f9fafb; font-weight: 600; }
        .pos-good { color: #0d9488; font-weight: 700; }
        .pos-ok { color: #d97706; font-weight: 600; }
        .pos-bad { color: #dc2626; }
        .not-found { color: #9ca3af; font-style: italic; }
    </style>
</head>
<body>
    <h1>Keyword Rank Tracker</h1>
    <p>Latest rankings for {{ domain }}</p>
    <table>
        <thead>
            <tr><th>Keyword</th><th>Position</th><th>URL</th><th>Last Checked</th></tr>
        </thead>
        <tbody>
            {% for r in rankings %}
            <tr>
                <td>{{ r.keyword }}</td>
                <td class="{{ 'pos-good' if r.position and r.position <= 5
                              else 'pos-ok' if r.position and r.position <= 15
                              else 'pos-bad' if r.position
                              else 'not-found' }}">
                    {{ r.position if r.position else 'Not found' }}
                </td>
                <td>{{ r.url or '-' }}</td>
                <td>{{ r.checked_at[:16] }}</td>
            </tr>
            {% endfor %}
        </tbody>
    </table>
</body>
</html>
"""


@app.route("/")
def dashboard():
    rankings = get_latest_rankings()
    domain = os.getenv("TARGET_DOMAIN", "yourdomain.com")
    return render_template_string(DASHBOARD_HTML,
                                  rankings=rankings, domain=domain)


@app.route("/api/history/<keyword>")
def history(keyword):
    data = get_ranking_history(keyword, days=90)
    return jsonify(data)


if __name__ == "__main__":
    app.run(port=5000, debug=True)
```
Run the dashboard with python dashboard.py and visit http://localhost:5000. You will see a table of your latest keyword rankings with color-coded positions: green for top 5, amber for top 15, and red for everything else.
Extend the tracker to send email or Slack alerts when a keyword drops more than 5 positions or enters the top 3 for the first time. This turns your tracker from a passive dashboard into an active monitoring tool.
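As a starting point, here is a minimal sketch of a Slack alert using an incoming webhook. `SLACK_WEBHOOK_URL` is a hypothetical environment variable, not part of the tracker above; create an incoming webhook in your Slack workspace and export its URL. When it is unset, the function just returns the message so you can log it instead:

```python
import os

import requests


def send_slack_alert(keyword, old_pos, new_pos):
    """Post a ranking-change alert to Slack via an incoming webhook.

    SLACK_WEBHOOK_URL is an assumed env var. Without it, the formatted
    message is returned for logging and no network call is made.
    """
    text = f":warning: '{keyword}' moved from position {old_pos} to {new_pos}"
    webhook = os.getenv("SLACK_WEBHOOK_URL")
    if webhook:
        requests.post(webhook, json={"text": text}, timeout=10)
    return text
```

Call it from your change-detection code whenever a drop exceeds your threshold.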
Adding Change Detection
A rank tracker becomes genuinely useful when it shows you what changed, not just where you stand right now. Add a function that compares today's results against yesterday's and highlights significant movements:
```python
def detect_changes(current_results, previous_results):
    """Compare current rankings against previous day and flag changes."""
    changes = []
    prev_map = {r["keyword"]: r.get("position") for r in previous_results}

    for result in current_results:
        kw = result["keyword"]
        ranking = result.get("ranking")
        curr_pos = ranking.get("position") if ranking else None
        prev_pos = prev_map.get(kw)

        if curr_pos and prev_pos:
            delta = prev_pos - curr_pos  # Positive = improved
            if abs(delta) >= 3:
                changes.append({
                    "keyword": kw,
                    "previous": prev_pos,
                    "current": curr_pos,
                    "change": delta,
                    "direction": "improved" if delta > 0 else "dropped"
                })
        elif curr_pos and not prev_pos:
            changes.append({
                "keyword": kw, "previous": None, "current": curr_pos,
                "change": None, "direction": "new_entry"
            })
        elif prev_pos and not curr_pos:
            changes.append({
                "keyword": kw, "previous": prev_pos, "current": None,
                "change": None, "direction": "lost"
            })

    return changes
```
This function flags any keyword that moved three or more positions in either direction, as well as keywords that newly entered or dropped out of the top results entirely. Logging these changes over time reveals patterns—like consistent drops after algorithm updates or improvements after publishing new content.
Multi-Engine Tracking
One of the most powerful advantages of building your own tracker is the ability to monitor rankings across multiple search engines simultaneously. This is especially valuable in 2026, where AI systems like ChatGPT pull from Bing's index while Perplexity uses its own crawlers alongside search indexes. Tracking DuckDuckGo and Yahoo/Bing together gives you visibility into how your content appears across the indexes that power these AI tools.
Modifying the tracker for multi-engine support is straightforward. Loop through each engine for every keyword and store the engine name alongside the result in your database. The cost increase is linear: tracking 1,000 keywords on two engines (DDG + Yahoo) costs $0.03 per day instead of $0.01 (Scale tier)—still extraordinarily cheap compared to commercial tools.
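A minimal sketch of that loop, written so the search function is injected rather than hard-coded (it is expected to behave like `check_ranking()` from tracker.py, accepting `(keyword, engine=...)` and returning a ranking dict or None):

```python
import time

ENGINES = ["ddg", "yahoo"]  # engine identifiers used earlier in this tutorial


def run_multi_engine(keywords, check_fn, delay=0.5):
    """Check every keyword on every engine, tagging each result with its engine.

    check_fn(keyword, engine=...) is assumed to match check_ranking() from
    tracker.py. Store the "engine" field alongside the result in your DB.
    """
    results = []
    for keyword in keywords:
        for engine in ENGINES:
            ranking = check_fn(keyword, engine=engine)
            results.append({"keyword": keyword, "engine": engine, "ranking": ranking})
            time.sleep(delay)  # throttle between API calls
    return results
```

To persist the extra dimension, add an `engine TEXT` column to the rankings table and include it in the insert.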
Understanding the Data You Collect
After running the tracker for two weeks, you will have enough historical data to start drawing conclusions. Look for these patterns in your data:
- Volatility by keyword: Some keywords will show stable rankings that barely move day to day. Others will fluctuate wildly. High volatility keywords are typically competitive queries where many sites are producing similar content. Low volatility keywords are either very competitive (top sites are entrenched) or very niche (few competitors).
- Engine disagreement: If a keyword ranks position 3 on DuckDuckGo but position 18 on Yahoo, that tells you something about how different engines evaluate your content. These discrepancies often point to specific ranking factor differences between engines.
- Trend direction: A keyword that has moved from position 15 to position 8 over two weeks is a positive signal—your content is gaining authority. A keyword sliding from position 5 to position 12 needs investigation. Is a competitor outranking you with newer content? Has the SERP layout changed?
- Correlation with content updates: When you publish new content or update existing pages, watch for ranking changes in the following 7–14 days. This feedback loop tells you whether your content changes are having the intended effect.
Cost Calculation
Here is exactly what this rank tracker costs to run using Serpent API's DuckDuckGo engine at $0.01/1K (Scale tier):
| Keywords Tracked | Daily Cost | Monthly Cost | Annual Cost |
|---|---|---|---|
| 100 keywords | $0.001 | $0.03 | $0.37 |
| 500 keywords | $0.005 | $0.15 | $1.83 |
| 1,000 keywords | $0.01 | $0.30 | $3.65 |
| 5,000 keywords | $0.05 | $1.50 | $18.25 |
| 10,000 keywords | $0.10 | $3.00 | $36.50 |
For comparison, Ahrefs' Lite plan costs $129/month and tracks 750 keywords. Our custom tracker handles 1,000 keywords for $0.30/month (DDG Scale tier)—that is 430x cheaper. Even tracking 10,000 keywords daily costs just $3.00/month—less than any commercial tool's entry-level plan.
If you use Yahoo/Bing engine instead of DuckDuckGo (from $0.02/1K at Scale tier), the cost is slightly higher but still dramatically cheaper than any alternative.
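The numbers in the table follow directly from the per-thousand rate. A quick sanity check, with the rates taken from the pricing described above:

```python
DDG_RATE_PER_1K = 0.01  # DuckDuckGo engine, Scale tier (from the table above)


def monthly_cost(keywords, rate_per_1k=DDG_RATE_PER_1K, days=30):
    """One check per keyword per day; cost scales linearly with volume."""
    daily = keywords / 1000 * rate_per_1k
    return round(daily * days, 2)


print(monthly_cost(1000))   # 0.3
print(monthly_cost(10000))  # 3.0
```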
Scaling to PostgreSQL and Production
SQLite works well for up to about 10,000 keywords. Beyond that, or if you need concurrent access from multiple processes, migrate to PostgreSQL. The schema carries over almost unchanged: swap `AUTOINCREMENT` for `SERIAL` (or an identity column), change the `?` query placeholders to `%s`, and update the connection string.
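One way to keep that swap painless is a small connection factory. In this sketch, `DATABASE_URL` is an assumed environment variable (not part of the setup above), and the PostgreSQL branch requires `psycopg2-binary`:

```python
import os
import sqlite3


def get_connection():
    """Return a PostgreSQL connection when DATABASE_URL is set, else SQLite.

    DATABASE_URL is an assumed env var, e.g. postgresql://user:pass@host/db.
    Keeping connection creation in one place means the rest of database.py
    only needs its placeholder syntax adjusted when you migrate.
    """
    url = os.getenv("DATABASE_URL", "")
    if url.startswith("postgres"):
        import psycopg2  # pip install psycopg2-binary
        return psycopg2.connect(url)
    return sqlite3.connect("rankings.db")
```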
Production Enhancements
- Retry logic: Add exponential backoff for failed API calls. The script above handles errors gracefully, but retrying failed keywords at the end of the run ensures complete data.
- Multi-engine tracking: Run the same keywords through both DuckDuckGo and Yahoo to get cross-engine visibility. This doubles your API cost but provides much richer data.
- Competitor tracking: Add multiple target domains and track where each one ranks for every keyword. This transforms the rank tracker into a competitive intelligence tool.
- SERP feature detection: Parse the full API response to identify when your target appears in featured snippets, PAA boxes, or other SERP features beyond standard organic results.
- Streamlit dashboard: For a more polished visualization, replace the Flask dashboard with Streamlit. Streamlit generates interactive charts with minimal code and is ideal for data dashboards.
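The retry-logic item above can be covered by a generic backoff helper. Here is a minimal sketch that wraps any callable (such as a closure around the API request) and retries with exponentially growing delays:

```python
import time


def with_retry(fn, max_attempts=4, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying on failure with exponential backoff.

    Delays grow as base_delay * 2**attempt (e.g. 1s, 2s, 4s). The last
    failure is re-raised so callers can record the keyword as unchecked.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except exceptions:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

In `run_tracking()`, you could call `with_retry(lambda: check_ranking(keyword))` so transient network errors do not get written to the database as missing rankings.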
For an alternative implementation using Node.js instead of Python, see our Node.js rank tracker tutorial. For advanced competitive analysis techniques, check out our competitor analysis guide.
Get Your API Key
Start tracking keyword rankings today. 100 free searches included, no credit card required.
Get Your Free API Key · Explore: SERP API · News API · Image Search API · Try in Playground