How to Build a Python Rank Tracker with SERP API in 2026
Python is the go-to language for SEO automation, and for good reason. Its clean syntax, rich ecosystem of libraries, and first-class support for HTTP requests make it ideal for building tools that interact with APIs. In this tutorial, you will build a complete keyword rank tracker in Python that fetches search results from the Serpent API, parses your domain's position, stores historical data, and runs on a schedule.
By the end, you will have a production-ready script that costs a fraction of what commercial rank trackers charge -- and you will own every line of code.
What You Will Build
The finished rank tracker will handle the following:
- Query the Serpent API for each keyword in your tracking list
- Find your domain's exact position in the top 100 results
- Save every check to a SQLite database for historical analysis
- Compare current rankings against previous checks and flag changes
- Export ranking data to CSV for sharing or import into spreadsheets
- Run automatically on a daily schedule using the schedule library or cron
The entire project is around 200 lines of Python. No heavy frameworks, no complex setup.
Prerequisites
- Python 3.10 or later (the code uses the X | None union syntax introduced in 3.10)
- A Serpent API key -- sign up to get 100 free searches
- Basic familiarity with Python and the command line
Project Setup
Create a new directory and set up a virtual environment:
```bash
mkdir python-rank-tracker
cd python-rank-tracker
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Install the required packages:
```bash
pip install requests schedule
```
Create a config.py file with your tracking settings:
```python
# config.py
API_KEY = "YOUR_SERPENT_API_KEY"
DOMAIN = "yourdomain.com"

KEYWORDS = [
    "best project management tools",
    "project management software",
    "team collaboration app",
    "kanban board online",
    "agile project tracking",
]

COUNTRY = "us"
LANGUAGE = "en"
ENGINE = "google"  # google, yahoo, bing, or ddg
DB_PATH = "rankings.db"
```
Replace the API key and domain with your own values. The ENGINE setting determines which search engine to query. DuckDuckGo (ddg) is the cheapest option at $0.02 per 1,000 requests on the default tier ($0.01 on Scale).
Fetching SERP Data
The core of the tracker is a function that queries the Serpent API for a single keyword and returns the full list of organic results. Create tracker.py:
```python
import requests
import sqlite3
import csv
import time
from datetime import datetime

from config import API_KEY, DOMAIN, KEYWORDS, COUNTRY, LANGUAGE, ENGINE, DB_PATH

BASE_URL = "https://apiserpent.com/api/search"


def fetch_serp(keyword: str) -> dict:
    """Fetch search results for a keyword from Serpent API."""
    params = {
        "q": keyword,
        "apikey": API_KEY,
        "num": 100,
        "gl": COUNTRY,
        "hl": LANGUAGE,
        "engine": ENGINE,
    }
    response = requests.get(BASE_URL, params=params, timeout=60)
    response.raise_for_status()
    return response.json()
```
This function sends a GET request with your keyword, API key, and targeting parameters. The num=100 parameter tells the API to return up to 100 results, giving you visibility into rankings beyond the first page.
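Before writing the parser, it helps to know the response shape the rest of this tutorial assumes. The snippet below is a trimmed, hand-made example, not actual API output -- the field names (organic_results, position, link, title) simply mirror what the parsing code later in this tutorial relies on, so check the API docs for the exact schema:

```python
# Hand-made sample mirroring the response shape this tutorial assumes.
# Real responses include many more fields per result.
sample_response = {
    "organic_results": [
        {"position": 1, "link": "https://example.com/guide", "title": "Example Guide"},
        {"position": 2, "link": "https://yourdomain.com/tools", "title": "Best Tools"},
    ]
}

# Pull out just the ranked URLs, in order
top_links = [r["link"] for r in sample_response["organic_results"]]
print(top_links)
```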
Handling Rate Limits
Serpent API enforces rate limits based on your tier. To avoid failed checks when you hit them, wrap the fetch in retry logic with exponential backoff:
```python
def fetch_with_retry(keyword: str, retries: int = 3) -> dict:
    """Fetch SERP data with retry logic for rate limits."""
    for attempt in range(retries):
        try:
            return fetch_serp(keyword)
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 429:
                wait = 2 ** attempt  # Exponential backoff: 1s, 2s, 4s
                print(f"  Rate limited. Waiting {wait}s...")
                time.sleep(wait)
            else:
                raise
    raise Exception(f"Failed after {retries} retries for '{keyword}'")
```
Parsing Keyword Positions
Once you have the SERP data, you need to find where your domain ranks. The API returns organic results in a list, each with a position, link, and title field:
```python
def find_position(serp_data: dict, domain: str) -> dict:
    """Find the domain's position in the organic results."""
    results = serp_data.get("organic_results", [])
    for result in results:
        link = result.get("link", "")
        if domain in link:
            return {
                "position": result.get("position"),
                "url": link,
                "title": result.get("title", ""),
            }
    return {"position": None, "url": None, "title": None}


def check_keyword(keyword: str) -> dict:
    """Check ranking for a single keyword."""
    serp_data = fetch_with_retry(keyword)
    position_data = find_position(serp_data, DOMAIN)
    return {
        "keyword": keyword,
        "position": position_data["position"],
        "url": position_data["url"],
        "title": position_data["title"],
        "checked_at": datetime.utcnow().isoformat(),
    }
```
The find_position function iterates through all organic results and checks if your domain appears in the link URL. If it finds a match, it returns the exact position, URL, and title. If your domain is not in the top 100, position is None.
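You can sanity-check the parser without spending API credits by running it against a hand-made response. The function below repeats the same logic as find_position above so the snippet runs standalone; the sample data is invented for illustration:

```python
def find_position(serp_data: dict, domain: str) -> dict:
    # Same logic as the tracker's find_position, duplicated here so this
    # snippet is self-contained.
    for result in serp_data.get("organic_results", []):
        link = result.get("link", "")
        if domain in link:
            return {
                "position": result.get("position"),
                "url": link,
                "title": result.get("title", ""),
            }
    return {"position": None, "url": None, "title": None}


# Invented SERP data: two results, ours in fourth position
fake_serp = {"organic_results": [
    {"position": 1, "link": "https://competitor.com/a", "title": "A"},
    {"position": 4, "link": "https://yourdomain.com/b", "title": "B"},
]}

print(find_position(fake_serp, "yourdomain.com"))  # position 4
print(find_position(fake_serp, "missing.com"))     # all fields None
```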
Tracking Multiple SERP Features
Serpent API also returns SERP features like featured snippets, People Also Ask, and ads. You can extend the parser to capture these:
```python
def extract_serp_features(serp_data: dict, domain: str) -> dict:
    """Check if domain appears in SERP features."""
    features = {}

    # Check featured snippet
    snippet = serp_data.get("featured_snippet", {})
    if snippet and domain in snippet.get("link", ""):
        features["featured_snippet"] = True

    # Check People Also Ask
    paa = serp_data.get("people_also_ask", [])
    features["paa_count"] = len(paa)

    # Check if domain appears in ads
    ads = serp_data.get("ads", [])
    features["competitor_ads"] = len(ads)

    return features
```
Storing Ranking History
A JSON file works for small projects, but SQLite is a better choice for rank tracking. It handles concurrent writes, supports complex queries, and comes built into Python:
```python
def init_db():
    """Create the rankings table if it doesn't exist."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS rankings (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            keyword TEXT NOT NULL,
            position INTEGER,
            url TEXT,
            title TEXT,
            checked_at TEXT NOT NULL
        )
    """)
    conn.execute("""
        CREATE INDEX IF NOT EXISTS idx_keyword_date
        ON rankings (keyword, checked_at)
    """)
    conn.commit()
    conn.close()


def save_ranking(ranking: dict):
    """Save a single ranking check to the database."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "INSERT INTO rankings (keyword, position, url, title, checked_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (
            ranking["keyword"],
            ranking["position"],
            ranking["url"],
            ranking["title"],
            ranking["checked_at"],
        ),
    )
    conn.commit()
    conn.close()


def get_previous_ranking(keyword: str) -> dict | None:
    """Get the most recent ranking for a keyword."""
    conn = sqlite3.connect(DB_PATH)
    row = conn.execute(
        "SELECT position, url, checked_at FROM rankings "
        "WHERE keyword = ? ORDER BY checked_at DESC LIMIT 1",
        (keyword,),
    ).fetchone()
    conn.close()
    if row:
        return {"position": row[0], "url": row[1], "checked_at": row[2]}
    return None
```
The index on (keyword, checked_at) makes historical lookups fast, even with thousands of data points.
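Once a few weeks of data accumulate, you will want more than the latest row. Here is a small helper, in the same style as the functions above, that pulls the recent trend for one keyword from the same rankings table (a sketch -- get_history is not part of the tracker as written):

```python
import sqlite3


def get_history(db_path: str, keyword: str, limit: int = 30) -> list[tuple]:
    """Return (checked_at, position) rows for a keyword, newest first."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT checked_at, position FROM rankings "
        "WHERE keyword = ? ORDER BY checked_at DESC LIMIT ?",
        (keyword, limit),
    ).fetchall()
    conn.close()
    return rows
```

Because the query filters on keyword and sorts on checked_at, it is served directly by the idx_keyword_date index created in init_db().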
Detecting Ranking Changes
The real value of a rank tracker is knowing when things change. Here is how to compare current results against the most recent check:
```python
def detect_change(keyword: str, current_pos: int | None) -> str:
    """Compare current position to previous and return a change summary."""
    prev = get_previous_ranking(keyword)
    if prev is None:
        if current_pos:
            return f"NEW: ranking #{current_pos}"
        return "NEW: not in top 100"

    prev_pos = prev["position"]
    if prev_pos == current_pos:
        return "no change"
    if prev_pos is None and current_pos is not None:
        return f"ENTERED at #{current_pos}"
    if prev_pos is not None and current_pos is None:
        return f"DROPPED (was #{prev_pos})"
    if current_pos < prev_pos:
        diff = prev_pos - current_pos
        return f"UP +{diff} (#{prev_pos} -> #{current_pos})"
    else:
        diff = current_pos - prev_pos
        return f"DOWN -{diff} (#{prev_pos} -> #{current_pos})"
```
This produces clear, human-readable summaries like UP +3 (#8 -> #5) or DROPPED (was #12).
Scheduling Automated Checks
Tie everything together in a main function, then schedule it to run automatically:
```python
def run_check():
    """Run a full ranking check for all keywords."""
    print(f"\n{'=' * 50}")
    print(f"Rank Check - {datetime.now().strftime('%Y-%m-%d %H:%M')}")
    print(f"Domain: {DOMAIN} | Engine: {ENGINE}")
    print(f"{'=' * 50}")

    init_db()
    results = []
    for keyword in KEYWORDS:
        ranking = check_keyword(keyword)
        change = detect_change(keyword, ranking["position"])
        save_ranking(ranking)
        results.append((ranking, change))

        pos_str = f"#{ranking['position']}" if ranking["position"] else "N/A"
        print(f"  {keyword:<40} {pos_str:>6} ({change})")
        time.sleep(1)  # Respect rate limits

    # Summary
    ranked = [r for r, _ in results if r["position"] is not None]
    print(f"\nRanking: {len(ranked)}/{len(results)} keywords in top 100")
    if ranked:
        avg = sum(r["position"] for r in ranked) / len(ranked)
        print(f"Average position: #{avg:.1f}")


if __name__ == "__main__":
    run_check()
```
Option 1: Using the schedule Library
For a self-contained solution that runs as a long-lived process:
```python
import schedule

schedule.every().day.at("08:00").do(run_check)

print("Rank tracker started. Checking daily at 08:00.")
run_check()  # Run immediately on start

while True:
    schedule.run_pending()
    time.sleep(60)
```
Option 2: Using Cron
For servers, a cron job is more reliable since it does not require a persistent process:
```bash
# Edit crontab
crontab -e

# Add this line to run daily at 8 AM
0 8 * * * cd /path/to/python-rank-tracker && /path/to/venv/bin/python tracker.py >> tracker.log 2>&1
```
Exporting CSV Reports
Stakeholders often want ranking data in a spreadsheet. Add a function to export your data:
```python
def export_csv(output_path: str = "rankings_report.csv"):
    """Export all ranking history to a CSV file."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT keyword, position, url, checked_at "
        "FROM rankings ORDER BY checked_at DESC, keyword"
    ).fetchall()
    conn.close()

    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Keyword", "Position", "URL", "Checked At"])
        writer.writerows(rows)

    print(f"Exported {len(rows)} records to {output_path}")
```
Call export_csv() at the end of each check to maintain a rolling report, or wire up a command-line flag so that python tracker.py --export generates the CSV on demand.
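The script as written always runs a check, so supporting a flag means a small dispatcher at the entry point. One minimal way to wire it up (a sketch -- the dispatch helper is not part of the tutorial code, and run_check and export_csv are the functions defined above):

```python
import sys


def dispatch(argv: list[str], run_check, export_csv) -> str:
    """Route CLI arguments to an action; returns which action ran."""
    if "--export" in argv:
        export_csv()
        return "export"
    run_check()
    return "check"


# In tracker.py, the entry point would become:
# if __name__ == "__main__":
#     dispatch(sys.argv[1:], run_check, export_csv)
```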
Going Further
You now have a functional, cost-effective rank tracker. Here are some ways to extend it:
- Multi-engine tracking -- Track the same keywords across Google, Bing, and DuckDuckGo to see how rankings differ. Just loop over engines in run_check().
- Competitor monitoring -- Pass multiple domains to find_position() and track how your competitors rank alongside you. See our Node.js rank tracker tutorial for a different approach.
- Slack or email alerts -- Send a notification when any keyword drops more than 5 positions. Use the requests library to post to a Slack webhook.
- Visualization -- Use matplotlib or plotly to chart ranking trends over time. Build a dashboard with our guide on automating SEO reports.
- SERP feature tracking -- Monitor featured snippets, People Also Ask, and ads alongside organic positions.
Cost comparison: Tracking 100 keywords daily uses 3,000 API calls per month, whichever engine you choose. At Serpent API's Scale tier (from $0.01/1K for DDG), that is just $0.03/month for DuckDuckGo or $0.15/month for Google Quick -- compare that to $49-299/month for Ahrefs, SEMrush, or similar tools.
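The arithmetic behind those figures is simple enough to script. This small estimator takes the per-1,000-request rates quoted earlier as input, so you can plug in whatever your tier charges:

```python
def monthly_cost(keywords: int, checks_per_day: int, price_per_1k: float,
                 days: int = 30) -> float:
    """Estimated monthly API spend for a rank-tracking schedule."""
    calls = keywords * checks_per_day * days
    return calls * price_per_1k / 1000


# 100 keywords, once a day, DDG on the Scale tier ($0.01 per 1K requests)
print(f"${monthly_cost(100, 1, 0.01):.2f}")  # $0.03
```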
Frequently Asked Questions
Can I use Python to track keyword rankings with a SERP API?
Yes. Python's requests library makes it straightforward to query a SERP API like Serpent API, parse the JSON response, and extract your domain's position for any keyword. The complete process is covered step by step in this tutorial.
How often should I check keyword rankings?
For most websites, daily checks strike a good balance between staying informed and keeping API costs low. High-competition niches may benefit from checking twice daily, while smaller sites can check weekly.
How many keywords can I track with Serpent API?
There is no hard limit on the number of keywords. Your tracking capacity depends on your plan's rate limits. The free tier allows 30 requests per minute, while the Scale tier supports up to 600 requests per minute.
What does it cost to track 100 keywords daily?
Tracking 100 keywords daily using DuckDuckGo Web search comes to 3,000 requests per month -- approximately $0.06 on the default tier ($0.02 per 1K requests) and $0.03 on the Scale tier ($0.01 per 1K) -- far cheaper than commercial rank trackers.
Can I track rankings across different countries and languages?
Yes. Serpent API supports geo-targeted searches across 112 countries for Yahoo/Bing and 72 regions for DuckDuckGo. Pass the gl (country) and hl (language) parameters to track local rankings in any supported market.
Start Building Your Python Rank Tracker
Sign up for Serpent API and get 100 free searches. No credit card required.
Get Your Free API Key
Explore: SERP API · News API · Image Search API · Try in Playground