Local SEO Rank Tracking via API: City + Lat-Long Targeting Tutorial (2026)

By Anurag Pathak · 11 min read

Local SEO is geo-physical. A dental practice in Austin ranks differently in the local pack depending on whether you are searching from downtown or from a suburb fifteen miles north. A single national rank-tracker reading does not capture that. To track local rankings honestly you have to query Google from many lat-long points across your service area and aggregate.

This tutorial shows how to do that with a SERP API in Python. We build a grid of coordinates around a target city, query each one, parse the local pack and Google Maps results, and roll the data up into a visibility heatmap your client can actually look at.

What "Local Rank" Even Means in 2026

Three surfaces matter:

  1. The local pack — the 3-business map block embedded in the standard SERP.
  2. Google Maps results — the ranked business list inside maps.google.com.
  3. Localised organic results — the regular blue links, reordered by searcher location.

For most local businesses the local pack matters most. It gets the highest CTR, drives the most calls, and is where Google routes users with phrases like "near me" or implicit-local queries like "best plumber".

Why a National Rank Tracker Lies

National trackers usually pull rankings from a single coordinate (often a generic state-capital or country-centroid). For a single-location business, that gives a number but it is not the number that matters. Your customers are searching from their actual neighbourhood, three miles from your storefront — not from a generic centroid.

The fix is a grid. Pull rankings from many points around your service area and visualise the result as a heatmap. Then you know exactly where your visibility is strong and where it falls off.

The API Call

Most modern SERP APIs accept either a location string ("Austin, Texas, United States") or an explicit ll lat-long pair. Both work; lat-long is more precise.

import requests

API_KEY = "sk_live_your_key_here"

def fetch_local_pack(query, lat, lng, country="us"):
    r = requests.get("https://apiserpent.com/api/search", params={
        "q": query,
        "engine": "google",
        "country": country,
        "ll": f"{lat},{lng}",
        "api_key": API_KEY,
    }, timeout=30)
    r.raise_for_status()  # fail loudly on HTTP errors instead of parsing an error body
    data = r.json()
    return data.get("local_pack", []), data.get("organic_results", [])

# Example: pull the local pack for "best dentist" from a downtown Austin coord
pack, organic = fetch_local_pack("best dentist", 30.2672, -97.7431)
for biz in pack:
    print(biz["position"], biz["name"], biz["rating"], biz["address"])

The local_pack field is a structured array. Each entry typically looks like:

{
  "position": 1,
  "name": "Smith Family Dental",
  "rating": 4.8,
  "reviews": 312,
  "address": "1100 Congress Ave, Austin, TX 78701",
  "phone": "+1 512-555-0142",
  "website": "https://smithfamilydentalatx.com",
  "place_id": "ChIJ...",
  "service_options": ["online_appointments", "wheelchair_accessible"]
}

Building the Grid

The grid is just a list of (lat, lng) tuples spaced evenly across your service area. A naive helper that builds an N×N grid centred on a coordinate at a given mile spacing:

import math

def build_grid(center_lat, center_lng, miles=10, step_miles=1):
    """Return a square grid of coords around center.
    miles = half-side of the square; step_miles = spacing between points."""
    LAT_PER_MILE = 1 / 69.0  # rough conversion
    LNG_PER_MILE = 1 / (69.0 * math.cos(math.radians(center_lat)))

    coords = []
    steps = int(miles / step_miles)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            lat = center_lat + i * step_miles * LAT_PER_MILE
            lng = center_lng + j * step_miles * LNG_PER_MILE
            coords.append((round(lat, 6), round(lng, 6)))
    return coords

# 21x21 = 441 points, 1-mile spacing, 10-mile half-side around Austin
grid = build_grid(30.2672, -97.7431, miles=10, step_miles=1)
print(f"{len(grid)} points to query")

For a typical service area, 100 to 500 grid points is enough. You do not need a continuous map — the local pack is fairly stable within a one-mile cell.
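If your service area is a radius rather than a square, the corners of the grid are wasted calls — they sit roughly 40% farther out than the edges. A small helper (a sketch, using the same rough miles-per-degree conversion as build_grid) trims the square down to the inscribed circle:

```python
import math

def trim_to_radius(coords, center_lat, center_lng, radius_miles):
    """Keep only the grid points within radius_miles of the centre.
    Flat-earth approximation; fine at city scale."""
    kept = []
    for lat, lng in coords:
        dy = (lat - center_lat) * 69.0  # miles per degree of latitude
        dx = (lng - center_lng) * 69.0 * math.cos(math.radians(center_lat))
        if math.hypot(dx, dy) <= radius_miles:
            kept.append((lat, lng))
    return kept
```

On the 21×21 Austin grid, trimming to a 10-mile radius drops the corner cells and cuts roughly a quarter of the API calls with no loss of signal for a circular service area.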

Running the Sweep

Sequential calls take forever. Use asyncio + httpx with a semaphore set to your rate limit.

import asyncio
import httpx

SEMAPHORE = asyncio.Semaphore(8)  # tune to your tier

async def fetch_one(client, query, lat, lng):
    async with SEMAPHORE:
        r = await client.get(
            "https://apiserpent.com/api/search",
            params={"q": query, "engine": "google", "country": "us",
                    "ll": f"{lat},{lng}", "api_key": API_KEY},
            timeout=30,
        )
        return (lat, lng), r.json()

async def sweep(query, grid):
    async with httpx.AsyncClient() as client:
        tasks = [fetch_one(client, query, lat, lng) for lat, lng in grid]
        return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(sweep("best dentist", grid))

For a 441-point grid at 8 concurrent requests, the sweep finishes in about 90 seconds. Keep your concurrency under your provider's rate limit; on Serpent's Scale tier (600 req/min), 8 is conservative.
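At hundreds of calls per sweep, a few transient 429s or timeouts are normal. A small generic retry wrapper with exponential backoff (a sketch; status-code specifics depend on your provider) keeps one flaky cell from punching a hole in the heatmap:

```python
import asyncio

async def with_retry(coro_factory, retries=3, base_delay=1.0):
    """Call coro_factory() up to `retries` times, sleeping
    base_delay * 2**attempt between failures (1s, 2s, 4s by default).
    Re-raises the last exception if every attempt fails."""
    for attempt in range(retries):
        try:
            return await coro_factory()
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)
```

Wrap each cell in the sweep like `with_retry(lambda q=query, a=lat, b=lng: fetch_one(client, q, a, b))`; the lambda defers the call so each retry issues a fresh request.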

Parsing Your Position at Each Point

Reduce the raw responses to "what position does our business hold at this coordinate":

def my_position_at(local_pack, target_place_id=None, target_name=None):
    for biz in local_pack:
        if target_place_id and biz.get("place_id") == target_place_id:
            return biz["position"]
        if target_name and biz["name"].lower() == target_name.lower():
            return biz["position"]
    return None  # not in top 3

scores = []
for item in results:
    if isinstance(item, Exception):
        continue  # gather(return_exceptions=True) puts the exception in place of the tuple
    (lat, lng), payload = item
    pack = payload.get("local_pack", [])
    pos = my_position_at(pack, target_name="Smith Family Dental")
    scores.append({"lat": lat, "lng": lng, "position": pos})

Now scores is a list of {lat, lng, position} dicts ready to render as a heatmap.
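The heatmap is for clients; for trend lines you want scalar KPIs. Two fall straight out of scores: coverage (the share of grid cells where you appear in the pack at all) and average position across the cells where you do. A minimal sketch:

```python
def summarize(scores):
    """Reduce per-cell positions to coverage and average position.
    Cells with position None (not in the pack) count against coverage."""
    ranked = [s["position"] for s in scores if s["position"] is not None]
    coverage = len(ranked) / len(scores) if scores else 0.0
    avg_pos = sum(ranked) / len(ranked) if ranked else None
    return {"coverage": round(coverage, 3), "avg_position": avg_pos}
```

For example, `summarize([{"position": 1}, {"position": 3}, {"position": None}])` returns `{"coverage": 0.667, "avg_position": 2.0}`. Track both per sweep: coverage tells you how far your visibility reaches; average position tells you how strong it is where it exists.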

Rendering the Heatmap

Two easy options:

  1. Folium (Python). Render a Leaflet heatmap directly from the scores list. Output is a self-contained HTML file you can drop into a client report.
  2. Mapbox or Google Maps. Better for interactive client portals. Push the scores to a geojson collection and render a choropleth or heatmap layer.

The Folium version:

import folium
from folium.plugins import HeatMap

m = folium.Map(location=[30.2672, -97.7431], zoom_start=12)

# Lower position is better; convert to weight (3 = strongest, 1 = weakest)
heat_data = []
for s in scores:
    if s["position"] is not None:
        weight = 4 - s["position"]
        heat_data.append([s["lat"], s["lng"], weight])

HeatMap(heat_data, radius=20).add_to(m)
m.save("local_heatmap.html")

Open the HTML in a browser. You will see a green-to-red heatmap of your local pack visibility across the entire service area. Red zones are where you do not appear in the local pack at all; those are the areas to target with content, citations, and review acquisition.

Persisting Snapshots Over Time

One sweep is interesting; weekly sweeps are actionable. Add SQLite storage and a date column:

import sqlite3
from datetime import date

conn = sqlite3.connect("local_rank.db")
conn.execute("""
  CREATE TABLE IF NOT EXISTS snapshots (
    snap_date TEXT, query TEXT,
    lat REAL, lng REAL, position INTEGER,
    PRIMARY KEY (snap_date, query, lat, lng)
  )
""")

today = date.today().isoformat()
for s in scores:
    conn.execute(
        "INSERT OR REPLACE INTO snapshots VALUES (?, ?, ?, ?, ?)",
        (today, "best dentist", s["lat"], s["lng"], s["position"]),
    )
conn.commit()

To compare two sweeps and diff cells where you gained or lost positions, JOIN the table on (query, lat, lng) across two snap_date values and compute the delta.
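That JOIN, written out (a sketch against the snapshots schema above; absent-from-pack is stored as NULL, so COALESCE it to 4 — one worse than the last pack slot — and appearances and disappearances show up as deltas too):

```python
import sqlite3

DIFF_SQL = """
SELECT a.lat, a.lng,
       COALESCE(b.position, 4) - COALESCE(a.position, 4) AS delta
FROM snapshots a
JOIN snapshots b
  ON a.query = b.query AND a.lat = b.lat AND a.lng = b.lng
WHERE a.snap_date = ? AND b.snap_date = ? AND a.query = ?
  AND COALESCE(a.position, 4) != COALESCE(b.position, 4)
"""

def diff_sweeps(conn, old_date, new_date, query):
    """Cells whose position changed between two snapshots.
    Negative delta means the newer sweep ranks better (closer to 1)."""
    return conn.execute(DIFF_SQL, (old_date, new_date, query)).fetchall()
```

A cell that went from absent to position 1 reports a delta of -3; a cell that slipped from 1 to 3 reports +2.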

The Cost

Three things drive cost: grid density, keyword count, and refresh cadence. At the Serpent API Scale tier ($0.30 per 1,000 quick searches), cost scales linearly with monthly query volume: roughly 800 queries/month (e.g. a 100-point grid, two keywords, swept weekly) costs $0.24/month; 8,000 queries/month runs $2.40; and 40,000 queries/month, enough for a multi-location franchise, runs $12. Even the most expensive franchise setup costs less than a single LocalFalcon seat.

Common Pitfalls

  1. Grid too sparse. A 5-mile spacing across a single zip code misses real visibility cliffs. Start at 1-mile and only widen if you see noisy data.
  2. Mixing English-language queries with non-English locations. Pass hl (host language) explicitly. A query in English from a Spanish-speaking neighbourhood returns different results than the same query in Spanish.
  3. Ignoring Google Maps results. The local pack and Maps share a ranking but not a layout. Track both if you care about visibility on mobile (where map results dominate).
  4. Sampling at the wrong time. Local pack ordering shifts during business-hours peaks. Run sweeps at the same hour each week.
  5. Forgetting to deduplicate. The same business can appear in multiple grid cells. Aggregate by place_id, not name.
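The dedup in pitfall 5 looks like this in practice (a sketch over the raw sweep results; place_id stays stable where business names collide across franchises or get rewritten by Google):

```python
from collections import defaultdict

def aggregate_by_place(results):
    """Roll per-cell local packs into per-business stats, keyed by place_id.
    results: iterable of ((lat, lng), payload) pairs from the sweep."""
    stats = defaultdict(lambda: {"name": None, "appearances": 0, "best": None})
    for (_lat, _lng), payload in results:
        for biz in payload.get("local_pack", []):
            s = stats[biz["place_id"]]
            s["name"] = biz["name"]          # keep the most recent name Google shows
            s["appearances"] += 1            # number of grid cells it ranks in
            pos = biz["position"]
            s["best"] = pos if s["best"] is None else min(s["best"], pos)
    return dict(stats)
```

Sorting the output by appearances gives you the competitive landscape for the whole service area in one table, which is also the quickest way to spot a competitor you did not know you had.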

Build Your Local Rank Heatmap

Serpent API supports lat-long location targeting on every Google search call, with flat per-call pricing from $0.30 per 1,000 queries at Scale tier. Every new account includes 10 free Google searches.

Get Your Free API Key

Explore: SERP API · Playground · Local SEO ranking guide

FAQ

How do I track local pack rankings via API?

Pass the user's location to a SERP API as either a city/state pair or a precise lat-long coordinate. The API returns the local pack as a structured array. Loop a grid of coordinates around your service area to map visibility geographically.

What is uule and why does it matter?

uule is Google's encoding of a location string. Most SERP APIs handle it automatically when you pass a city or coordinate. Without a location signal, Google defaults to your IP location, which gives misleading results when tracking another market.

How granular can lat-long targeting be?

Six decimal places, which is sub-metre. In practice, the local pack stops changing meaningfully below a one-block grid for dense urban areas; 0.5 to 1 mile is right for suburban or rural areas.

Can I track Google Maps results separately from the local pack?

Yes. Modern SERP APIs expose both as separate fields or endpoints. The local pack is the 3-pack on a standard SERP; Maps is the 20-business list inside maps.google.com.

How often should I re-track local rankings?

Weekly for most service businesses. The local pack is more stable than national organic. Daily is overkill outside competitive niches like personal injury law and dental in major metros.

Can I track competitors with the same setup?

Yes. Change the matching condition in my_position_at() to look for the competitor's place_id or business name. Run the same grid sweep and you have their visibility heatmap.