Developer Guide

Build a Multi-Engine Search Aggregator with Serpent API

By Serpent API Team · 13 min read

Every search engine has its own index, its own ranking algorithm, and its own biases. A page that ranks first on Google might not even appear in DuckDuckGo's top 20. A result that Bing surfaces prominently might be buried on Yahoo. By sending the same query to multiple engines and comparing the results, you get a much richer picture of how the web views a given topic.

This tutorial walks you through building a multi-engine search aggregator using Serpent API and Node.js. The aggregator queries Google, Yahoo/Bing, and DuckDuckGo simultaneously, merges the results, deduplicates them, and produces a consensus-scored ranking that reflects cross-engine agreement. By the end, you will have a working tool that can power SEO research, competitive analysis, content audits, and AI search optimization workflows.

Why Query Multiple Search Engines?

Reducing Single-Engine Bias

Google handles roughly 90% of global search traffic, which makes it the default optimization target. But optimizing for Google alone creates blind spots. Bing powers ChatGPT's search feature, so your Bing rankings directly affect whether ChatGPT cites you. DuckDuckGo has a loyal, privacy-focused user base that runs 40+ million searches per day. Yahoo shares Bing's index but applies its own ranking layer.

A multi-engine view reveals which of your pages have broad search consensus (ranking well everywhere) and which are engine-dependent (ranking well on one engine but not others). Pages with broad consensus are more robust: they are less likely to lose visibility if any single engine updates its algorithm.

Better Data for AI Search Optimization

As we covered in our AI search optimization guide, different AI platforms use different search indexes. Gemini uses Google. ChatGPT uses Bing. Perplexity uses multiple sources. A multi-engine aggregator gives you the foundation data needed to understand your visibility across all the indexes that feed AI search.

Architecture Overview

The aggregator follows a straightforward pipeline:

Query Input
    |
    v
[Parallel API Calls] ---> Google, Yahoo, DuckDuckGo
    |
    v
[Normalize Results] ---> Uniform format per engine
    |
    v
[Deduplicate] ----------> Match URLs across engines
    |
    v
[Consensus Score] ------> Weight by position + engine count
    |
    v
[Comparison Report] ----> Merged ranking + per-engine data

Each engine returns results in the same Serpent API format, which makes normalization simple. The key challenge is deduplication: the same page can appear with slightly different URLs across engines (trailing slashes, www vs. non-www, HTTP vs. HTTPS, different query parameters).

Project Setup and Dependencies

This project requires Node.js 18+ and a Serpent API key. The only dependency is a fetch library (Node.js 18+ has built-in fetch).

mkdir search-aggregator && cd search-aggregator
npm init -y

# Create aggregator.js - no external dependencies needed with Node 18+

Set your API key as an environment variable:

export SERPENT_API_KEY="your_api_key_here"

Querying All Engines in Parallel

Serpent API supports google, yahoo (which also covers Bing), and ddg as engine values. We query all three in parallel using Promise.all to minimize total latency.

const API_KEY = process.env.SERPENT_API_KEY;
const BASE_URL = "https://apiserpent.com/api/search";
const ENGINES = ["google", "yahoo", "ddg"];

async function queryEngine(query, engine, num = 20) {
  const params = new URLSearchParams({
    q: query,
    engine: engine,
    num: num.toString(),
    apiKey: API_KEY,
  });

  const response = await fetch(`${BASE_URL}?${params}`);
  if (!response.ok) {
    console.error(`[${engine}] HTTP ${response.status}`);
    return { engine, results: [], error: response.status };
  }

  const data = await response.json();
  const organic = data.results?.organic || [];

  return {
    engine,
    results: organic.map((r) => ({
      title: r.title,
      url: r.url,
      snippet: r.snippet,
      position: r.position,
      engine: engine,
    })),
  };
}

async function queryAllEngines(query, num = 20) {
  const promises = ENGINES.map((engine) => queryEngine(query, engine, num));
  const results = await Promise.all(promises);
  return results;
}

By running all three requests in parallel, the total wait time is determined by the slowest engine rather than the sum of all three. In practice, if a single engine takes about 5 to 8 seconds, the multi-engine query takes roughly the same, instead of the 15 to 24 seconds the three requests would take sequentially.
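Note that `queryEngine` above already converts HTTP error responses into an empty result set, but a network-level failure (DNS error, abort) would still reject the whole `Promise.all`. If you want the surviving engines' results to come through regardless, `Promise.allSettled` is a drop-in alternative. A minimal sketch, with the per-engine fetcher passed in as a parameter so the snippet stands on its own:

```javascript
// Resilient variant of queryAllEngines: a rejected engine promise
// becomes an empty result set instead of failing the whole batch.
async function queryAllEnginesSettled(fetchEngine, query, engines, num = 20) {
  const settled = await Promise.allSettled(
    engines.map((engine) => fetchEngine(query, engine, num))
  );
  return settled.map((outcome, i) =>
    outcome.status === "fulfilled"
      ? outcome.value
      : { engine: engines[i], results: [], error: String(outcome.reason) }
  );
}
```

Called as `queryAllEnginesSettled(queryEngine, query, ENGINES)`, it returns the same shape as `queryAllEngines`, so the rest of the pipeline is unchanged.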

Result Deduplication and Merging

The same URL can appear differently across engines. We need a normalization function that reduces URLs to a canonical form for matching.

function normalizeUrl(url) {
  try {
    const parsed = new URL(url);
    // Drop the protocol, www prefix, trailing slashes, and the entire
    // query string (which also removes tracking params like utm_source)
    let normalized = parsed.hostname.replace(/^www\./, "") + parsed.pathname;
    normalized = normalized.replace(/\/+$/, ""); // Remove trailing slashes
    return normalized.toLowerCase();
  } catch {
    return url.toLowerCase();
  }
}

function mergeResults(engineResults) {
  const merged = new Map(); // normalizedUrl -> merged entry

  for (const { engine, results } of engineResults) {
    for (const result of results) {
      const key = normalizeUrl(result.url);

      if (merged.has(key)) {
        // This URL appeared in a previous engine - add engine data
        const existing = merged.get(key);
        existing.engines[engine] = result.position;
        existing.engineCount++;
        // Keep the longer (usually more descriptive) title
        if (result.title.length > existing.title.length) {
          existing.title = result.title;
        }
      } else {
        // First time seeing this URL
        merged.set(key, {
          url: result.url,
          normalizedUrl: key,
          title: result.title,
          snippet: result.snippet,
          engines: { [engine]: result.position },
          engineCount: 1,
        });
      }
    }
  }

  return Array.from(merged.values());
}
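The URL matching can be sanity-checked in isolation. In this standalone snippet (`normalizeUrl` is repeated verbatim so it runs on its own), three common variants of the same page collapse to one key:

```javascript
// Standalone check: protocol, www, trailing slash, case, and query
// string differences all reduce to the same canonical key.
function normalizeUrl(url) {
  try {
    const parsed = new URL(url);
    const normalized = parsed.hostname.replace(/^www\./, "") + parsed.pathname;
    return normalized.replace(/\/+$/, "").toLowerCase();
  } catch {
    return url.toLowerCase();
  }
}

const key = normalizeUrl("https://www.example.com/blog/post/");
console.log(key); // "example.com/blog/post"
console.log(normalizeUrl("http://example.com/blog/post") === key); // true
console.log(normalizeUrl("https://example.com/Blog/Post?utm_source=x") === key); // true
```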

Each merged entry now contains an engines object showing which search engines found the page and at what position. The engineCount tells us how many engines agreed this page is relevant.
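Concretely, a page that Google ranks at position 2, Yahoo at 4, and DuckDuckGo at 2 ends up as an entry like this (the URL and text values here are hypothetical):

```javascript
// Illustrative merged entry (hypothetical values)
const mergedEntry = {
  url: "https://www.example.com/blog/post/", // as returned by the first engine
  normalizedUrl: "example.com/blog/post",    // canonical matching key
  title: "Example Post Title",
  snippet: "Snippet text from the first engine that returned the page.",
  engines: { google: 2, yahoo: 4, ddg: 2 },  // per-engine positions
  engineCount: 3,                            // how many engines returned it
};
```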

Consensus Scoring: Ranking by Cross-Engine Agreement

The consensus score combines two signals: how many engines found the page (breadth) and where each engine ranked it (position quality). Pages that rank well across all three engines get the highest scores.

function calculateConsensusScore(entry, totalEngines = 3) {
  // Position score: better (lower) positions earn more points, up to
  // 20 per engine (position 1 = 20 points, position 21+ = 0)
  let positionScore = 0;
  for (const position of Object.values(entry.engines)) {
    positionScore += Math.max(0, 21 - position);
  }

  // Engine coverage bonus: appearing in multiple engines is valuable
  const coverageMultiplier = entry.engineCount / totalEngines;
  // Bonus: ~1.22x for 1 engine, ~1.44x for 2, ~1.67x for all 3
  const coverageBonus = 1 + (coverageMultiplier * (totalEngines - 1)) / totalEngines;

  // Final score
  entry.consensusScore = Math.round(positionScore * coverageBonus * 10) / 10;
  return entry;
}

function rankByConsensus(mergedResults) {
  return mergedResults
    .map((entry) => calculateConsensusScore(entry))
    .sort((a, b) => b.consensusScore - a.consensusScore);
}

This scoring system naturally surfaces pages that have broad agreement across engines. A page ranking position 3 on all three engines scores higher than a page ranking position 1 on one engine but absent from the other two. This aligns well with AI search optimization goals, since AI systems that pull from multiple indexes will favor pages with broad search consensus.
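You can verify that comparison against the formula itself: position 3 on all three engines earns 3 × (21 − 3) = 54 position points with a full-coverage bonus of about 1.67, while position 1 on a single engine earns 20 points with a bonus of about 1.22. A standalone restatement of the scoring logic for the check:

```javascript
// Standalone restatement of the consensus formula from
// calculateConsensusScore, taking a map of engine -> position.
function consensusScore(positions, totalEngines = 3) {
  let positionScore = 0;
  for (const pos of Object.values(positions)) {
    positionScore += Math.max(0, 21 - pos);
  }
  const count = Object.keys(positions).length;
  const bonus = 1 + ((count / totalEngines) * (totalEngines - 1)) / totalEngines;
  return Math.round(positionScore * bonus * 10) / 10;
}

console.log(consensusScore({ google: 3, yahoo: 3, ddg: 3 })); // 90
console.log(consensusScore({ google: 1 }));                   // 24.4
```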

Building a Comparison Report

The comparison report shows the final consensus ranking alongside per-engine positions, making it easy to spot engine-specific discrepancies.

function generateReport(rankedResults, query) {
  console.log(`\n${"=".repeat(80)}`);
  console.log(`Multi-Engine Search Report: "${query}"`);
  console.log(`Engines: ${ENGINES.join(", ")} | Date: ${new Date().toISOString().split("T")[0]}`);
  console.log(`${"=".repeat(80)}\n`);

  console.log(
    `${"#".padStart(3)}  ${"Score".padStart(6)}  ${"G".padStart(3)}  ` +
    `${"Y".padStart(3)}  ${"D".padStart(3)}  URL`
  );
  console.log("-".repeat(80));

  rankedResults.slice(0, 30).forEach((entry, index) => {
    const rank = String(index + 1).padStart(3);
    const score = String(entry.consensusScore).padStart(6);
    const g = (entry.engines.google || "-").toString().padStart(3);
    const y = (entry.engines.yahoo || "-").toString().padStart(3);
    const d = (entry.engines.ddg || "-").toString().padStart(3);
    const url = entry.normalizedUrl.substring(0, 45);
    console.log(`${rank}  ${score}  ${g}  ${y}  ${d}  ${url}`);
  });

  // Summary statistics
  const total = rankedResults.length;
  const allThree = rankedResults.filter((r) => r.engineCount === 3).length;
  const twoEngines = rankedResults.filter((r) => r.engineCount === 2).length;
  const oneEngine = rankedResults.filter((r) => r.engineCount === 1).length;

  console.log(`\n--- Summary ---`);
  console.log(`Total unique URLs: ${total}`);
  console.log(`Found on all 3 engines: ${allThree} (${((allThree/total)*100).toFixed(0)}%)`);
  console.log(`Found on 2 engines: ${twoEngines} (${((twoEngines/total)*100).toFixed(0)}%)`);
  console.log(`Found on 1 engine only: ${oneEngine} (${((oneEngine/total)*100).toFixed(0)}%)`);
}

// Also provide JSON output for programmatic use
function toJSON(rankedResults, query) {
  return {
    query,
    timestamp: new Date().toISOString(),
    engines: ENGINES,
    totalResults: rankedResults.length,
    results: rankedResults.map((r, i) => ({
      rank: i + 1,
      consensusScore: r.consensusScore,
      url: r.url,
      title: r.title,
      engineCount: r.engineCount,
      positions: r.engines,
    })),
  };
}

Complete Working Code

Here is the complete aggregator as a single file. Save it as aggregator.js and run it with node aggregator.js "your search query".

// aggregator.js - Multi-Engine Search Aggregator
// Usage: SERPENT_API_KEY=your_key node aggregator.js "search query"

const API_KEY = process.env.SERPENT_API_KEY;
const BASE_URL = "https://apiserpent.com/api/search";
const ENGINES = ["google", "yahoo", "ddg"];

async function queryEngine(query, engine, num = 20) {
  const params = new URLSearchParams({ q: query, engine, num: String(num), apiKey: API_KEY });
  const res = await fetch(`${BASE_URL}?${params}`);
  if (!res.ok) return { engine, results: [] };
  const data = await res.json();
  return {
    engine,
    results: (data.results?.organic || []).map(r => ({
      title: r.title, url: r.url, snippet: r.snippet,
      position: r.position, engine
    }))
  };
}

function normalizeUrl(url) {
  try {
    const p = new URL(url);
    return (p.hostname.replace(/^www\./, "") + p.pathname).replace(/\/+$/, "").toLowerCase();
  } catch { return url.toLowerCase(); }
}

function mergeAndScore(engineResults) {
  const merged = new Map();
  for (const { engine, results } of engineResults) {
    for (const r of results) {
      const key = normalizeUrl(r.url);
      if (merged.has(key)) {
        const e = merged.get(key);
        e.engines[engine] = r.position;
        e.engineCount++;
        if (r.title.length > e.title.length) e.title = r.title;
      } else {
        merged.set(key, {
          url: r.url, normalizedUrl: key, title: r.title,
          snippet: r.snippet, engines: { [engine]: r.position }, engineCount: 1
        });
      }
    }
  }

  return Array.from(merged.values()).map(entry => {
    let score = 0;
    for (const pos of Object.values(entry.engines)) score += Math.max(0, 21 - pos);
    const bonus = 1 + ((entry.engineCount / ENGINES.length) * (ENGINES.length - 1)) / ENGINES.length;
    entry.consensusScore = Math.round(score * bonus * 10) / 10;
    return entry;
  }).sort((a, b) => b.consensusScore - a.consensusScore);
}

async function main() {
  const query = process.argv[2];
  if (!query) { console.log("Usage: node aggregator.js \"search query\""); return; }
  if (!API_KEY) { console.log("Set SERPENT_API_KEY environment variable"); return; }

  console.log(`Querying ${ENGINES.length} engines for: "${query}"...`);
  const engineResults = await Promise.all(ENGINES.map(e => queryEngine(query, e)));
  const ranked = mergeAndScore(engineResults);

  console.log(`\n#    Score   Google  Yahoo   DDG     URL`);
  console.log("-".repeat(80));
  ranked.slice(0, 25).forEach((r, i) => {
    const g = (r.engines.google || "-").toString().padEnd(8);
    const y = (r.engines.yahoo || "-").toString().padEnd(8);
    const d = (r.engines.ddg || "-").toString().padEnd(8);
    console.log(`${String(i+1).padEnd(5)}${String(r.consensusScore).padEnd(8)}${g}${y}${d}${r.normalizedUrl.slice(0,40)}`);
  });

  const total = ranked.length;
  const all3 = ranked.filter(r => r.engineCount === 3).length;
  console.log(`\nTotal: ${total} unique URLs | All 3 engines: ${all3} (${Math.round(all3/total*100)}%)`);
}

main().catch(console.error);

Sample Output

Querying 3 engines for: "best project management tools 2026"...

#    Score   Google  Yahoo   DDG     URL
--------------------------------------------------------------------------------
1    98.3    1       2       1       monday.com/blog/project-management/tools
2    93.3    3       1       3       pcmag.com/picks/the-best-project-management
3    91.7    2       4       2       forbes.com/advisor/business/best-project
4    80      5       3       7       clickup.com/blog/project-management-tools
5    47.7    -       5       4       techradar.com/best/best-project-management
6    46.2    4       6       -       asana.com/resources/project-management-tools
7    43.3    7       -       5       zapier.com/blog/best-project-management
...

Total: 42 unique URLs | All 3 engines: 8 (19%)

Extensions and Next Steps

Add News and Image Search

Serpent API supports /api/news and /api/images endpoints with the same engine parameter. You can extend the aggregator to compare news results or image results across engines using the same merge and score logic. News aggregation is particularly useful for PR monitoring and brand tracking.
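A minimal sketch of that extension: parameterize the endpoint path and reuse the same query-building logic. This assumes /api/news and /api/images accept the same q, engine, num, and apiKey parameters as /api/search, so check the API reference before relying on it:

```javascript
// Build a request URL for any Serpent API endpoint that shares the
// search parameter set (assumed here for /api/news and /api/images).
const API_BASE = "https://apiserpent.com/api";

function buildEndpointUrl(endpoint, query, engine, num, apiKey) {
  const params = new URLSearchParams({
    q: query,
    engine,
    num: String(num),
    apiKey,
  });
  return `${API_BASE}/${endpoint}?${params}`;
}

// e.g. await fetch(buildEndpointUrl("news", "acme corp", "google", 20, API_KEY));
```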

Track a Domain's Cross-Engine Rankings

Add a domain filter to the aggregator to build a rank tracker that shows your positions on all three engines for each keyword:

function findDomainRankings(ranked, domain) {
  return ranked
    .filter(r => r.normalizedUrl.includes(domain))
    .map(r => ({
      url: r.url,
      google: r.engines.google || "N/A",
      yahoo: r.engines.yahoo || "N/A",
      ddg: r.engines.ddg || "N/A",
      consensusScore: r.consensusScore
    }));
}

// Usage:
const myRankings = findDomainRankings(ranked, "yourdomain.com");
console.log("Your cross-engine rankings:", myRankings);

Detect Ranking Discrepancies

Find pages where your rankings differ significantly across engines. A page ranking position 2 on Google but position 18 on Bing represents a Bing optimization opportunity—and improving your Bing ranking directly improves your chances of being cited by ChatGPT.
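One simple implementation, sketched over the merged entries produced by mergeResults (the threshold of 10 positions is an arbitrary starting point):

```javascript
// Flag merged entries whose best and worst cross-engine positions
// differ by more than `threshold` ranks.
function findDiscrepancies(mergedResults, threshold = 10) {
  return mergedResults
    .filter((r) => r.engineCount >= 2) // a spread needs at least two engines
    .map((r) => {
      const positions = Object.values(r.engines);
      return { ...r, spread: Math.max(...positions) - Math.min(...positions) };
    })
    .filter((r) => r.spread > threshold);
}
```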

Schedule Automated Comparisons

Run the aggregator on a cron schedule (weekly is a good starting point) and store results in a database. Over time, you will be able to track how cross-engine consensus shifts for your target keywords and react to changes before they impact your AI visibility.
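For example, a crontab entry that runs the aggregator every Monday at 06:00 and appends the report to a log file (the path, key, and keyword are placeholders for your own setup):

```shell
# m h dom mon dow  command
0 6 * * 1  cd /path/to/search-aggregator && SERPENT_API_KEY=your_key node aggregator.js "your keyword" >> reports.log 2>&1
```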

For more developer tutorials, see our guide on integrating SERP APIs with AI agents and our rank tracker tutorial.

Start Building with Serpent API

Query Google, Yahoo, Bing, and DuckDuckGo through a single API. Same format, same key, all engines. 100 free searches included.
