
Search API

# The best web search for your AI

## The highest accuracy web search API, built from the ground up for AIs

[Get Started](https://platform.parallel.ai/) · [Get a Demo](https://forms.fillout.com/t/sL37Ja5wWKus)

Why Parallel

## An agent is only as good as its context

Parallel returns the best context from the web

## Declare semantic objectives, not just keywords

AI tells Parallel Search exactly what it's looking for

For example, an agent calls a `parallel_search` tool with a semantic objective:

```json
{
  "objective": "Find technical guides or open source repos for implementing a transformer from scratch"
}
```


## Get back URLs ranked for token relevancy

Parallel surfaces the most information-dense pages for the agent's next action

For a query about affordable electric vehicles, for example, Parallel Search returns ranked, information-dense results:

1. Best available EV under 50K : r/electriccars - Reddit
2. New Electric Crossovers and SUVs Under $50k - J.D. Power
3. AWD EVs Under $50,000 - MotorWeek
4. Cheapest Electric Cars of 2025 - Kelley Blue Book
5. AWD EV's Under $50,000 - YouTube

By contrast, a conventional search results page for the same query leads with sponsored listings ("2026 Subaru Solterra EV", "The 2026 Toyota bZ - A Modern Masterpiece") before organic results such as "Used Electric Cars for Sale".

## Reason on compressed token efficient excerpts

Each URL is distilled into the highest-value tokens for optimal context windows

For a query about transformer architectures, for example, Parallel Search surfaces [1706.03762] Attention Is All You Need (https://arxiv.org/abs/1706.03762) and distills it into its most relevant excerpts.
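A minimal sketch of how an agent might pack these compressed excerpts into its context window, assuming each result is a dict carrying `title`, `url`, and `excerpts` fields (`title` and `excerpts` match the sample output shown later on this page; `url` and the list shape are assumptions):

```python
# Fold ranked Parallel Search results into one compact context block.
# Assumed shape: [{"title": ..., "url": ..., "excerpts": [...]}, ...]
def to_context(results: list[dict], char_budget: int = 8000) -> str:
    blocks, used = [], 0
    for r in results:  # results arrive ranked, most relevant first
        block = f"{r['title']} ({r['url']})\n" + "\n".join(r["excerpts"])
        if used + len(block) > char_budget:
            break  # stop before overflowing the agent's context budget
        blocks.append(block)
        used += len(block)
    return "\n\n---\n\n".join(blocks)
```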


## We optimize every web token in the context window

This means agent responses are more accurate and lower cost

[Search Playground](https://platform.parallel.ai/)

### HLE Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 82          | 47           |
| Others    | exa          | 138         | 24           |
| Others    | tavily       | 190         | 21           |
| Others    | perplexity   | 126         | 30           |
| Others    | openai gpt-5 | 143         | 45           |

### About this benchmark

This [benchmark](https://lastexam.ai/) consists of 2,500 questions developed by subject-matter experts across dozens of subjects (e.g. math, humanities, natural sciences). Each question has a known solution that is unambiguous and easily verifiable, but requires sophisticated web retrieval and reasoning. Results are reported on a sample of 100 questions from this benchmark.

### Methodology

- **Evaluation**: Results are based on tests run using official Search MCP servers provided as an MCP tool to OpenAI's GPT-5 model via the Responses API. In all cases, the MCP tools were limited to only the appropriate web search tool. Answers were evaluated using an LLM as a judge (GPT-4.1).
- **Cost calculation**: Cost reflects the average cost per query across all questions run, including both the search API call and LLM token cost.
- **Testing dates**: Testing was conducted from November 3rd to November 5th.

The same methodology applies to all benchmarks reported below.

### BrowseComp Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 156         | 58           |
| Others    | exa          | 233         | 29           |
| Others    | tavily       | 314         | 23           |
| Others    | perplexity   | 256         | 22           |
| Others    | openai gpt-5 | 253         | 53           |

### About this benchmark

This [benchmark](https://openai.com/index/browsecomp/), created by OpenAI, contains 1,266 questions requiring multi-hop reasoning, creative search formulation, and synthesis of contextual clues across time periods. Results are reported on a sample of 100 questions from this benchmark.


### WebWalker-Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 42          | 81           |
| Others    | exa          | 107         | 48           |
| Others    | tavily       | 156         | 79           |
| Others    | perplexity   | 91          | 67           |
| Others    | openai gpt-5 | 88          | 73           |

### About this benchmark

This [benchmark](https://arxiv.org/abs/2501.07572) is designed to assess the ability of LLMs to perform web traversal. Answering its questions requires crawling and extracting content from website subpages. Results are reported on a sample of 100 questions from this benchmark.


### FRAMES-Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 42          | 92           |
| Others    | exa          | 81          | 81           |
| Others    | tavily       | 122         | 87           |
| Others    | perplexity   | 95          | 83           |
| Others    | openai gpt-5 | 68          | 90           |

### About this benchmark

This [benchmark](https://huggingface.co/datasets/google/frames-benchmark) contains 824 challenging multi-hop questions designed to test factuality, retrieval accuracy, and reasoning. Results are reported on a sample of 100 questions from this benchmark.


### Batched SimpleQA - Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 50          | 90           |
| Others    | exa          | 119         | 71           |
| Others    | tavily       | 227         | 59           |
| Others    | perplexity   | 100         | 74           |
| Others    | openai gpt-5 | 91          | 88           |

### About this benchmark

This benchmark was created by batching 3 independent questions from the original [SimpleQA dataset](https://openai.com/index/introducing-simpleqa/) into 100 composite, more complex questions.


### SimpleQA Search

| Series    | Model        | Cost  (CPM) | Accuracy (%) |
| --------- | ------------ | ----------- | ------------ |
| Parallel  | parallel     | 17          | 98           |
| Others    | exa          | 57          | 87           |
| Others    | tavily       | 110         | 93           |
| Others    | perplexity   | 52          | 92           |
| Others    | openai gpt-5 | 37          | 98           |

### About this benchmark

This [benchmark](https://openai.com/index/introducing-simpleqa/), created by OpenAI, contains 4,326 questions focused on short, fact-seeking queries across a variety of domains. Results are reported on a sample of 100 questions from this benchmark.


# Powered by our proprietary web-scale index

With innovations in retrieval, crawling, indexing, and reasoning

- Billions of pages covering the full depth and breadth of the public web
- Millions of pages added daily
- Intelligently recrawled to keep data fresh

# The knowledge of the entire public web, in a single tool call

Integrated directly, or add our [MCP Server](https://docs.parallel.ai/integrations/mcp/programmatic-use)

[Search Playground](https://platform.parallel.ai/) · [Docs](https://docs.parallel.ai/search/search-quickstart)
```bash
curl https://api.parallel.ai/v1beta/search \
  -H "Content-Type: application/json" \
  -H "x-api-key: $PARALLEL_API_KEY" \
  -H "parallel-beta: search-extract-2025-10-10" \
  -d '{
    "objective": "Find latest information about Parallel Web Systems. Focus on new product releases, benchmarks, or company announcements.",
    "search_queries": ["Parallel Web Systems products","Parallel Web Systems announcements"],
    "max_results": 10,
    "max_chars_per_result": 10000
  }'
```
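The same request as a Python sketch (the endpoint, headers, and body fields mirror the curl call above):

```python
import os
import requests

# POST the search request; requests sets the JSON Content-Type header.
response = requests.post(
    "https://api.parallel.ai/v1beta/search",
    headers={
        "x-api-key": os.environ["PARALLEL_API_KEY"],
        "parallel-beta": "search-extract-2025-10-10",
    },
    json={
        "objective": (
            "Find latest information about Parallel Web Systems. Focus on new "
            "product releases, benchmarks, or company announcements."
        ),
        "search_queries": [
            "Parallel Web Systems products",
            "Parallel Web Systems announcements",
        ],
        "max_results": 10,
        "max_chars_per_result": 10000,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```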

## Scale with unmatched price-performance

Get started with up to 16,000 free search requests

$.005 per request with 10 results + $.001 per page extracted
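At these rates, a single request returning 10 results with two pages extracted costs $.005 + 2 × $.001 = $.007, and the 16,000 free requests are worth about $80 at the base per-request rate.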

[Get Started](https://platform.parallel.ai/) · [Get a Demo](https://forms.fillout.com/t/sL37Ja5wWKus)

### Search API

Ranked web URLs with token-dense compressed excerpts.

| Spec              | Search API                          |
| ----------------- | ----------------------------------- |
| Inputs            | Search objective, keywords          |
| Outputs           | Ranked URLs, compressed excerpts    |
| Best for          | Web search tool calls for AI agents |
| Latency           | < 5s, synchronous                   |
| Rate limits       | 600 requests / min                  |
| Security          | SOC 2                               |
| Price per request | $.005 for 10 results                |

[Search playground](https://platform.parallel.ai/play/search)

## Every control you need, across any web page


### Premium content extraction

Fetch content from PDFs and from sites that are JS-heavy or gated by CAPTCHAs

### Freshness policies

Set page-age triggers for live crawls, with timeout thresholds to guarantee latency. Controls include a live-fetch toggle, `max_age` (hours, e.g. 24), and `fetch_timeout` (seconds, e.g. 90).
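What a freshness policy might look like in a request body, using the controls named above (`max_age` in hours, `fetch_timeout` in seconds); the wrapper object and exact field names are assumptions for illustration, not the documented schema:

```python
# Illustrative only: field names below are assumed, not the documented schema.
search_request = {
    "objective": "Latest coverage of Nvidia reaching a $5 trillion market cap",
    "max_results": 5,
    "fetch_policy": {        # hypothetical wrapper for the controls above
        "live_fetch": True,  # allow live crawls, not just cached pages
        "max_age": 24,       # hours; older cached pages trigger a re-fetch
        "fetch_timeout": 90, # seconds; cap live-fetch latency
    },
}
```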

### LLM friendly outputs

Choose between dense snippets or full page contents, in markdown LLMs understand. Example output:

```json
{
  "title": "Nvidia Becomes First $5 Trillion Company - WSJ",
  "excerpts": [
    "Last updated: 2 days ago The tech giant owes much of its $4.89 trillion market capitalization to the use of its systems to train AI models. Now it's pushing deeper into ..."
  ]
}
```
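One way to steer between the two is the `max_chars_per_result` parameter from the request example above; whether this knob alone selects snippets versus full contents is an assumption:

```python
# Dense snippets: a small per-result budget keeps the context window lean.
snippet_request = {"objective": "GPU memory bandwidth explained", "max_chars_per_result": 1500}

# Fuller contents: raise the budget when the agent needs whole-page detail.
full_request = {"objective": "GPU memory bandwidth explained", "max_chars_per_result": 10000}
```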

### Source control

Pick which domains are included or excluded from your web search results, such as an INCLUDE policy for nist.gov.
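A sketch of a domain policy in a request body, mirroring the INCLUDE nist.gov control above; `source_policy` and its fields are assumed names for illustration:

```python
# Illustrative only: "source_policy" and its fields are assumed names.
search_request = {
    "objective": "Current NIST guidance on post-quantum cryptography",
    "source_policy": {
        "include_domains": ["nist.gov"],    # only return results from these
        "exclude_domains": ["example.com"], # never return results from these
    },
}
```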

## Secure and trusted

- Zero data retention
- SOC 2 Type 2
- No training

## FAQ

### What is Parallel Search?

Parallel Search (API) is the highest accuracy search engine for AIs. It allows developers to build AI apps, agents, and workflows that can search for and retrieve data from the web. It can be integrated into agent workflows for deep research across multiple steps, or for more basic single-hop queries.

### What is declarative semantic search?

Declarative semantic search lets agents express intent in natural language rather than construct keyword queries. Instead of "Columbus" AND "corporate law" AND "disability", an agent specifies: "Columbus-based corporate law firms specializing in disability care." The Search API interprets meaning and context, not just keywords, making it natural to integrate into agent workflows where you already have rich context from previous reasoning steps.
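In request terms, the same intent can be passed as keyword queries, a semantic objective, or both (the `objective` and `search_queries` fields appear in the API example earlier on this page):

```python
# Keyword-style query vs. a declarative semantic objective.
keyword_request = {"search_queries": ["Columbus corporate law disability"]}
objective_request = {
    "objective": "Columbus-based corporate law firms specializing in disability care"
}
```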

### How is Parallel different from other search APIs?

Parallel is the only Search API built from the ground up for AI agents. This means that agents can specify declarative semantic objectives and Parallel returns URLs and compressed excerpts based on token relevancy. The result is extremely dense web tokens optimized to engineer your agent's context for better reasoning at the next turn. Agents using Parallel Search produce answers with higher accuracy, fewer round trips, and lower cost.

### How large is Parallel's index?

We maintain a large web index containing billions of pages. Our crawling, retrieval, and ranking systems add and update millions of pages daily to keep the index fresh.

### Does Parallel operate a web crawler?

Yes, Parallel operates a web crawler to support the quality and coverage of the index. Our crawler respects _robots.txt_ and related crawling directives. [Learn more about Parallel's crawler here](https://docs.parallel.ai/resources/crawler).

### What are dense excerpts?

Dense excerpts are the most query-relevant content from a webpage, compressed to be extremely token efficient for an agent. These compressed excerpts reduce noise by engineering an agent's context window to contain only the most relevant tokens to reason on, leading to higher accuracy, fewer round trips, and less token use.

### How does Parallel improve end-to-end latency?

End-to-end latency measures total time from agent input to final output, not single-search latency. Our semantic search architecture and dense snippets reduce the number of searches required to reach quality outputs. Two high-precision searches with Parallel beat three lower-quality attempts elsewhere, saving both time and tokens.
