
Pricing

# Pay as you go

Run up to 16,000 requests for free

|                   | Task API                                                  | Search API                          | Extract API                             | Chat API                     | Monitor API                                 | Find All API                                    |
| ----------------- | --------------------------------------------------------- | ----------------------------------- | --------------------------------------- | ---------------------------- | ------------------------------------------- | ----------------------------------------------- |
| Inputs            | Existing structured data, Question                        | Search objective, Keywords          | URL, Extract objective                  | Question                     | Query, Frequency                            | Query                                           |
| Outputs           | Deep research, Structured enrichments                     | Ranked URLs, Compressed excerpts    | Full page contents, Compressed excerpts | Free text, Structured JSON   | New events                                  | Matches                                         |
| Best for          | Deep research, database enrichment, or workflow automation | Web search tool calls for AI agents | URL content extraction                  | Interactive chat apps and UX | Continuous monitoring of changes on the web | Finding all entities matching specific criteria |
| Latency           | 5s - 30min, asynchronous                                  | < 5s, synchronous                   | < 3s, synchronous                       | < 5s, synchronous            | Asynchronous                                | 10min - 1hr, asynchronous                       |
| Basis             | Citations, reasoning, confidence, excerpts                | —                                   | —                                       | Citations                    | Citations                                   | Citations, reasoning, confidence, excerpts      |
| Rate limits       | 2,000 requests / min                                      | 600 requests / min                  | 600 requests / min                      | 300 requests / min           | 300 requests / min                          | 25 / hr                                         |
| Security          | SOC2                                                      | SOC2                                | SOC2                                    | SOC2                         | SOC2                                        | SOC2                                            |
| Price per request | $0.005 - $2.40                                            | $0.005 for 10 results               | $0.001                                  | $0.005                       | $0.003                                      | $0.03 - $1 per match                            |
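The per-minute rate limits above can also be enforced client-side before requests ever reach the server. A minimal sliding-window limiter sketch (the limit values come from the table; the class itself and the injectable clock are illustrative assumptions, not part of any Parallel SDK):

```python
import collections
import time

# Published per-minute request limits, taken from the table above.
RATE_LIMITS_PER_MIN = {
    "task": 2000,
    "search": 600,
    "extract": 600,
    "chat": 300,
    "monitor": 300,
}

class SlidingWindowLimiter:
    """Allows at most `limit_per_min` calls in any rolling 60-second window."""

    def __init__(self, limit_per_min: int, clock=time.monotonic):
        self.limit = limit_per_min
        self.clock = clock  # injectable for deterministic tests
        self.calls = collections.deque()

    def allow(self) -> bool:
        now = self.clock()
        # Evict timestamps that have left the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

A limiter built with `RATE_LIMITS_PER_MIN["chat"]`, for example, admits 300 calls and then rejects further calls until the oldest timestamp ages out of the window.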

Task API

Deep web research with structured outputs, optimized for quality & freshness

Search API

Ranked web URLs with token-dense compressed excerpts

Extract API

Directly extract web page contents

Chat API

Fast, web-researched LLM completions

Monitor API

Track changes to any event on the web

Find All API

Create structured datasets from any text query

Try each API in the playground: [Task](https://platform.parallel.ai/play), [Search](https://platform.parallel.ai/play/search), [Extract](https://platform.parallel.ai/play/extract), [Chat](https://platform.parallel.ai/play/chat), [Monitor](https://platform.parallel.ai/play/monitor), [Find All](https://platform.parallel.ai/play/find-all)


## Enterprise

Built for scalable, reliable, enterprise workloads

[Get a demo](https://form.fillout.com/t/sL37Ja5wWKus)

- Volume discounts
- Custom data retention agreements
- Data Protection Agreement
- Custom rate limits
- Early access to new products
- Dedicated onboarding and technical support

## Allocate compute based on task complexity

Parallel lets you easily flex compute based on the complexity of your task

## Pricing breakdown


### Task API

| Processor | Cost (per 1K requests) | Best for                               | Latency    | Max Fields | Basis                                      |
| --------- | ---------------------- | -------------------------------------- | ---------- | ---------- | ------------------------------------------ |
| Lite      | $5                     | Basic information retrieval            | 5s-60s     | ~2         | Citations, Reasoning                       |
| Base      | $10                    | Simple web research                    | 15s-100s   | ~5         | Citations, Reasoning                       |
| Core      | $25                    | Complex web research                   | 1min-5min  | ~10        | Citations, Reasoning, Excerpts, Confidence |
| Core2x    | $50                    | Very complex web research              | 2min-5min  | ~10        | Citations, Reasoning, Excerpts, Confidence |
| Pro       | $100                   | Exploratory web research               | 3min-9min  | ~20        | Citations, Reasoning, Excerpts, Confidence |
| Ultra     | $300                   | Extensive deep research                | 5min-25min | ~20        | Citations, Reasoning, Excerpts, Confidence |
| Ultra2x   | $600                   | Advanced deep research with 2x compute | 5min-25min | ~25        | Citations, Reasoning, Excerpts, Confidence |
| Ultra4x   | $1,200                 | Advanced deep research with 4x compute | 8min-30min | ~25        | Citations, Reasoning, Excerpts, Confidence |
| Ultra8x   | $2,400                 | Advanced deep research with 8x compute | 8min-30min | ~25        | Citations, Reasoning, Excerpts, Confidence |
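The tiers trade cost against research depth and output complexity. As a rough illustration of picking a tier (costs and approximate field budgets come from the table above; the helper functions themselves are hypothetical, not part of any Parallel SDK):

```python
# Task API processors: (cost per 1K requests in USD, approx. max output fields),
# taken from the table above.
TASK_PROCESSORS = {
    "lite":    (5,    2),
    "base":    (10,   5),
    "core":    (25,   10),
    "core2x":  (50,   10),
    "pro":     (100,  20),
    "ultra":   (300,  20),
    "ultra2x": (600,  25),
    "ultra4x": (1200, 25),
    "ultra8x": (2400, 25),
}

def cheapest_processor(fields_needed: int) -> str:
    """Return the lowest-cost processor whose field budget covers the request."""
    candidates = [
        (cost, name)
        for name, (cost, max_fields) in TASK_PROCESSORS.items()
        if max_fields >= fields_needed
    ]
    if not candidates:
        raise ValueError("no processor supports that many output fields")
    return min(candidates)[1]

def task_cost(processor: str, n_requests: int) -> float:
    """Total cost in USD for n_requests on a given processor."""
    return TASK_PROCESSORS[processor][0] * n_requests / 1000
```

For example, a job needing ~8 structured fields per row lands on Core, so enriching 1,000 rows costs $25.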

### Search API

| Processor | Cost (per 1K requests with 10 results) | Cost (per 1K additional results) | Best for             | Latency |
| --------- | -------------------------------------- | -------------------------------- | -------------------- | ------- |
| Search    | $5                                     | $1                               | High accuracy search | 2s-5s   |
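A request returns 10 results by default; additional results are billed at $1 per 1K, i.e. $0.001 each. A sketch of the per-request arithmetic (the function is illustrative, using the rates from the table above):

```python
def search_request_cost(num_results: int = 10) -> float:
    """USD cost of one Search API request: $5/1K requests covers the first
    10 results, plus $1/1K for each additional result beyond 10."""
    base = 5 / 1000
    extra = max(0, num_results - 10) * (1 / 1000)
    return base + extra
```

So a request that asks for 25 results costs $0.005 + 15 × $0.001 = $0.02.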

### Extract API

| Processor | Cost (per 1K results) | Best for                          | Latency                        |
| --------- | --------------------- | --------------------------------- | ------------------------------ |
| Extract   | $1                    | Direct webpage content extraction | 1s-3s (cached), 60s-90s (live) |

### Chat API

| Processor | Cost (per 1K requests) | Best for                                | Latency | Basis     |
| --------- | ---------------------- | --------------------------------------- | ------- | --------- |
| Speed     | $5                     | Low-latency, web-based chat completions | <5s     | Citations |

### Monitor API

| Processor | Cost (per 1K requests) | Best for                                    | Frequency               | Basis     |
| --------- | ---------------------- | ------------------------------------------- | ----------------------- | --------- |
| Monitor   | $3                     | Continuous monitoring of changes on the web | Hourly / Daily / Weekly | Citations |
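Because Monitor pricing is flat per request, monthly cost is easy to project. A sketch, assuming each scheduled check counts as one billed request (an assumption, not stated above) and an approximate 30-day month:

```python
MONITOR_COST_PER_REQUEST = 3 / 1000  # $3 per 1K requests, from the table above

# Approximate checks in a 30-day month for each supported frequency.
CHECKS_PER_MONTH = {"hourly": 24 * 30, "daily": 30, "weekly": 4}

def monthly_monitor_cost(frequency: str, n_monitors: int = 1) -> float:
    """Approximate USD/month for n_monitors at a given check frequency."""
    return CHECKS_PER_MONTH[frequency] * MONITOR_COST_PER_REQUEST * n_monitors
```

An hourly monitor runs ~720 checks a month, about $2.16; a fleet of 100 daily monitors is about $9/month.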

### Find All API

| Processor | Fixed Cost | Cost Per Match | Best for                                                  |
| --------- | ---------- | -------------- | --------------------------------------------------------- |
| Base      | $0.25      | $0.03          | Broad, common queries where you expect many matches       |
| Core      | $2.00      | $0.15          | Specific queries with moderate expected matches           |
| Pro       | $10.00     | $1.00          | Highly specific queries with rare or hard-to-find matches |
| Preview   | $0.10      | $0.00          | Testing queries (~10 candidates)                          |
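Find All bills a fixed cost per query plus a per-match cost, so total spend depends on how many entities the query surfaces. A sketch of the arithmetic (rates from the table above; the helper is hypothetical):

```python
# Find All processors: (fixed cost per query, cost per match) in USD,
# from the table above.
FINDALL_PROCESSORS = {
    "base":    (0.25,  0.03),
    "core":    (2.00,  0.15),
    "pro":     (10.00, 1.00),
    "preview": (0.10,  0.00),
}

def findall_cost(processor: str, matches: int) -> float:
    """Total USD cost of one Find All query returning `matches` matches."""
    fixed, per_match = FINDALL_PROCESSORS[processor]
    return fixed + per_match * matches
```

A broad Base query returning 100 matches costs $0.25 + 100 × $0.03 = $3.25, while a Preview run stays at its $0.10 fixed cost regardless of candidates.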


## Scale with predictable cost

Parallel APIs are priced per request, not per token. You always know the exact cost of a query before you run it.
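Because pricing is per request, a planned request mix converts directly into a bill before any query runs. A minimal budgeting sketch (per-request prices from the tables above; Task is shown at its Lite rate, though actual Task pricing varies from $0.005 to $2.40 by processor):

```python
# Per-request prices in USD, from the pricing tables above.
PRICE_PER_REQUEST = {
    "task_lite": 0.005,  # Task API varies by processor; Lite rate shown
    "search":    0.005,  # for 10 results
    "extract":   0.001,
    "chat":      0.005,
    "monitor":   0.003,
}

def monthly_bill(requests_by_api: dict) -> float:
    """Exact USD cost for a planned monthly request mix."""
    return sum(PRICE_PER_REQUEST[api] * n for api, n in requests_by_api.items())
```

For instance, 100K search requests plus 50K extractions come to $500 + $50 = $550, known in advance.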

[Developer platform](https://platform.parallel.ai) | [Get a demo](https://form.fillout.com/t/sL37Ja5wWKus)

Get up to $5K in free credits

Qualified startups can receive up to $5K in free credits from Parallel.

[Apply now](https://form.fillout.com/t/cNxJPKmh7eus)

Use pre-committed spend on Parallel

Use pre-committed AWS spend on Parallel via the AWS Marketplace.

[Get in touch](https://form.fillout.com/t/x5mAateBUnus)


## Highest quality at every price point

State of the art across several benchmarks

### HLE-Search

| Series   | Model        | Cost (CPM) | Accuracy (%) |
| -------- | ------------ | ---------- | ------------ |
| Parallel | Parallel     | 82         | 47           |
| Others   | Exa          | 138        | 24           |
| Others   | Tavily       | 190        | 21           |
| Others   | Perplexity   | 126        | 30           |
| Others   | OpenAI GPT-5 | 143        | 45           |

### About this benchmark

This [benchmark](https://lastexam.ai/) consists of 2,500 questions developed by subject-matter experts across dozens of subjects (e.g. math, humanities, natural sciences). Each question has a known solution that is unambiguous and easily verifiable, but requires sophisticated web retrieval and reasoning. Results are reported on a sample of 100 questions from this benchmark. Learn more in our [latest blog](https://parallel.ai/blog/introducing-parallel-search).

### Methodology

- **Evaluation**: Results are based on tests run using official Search MCP servers provided as an MCP tool to OpenAI's GPT-5 model via the Responses API. In all cases, the MCP tools were limited to only the appropriate web search tool. Answers were evaluated using an LLM as a judge (GPT-4.1).
- **Cost Calculation**: Cost reflects the average cost per query across all questions run, including both the search API call and LLM token cost.
- **Testing Dates**: Testing was conducted from November 3rd to November 5th.

### BrowseComp-Search

| Series   | Model        | Cost (CPM) | Accuracy (%) |
| -------- | ------------ | ---------- | ------------ |
| Parallel | Parallel     | 156        | 58           |
| Others   | Exa          | 233        | 29           |
| Others   | Tavily       | 314        | 23           |
| Others   | Perplexity   | 256        | 22           |
| Others   | OpenAI GPT-5 | 253        | 53           |

### About the benchmark

This [benchmark](https://openai.com/index/browsecomp/), created by OpenAI, contains 1,266 questions requiring multi-hop reasoning, creative search formulation, and synthesis of contextual clues across time periods. Results are reported on a sample of 100 questions from this benchmark. Learn more in our [latest blog](https://parallel.ai/blog/introducing-parallel-search).

### Methodology

- **Evaluation**: Results are based on tests run using official Search MCP servers provided as an MCP tool to OpenAI's GPT-5 model via the Responses API. In all cases, the MCP tools were limited to only the appropriate web search tool. Answers were evaluated using an LLM as a judge (GPT-4.1).
- **Cost Calculation**: Cost reflects the average cost per query across all questions run, including both the search API call and LLM token cost.
- **Testing Dates**: Testing was conducted from November 3rd to November 5th.

### BrowseComp (Deep Research)

| Series   | Model      | Cost (CPM) | Accuracy (%) |
| -------- | ---------- | ---------- | ------------ |
| Parallel | Ultra      | 300        | 45           |
| Parallel | Ultra2x    | 600        | 51           |
| Parallel | Ultra4x    | 1200       | 56           |
| Parallel | Ultra8x    | 2400       | 58           |
| Others   | GPT-5      | 488        | 38           |
| Others   | Anthropic  | 5194       | 7            |
| Others   | Exa        | 402        | 14           |
| Others   | Perplexity | 709        | 6            |

CPM: USD per 1,000 requests.
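One way to read a cost-vs-accuracy table like this is to ask which entry delivers the best accuracy within a fixed per-query budget. A small sketch over the numbers above (the selection helper is illustrative, not part of any benchmark tooling):

```python
# (cost in CPM, accuracy %) for each entry in the BrowseComp table above.
BROWSECOMP = {
    "Ultra":      (300,  45),
    "Ultra2x":    (600,  51),
    "Ultra4x":    (1200, 56),
    "Ultra8x":    (2400, 58),
    "GPT-5":      (488,  38),
    "Anthropic":  (5194, 7),
    "Exa":        (402,  14),
    "Perplexity": (709,  6),
}

def best_under_budget(max_cpm: float) -> str:
    """Highest-accuracy entry whose cost fits under a CPM budget
    (ties broken in favor of the cheaper entry)."""
    affordable = [
        (acc, -cpm, name)
        for name, (cpm, acc) in BROWSECOMP.items()
        if cpm <= max_cpm
    ]
    if not affordable:
        raise ValueError("budget is below the cheapest entry")
    return max(affordable)[2]
```

Under a $500-per-1K budget, for example, Ultra (45%) beats GPT-5 (38%) despite costing less.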

### About the benchmark

This [benchmark](https://openai.com/index/browsecomp/), created by OpenAI, contains 1,266 questions requiring multi-hop reasoning, creative search formulation, and synthesis of contextual clues across time periods. Results are reported on a random sample of 100 questions from this benchmark. Read the [blog](https://parallel.ai/blog/deep-research-benchmarks).

### Methodology

- **Dates**: All measurements were made between 08/11/2025 and 08/29/2025.
- **Configurations**: For all competitors, we report the highest numbers we were able to achieve across multiple configurations of their APIs. The exact configurations are below.
  - GPT-5: high reasoning, high search context, default verbosity
  - Exa: Exa Research Pro
  - Anthropic: Claude Opus 4.1
  - Perplexity: Sonar Deep Research, reasoning effort high

### Deep Research Bench (RACE)

| Series   | Model      | Cost (CPM) | Win Rate vs Reference (%) |
| -------- | ---------- | ---------- | ------------------------- |
| Parallel | Ultra      | 300        | 82                        |
| Parallel | Ultra2x    | 600        | 86                        |
| Parallel | Ultra4x    | 1200       | 92                        |
| Parallel | Ultra8x    | 2400       | 96                        |
| Others   | GPT-5      | 628        | 66                        |
| Others   | O3 Pro     | 4331       | 30                        |
| Others   | O3         | 605        | 26                        |
| Others   | Perplexity | 538        | 6                         |

CPM: USD per 1,000 requests.

### About the benchmark

This [benchmark](https://github.com/Ayanami0730/deep_research_bench) contains 100 expert-level research tasks designed by domain specialists across 22 fields, primarily Science & Technology, Business & Finance, and Software Development. It evaluates AI systems' ability to produce rigorous, long-form research reports on complex topics requiring cross-disciplinary synthesis. Results are reported from the subset of 50 English-language tasks in the benchmark. Read the [blog](https://parallel.ai/blog/deep-research-benchmarks).

### Methodology

- **Dates**: All measurements were made between 08/11/2025 and 08/29/2025.
- **Win Rate**: Calculated by comparing [RACE](https://github.com/Ayanami0730/deep_research_bench) scores in direct head-to-head evaluations against reference reports.
- **Configurations**: For all competitors, we report the highest numbers we were able to achieve across multiple configurations of their APIs. The exact GPT-5 configuration is high reasoning, high search context, and high verbosity.
- **Excluded API Results**: Exa Research Pro (0% win rate), Claude Opus 4.1 (0% win rate).

### WISER-Atomic

| Series   | Model              | Cost (CPM) | Accuracy (%) |
| -------- | ------------------ | ---------- | ------------ |
| Parallel | Core               | 25         | 77           |
| Parallel | Base               | 10         | 75           |
| Parallel | Lite               | 5          | 64           |
| Others   | o3                 | 45         | 69           |
| Others   | GPT-4.1 mini (low) | 25         | 63           |
| Others   | Gemini 2.5 Pro     | 36         | 56           |
| Others   | Sonar Pro (high)   | 16         | 64           |
| Others   | Sonar (low)        | 5          | 48           |

CPM: USD per 1,000 requests.

### About the benchmark

This benchmark, created by Parallel, contains 121 questions intended to reflect real-world web research queries across a variety of domains. Read our blog [here](https://parallel.ai/blog/parallel-task-api).

### Steps of reasoning

- 50% Multi-Hop questions
- 50% Single-Hop questions

### Distribution

- 40% Financial Research
- 20% Sales Research
- 20% Recruitment
- 20% Miscellaneous

![Company Logo](https://parallel.ai/parallel-logo-540.png)

Contact

- [hello@parallel.ai](mailto:hello@parallel.ai)

Products

- [Search API](https://parallel.ai/products/search)
- [Extract API](https://docs.parallel.ai/extract/extract-quickstart)
- [Task API](https://docs.parallel.ai/task-api/task-quickstart)
- [FindAll API](https://docs.parallel.ai/findall-api/findall-quickstart)
- [Chat API](https://docs.parallel.ai/chat-api/chat-quickstart)
- [Monitor API](https://platform.parallel.ai/play/monitor)

Resources

- [About](https://parallel.ai/about)
- [Pricing](https://parallel.ai/pricing)
- [Docs](https://docs.parallel.ai)
- [Status](https://status.parallel.ai/)
- [Blog](https://parallel.ai/blog)
- [Changelog](https://docs.parallel.ai/resources/changelog)
- [Careers](https://jobs.ashbyhq.com/parallel)

Info

- [Terms](https://parallel.ai/terms-of-service)
- [Privacy](https://parallel.ai/privacy-policy)
- [Trust Center](https://trust.parallel.ai/)

![SOC 2 Compliant](https://parallel.ai/soc2.svg)

[LinkedIn](https://www.linkedin.com/company/parallel-web/about/) | [Twitter](https://x.com/p0)

Parallel Web Systems Inc. 2025