# Pay as you go
Run up to 20,000 requests for free
|                   | Task API                                                    | Search API                           | Chat API                     |
| ----------------- | ----------------------------------------------------------- | ------------------------------------ | ---------------------------- |
| Inputs            | Existing structured data, Question                           | Search objective, Queries            | Question                     |
| Outputs           | Deep research, Structured enrichments                        | Ranked URLs, Excerpts                | Free text, Structured JSON   |
| Best for          | Deep research, database enrichment, or workflow automation   | Web search tool calls for AI agents  | Interactive chat apps and UX |
| Latency           | 5s - 30min, asynchronous                                     | <3s, synchronous                     | <5s, synchronous             |
| Basis             | Citations, reasoning, confidence, excerpts                   | —                                    | Citations                    |
| Rate limits       | 2,000 requests / min                                         | 600 requests / min                   | 300 requests / min           |
| Security          | SOC2                                                         | SOC2                                 | SOC2                         |
| Price per request | $0.005 - $2.40                                               | $0.004 - $0.009                      | $0.005                       |
Task API
Deep web research with structured outputs, optimized for quality & freshness
Search API
Ranked web URLs with long, relevant content
Chat API
Fast, web-researched LLM completions
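For orientation, here is a minimal sketch of what a Task API request might look like. This page does not document the API shape, so the endpoint path, header name, and request fields below are illustrative assumptions only; consult the API reference for the actual contract.

```python
import os
import requests

# Hypothetical endpoint and field names -- assumptions for illustration,
# not taken from this page.
API_URL = "https://api.parallel.ai/v1/tasks/runs"

payload = {
    # Existing structured data plus the question to research (see "Inputs" above).
    "input": {"company": "Acme Corp", "question": "Who is the current CEO?"},
    # Processor tier controls cost, latency, and output depth (see "Pricing breakdown").
    "processor": "core",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"x-api-key": os.environ["PARALLEL_API_KEY"]},  # assumed header name
    timeout=30,
)
response.raise_for_status()
# The Task API is asynchronous (5s - 30min), so a real integration would
# poll or subscribe for the finished result rather than read it here.
print(response.json())
```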
## Enterprise
Built for scalable, reliable enterprise workloads
- Volume discounts
- Custom data retention agreements
- Data Protection Agreement
- Custom rate limits
- Early access to new products
- Dedicated onboarding and technical support
## Allocate compute based on task complexity
Parallel lets you easily flex compute based on the complexity of your task
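As a concrete illustration of flexing compute, the sketch below maps a rough task-complexity label to a processor tier. The tier names and per-1K prices come from the pricing breakdown that follows; the complexity labels, thresholds, and the mapping itself are illustrative assumptions, not a recommendation from Parallel.

```python
# Example only: a naive mapping from task complexity to a Task API processor tier.
# Prices per 1K requests are taken from the "Pricing breakdown" table below;
# the complexity labels are illustrative assumptions.
PROCESSOR_BY_COMPLEXITY = {
    "lookup": "lite",       # basic information retrieval, $5 / 1K requests
    "simple": "base",       # simple web research, $10 / 1K requests
    "complex": "core",      # complex web research, $25 / 1K requests
    "exploratory": "pro",   # exploratory web research, $100 / 1K requests
    "deep": "ultra",        # extensive deep research, $300 / 1K requests
}

def pick_processor(complexity: str) -> str:
    """Return a processor tier for a given complexity label (illustrative default: core)."""
    return PROCESSOR_BY_COMPLEXITY.get(complexity, "core")
```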
## Pricing breakdown
### Task API
| Processor | Cost (per 1K requests) | Best for                               | Latency    | Max Fields | Basis                                      |
| --------- | ---------------------- | -------------------------------------- | ---------- | ---------- | ------------------------------------------ |
| Lite      | $5                     | Basic information retrieval            | 5s-60s     | ~2         | Citations, Reasoning                       |
| Base      | $10                    | Simple web research                    | 15s-100s   | ~5         | Citations, Reasoning                       |
| Core      | $25                    | Complex web research                   | 1min-5min  | ~10        | Citations, Reasoning, Excerpts, Confidence |
| Pro       | $100                   | Exploratory web research               | 3min-9min  | ~20        | Citations, Reasoning, Excerpts, Confidence |
| Ultra     | $300                   | Extensive deep research                | 5min-25min | ~20        | Citations, Reasoning, Excerpts, Confidence |
| Ultra2x   | $600                   | Advanced deep research with 2x compute | 5min-25min | ~25        | Citations, Reasoning, Excerpts, Confidence |
| Ultra4x   | $1,200                 | Advanced deep research with 4x compute | 8min-30min | ~25        | Citations, Reasoning, Excerpts, Confidence |
| Ultra8x   | $2,400                 | Advanced deep research with 8x compute | 8min-30min | ~25        | Citations, Reasoning, Excerpts, Confidence |
### Search API
| Processor | Cost (per 1K requests) | Best for           | Latency |
| --------- | ---------------------- | ------------------ | ------- |
| Base      | $4                     | Low latency search | 1s-3s   |
| Pro       | $9                     | Fresh search       | 45s-70s |
### Chat API
| Processor | Cost (per 1K requests) | Best for                                | Latency | Basis     |
| --------- | ---------------------- | --------------------------------------- | ------- | --------- |
| Speed     | $5                     | Low-latency, web-based chat completions | <5s     | Citations |
## Scale with predictable cost
Parallel APIs are priced per request, not per token. You always know the exact cost of a query before you run it.
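Because pricing is per request, the cost of a batch can be computed up front from the per-1K prices in the breakdown above. A minimal sketch:

```python
# Per-1K-request prices (USD) from the pricing breakdown on this page.
PRICE_PER_1K = {
    "task": {"lite": 5, "base": 10, "core": 25, "pro": 100, "ultra": 300,
             "ultra2x": 600, "ultra4x": 1_200, "ultra8x": 2_400},
    "search": {"base": 4, "pro": 9},
    "chat": {"speed": 5},
}

def estimate_cost(api: str, processor: str, num_requests: int) -> float:
    """Exact spend in USD for a batch, known before any request is sent."""
    return PRICE_PER_1K[api][processor] * num_requests / 1_000

# e.g. 50,000 Task API requests on the Core processor:
print(estimate_cost("task", "core", 50_000))  # -> 1250.0
```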
[Get up to $5K in free credits]
Qualified startups can receive up to $5K in free credits from Parallel
[Use pre-committed spend on Parallel]
Use pre-committed AWS spend on Parallel via the AWS marketplace
## Highest quality at every price point
State of the art across several benchmarks
### Browsecomp (LP)
| Category   | Accuracy (%) |
| ---------- | ------------ |
| Ultra8x    | 58           |
| Ultra      | 45           |
| Pro        | 34           |
| GPT-5      | 41           |
| Exa        | 14           |
| Anthropic  | 7            |
| Perplexity | 6            |
### About the benchmark
This [benchmark](https://openai.com/index/browsecomp/), created by OpenAI, contains 1,266 questions requiring multi-hop reasoning, creative search formulation, and synthesis of contextual clues across time periods. Results are reported on a random sample of 100 questions from this benchmark. Learn more in our [blog](https://parallel.ai/blog/introducing-parallel).
### Methodology
- Dates: All measurements were made between 08/08/2025 and 08/13/2025.
- Configurations: For all competitors, we report the highest numbers we were able to achieve across multiple configurations of their APIs. The exact configurations are below.
  - GPT-5: high reasoning, high search context, default verbosity
  - Exa: Exa Research Pro
  - Anthropic: Claude Opus 4.1
  - Perplexity: Sonar Deep Research, reasoning effort high
### DeepResearch Bench (LP)
| Category | Win Rate (%) |
| -------- | ------------ |
| Ultra8x  | 82           |
| Ultra    | 74           |
| GPT-5    | 66           |
### About the benchmark
This [benchmark](https://github.com/Ayanami0730/deep_research_bench) contains 100 expert-level research tasks designed by domain specialists across 22 fields, primarily Science & Technology, Business & Finance, and Software Development. It evaluates AI systems' ability to produce rigorous, long-form research reports on complex topics requiring cross-disciplinary synthesis. Results are reported from the subset of 50 English-language tasks in the benchmark. Learn more in our [blog](https://parallel.ai/blog/introducing-parallel).
### Methodology
- Dates: All measurements were made between 08/08/2025 and 08/13/2025.
- Win Rate: Calculated by comparing [RACE](https://github.com/Ayanami0730/deep_research_bench) scores in direct head-to-head evaluations.
- Configurations: For all competitors, we report the highest numbers we were able to achieve across multiple configurations of their APIs. The exact GPT-5 configuration is high reasoning, high search context, and high verbosity.
- Excluded API Results: Exa Research Pro (0% win rate), Claude Opus 4.1 (0% win rate), and Perplexity Sonar Deep Research (6% win rate).
### Search MCP Benchmark (LP)
| Series   | Model                        | Cost (CPM) | Accuracy (%) |
| -------- | ---------------------------- | ---------- | ------------ |
| Parallel | GPT 4.1 w/ Prll Search MCP   | 21         | 74.9         |
| Parallel | o4 mini w/ Prll Search MCP   | 90         | 82.14        |
| Parallel | o3 w/ Prll Search MCP        | 192        | 80.61        |
| Parallel | sonnet 4 w/ Prll Search MCP  | 92         | 78.57        |
| Native   | GPT 4.1 w/ Native Search     | 27         | 70           |
| Native   | o4 mini w/ Native Search     | 190        | 77           |
| Native   | o3 w/ Native Search          | 351        | 79.08        |
| Native   | sonnet 4 w/ Native Search    | 122        | 68.83        |
| Exa      | GPT 4.1 w/ Exa Search MCP    | 40         | 58.67        |
| Exa      | o4 mini w/ Exa Search MCP    | 199        | 61.73        |
| Exa      | o3 w/ Exa Search MCP         | 342        | 56.12        |
| Exa      | sonnet 4 w/ Exa Search MCP   | 140        | 67.13        |
CPM: USD per 1,000 requests.
### About the benchmark
This benchmark, created by Parallel, blends WISER-Fresh and WISER-Atomic. WISER-Fresh is a set of 76 queries requiring the freshest data from the web, generated by Parallel with o3 pro. WISER-Atomic is a set of 120 hard real-world business queries, based on use cases from Parallel customers. Read our blog [here](https://parallel.ai/blog/search-api-benchmark).
### Distribution
40% WISER-Fresh
60% WISER-Atomic
### Evaluation
The Parallel Search API was evaluated by comparing three different web search solutions (Parallel MCP server, Exa MCP server/tool calling, LLM native web search) across four different LLMs (GPT 4.1, o4-mini, o3, Claude Sonnet 4).
### WISER-Atomic
| Series   | Model          | Cost (CPM) | Accuracy (%) |
| -------- | -------------- | ---------- | ------------ |
| Parallel | Core           | 25         | 77           |
| Parallel | Base           | 10         | 75           |
| Parallel | Lite           | 5          | 64           |
| Others   | o3             | 45         | 69           |
| Others   | 4.1 mini low   | 25         | 63           |
| Others   | gemini 2.5 pro | 36         | 56           |
| Others   | sonar pro high | 16         | 64           |
| Others   | sonar low      | 5          | 48           |
CPM: USD per 1,000 requests.
### About the benchmark
This benchmark, created by Parallel, contains 121 questions intended to reflect real-world web research queries across a variety of domains. Read our blog [here](https://parallel.ai/blog/parallel-task-api).
### Steps of reasoning
50% Multi-Hop questions
50% Single-Hop questions
### Distribution
40% Financial Research
20% Sales Research
20% Recruitment
20% Miscellaneous
### SimpleQA
| Series   | Model            | Cost (CPM) | Accuracy (%) |
| -------- | ---------------- | ---------- | ------------ |
| Parallel | Lite             | 5          | 92           |
| Parallel | Base             | 10         | 94           |
| Parallel | Core             | 25         | 94           |
| Others   | o3 high          | 56         | 92           |
| Others   | 4.1 mini high    | 30         | 88           |
| Others   | gemini 2.5 flash | 35         | 91           |
| Others   | sonar            | 8          | 81           |
| Others   | sonar pro        | 13         | 84           |
CPM: USD per 1,000 requests.
### About the benchmark
This benchmark contains 4,326 questions focused on short, fact-seeking queries across a variety of domains. Read our blog [here](https://parallel.ai/blog/parallel-task-api).
### Steps of reasoning
100% Single-Hop questions
### Distribution
36% Culture
20% Science and Technology
16% Politics
28% Miscellaneous