# A new deep research frontier on DeepSearchQA with the Task API Harness

Tags: Benchmarks
Reading time: 7 min

The Parallel Task API is the most powerful deep research agent on the market. It’s used in production agents by teams at Opendoor, Attio, Modal, Starbridge, Profound, and more.

Parallel’s [Processor](https://docs.parallel.ai/task-api/guides/choose-a-processor) architecture allows for flexibility and fine-tuned application of different compute budgets depending on the complexity of a research task. The “Ultra” range of Task API Processors is state-of-the-art on DeepSearchQA:

  - Parallel Ultra is 11% more accurate and up to 57% lower cost than the next best model, GPT-5.4.
  - With higher compute budgets, Parallel Ultra2x, Ultra4x, and Ultra8x continue to push the Pareto frontier of accuracy and cost, achieving 82% accuracy at the highest end.

This post details some of the techniques we’ve employed to achieve this state-of-the-art research quality.

## Results

We evaluated Parallel's Task API Processors against GPT 5.4, Opus 4-6, Gemini 3.1 Pro, Exa Search Deep Reasoning, and Perplexity Sonar Pro.

![](https://cdn.sanity.io/images/5hzduz3y/production/480f0d6a315c4628815ae9cb62476c7e02afd26e-4800x2700.jpg)
| Provider | Model | Cost (CPM) | Accuracy (%) |
| --- | --- | --- | --- |
| Parallel | Ultra 8x | $2,400 | 82 |
| Parallel | Ultra 4x | $1,200 | 81 |
| Parallel | Ultra 2x | $600 | 77 |
| Parallel | Ultra | $300 | 70 |
| OpenAI | GPT 5.4 with code execution | $701 | 63 |
| Google | Gemini 3.1 Pro with code execution | $707 | 62 |
| Anthropic | Opus 4-6 with PTC | $36,231* | 58 |
| Perplexity | Sonar Pro | $883 | 28 |
| Exa | Search Deep Reasoning | $15 | 18 |

_CPM: USD per 1,000 requests. Cost is shown on a log scale._

_*The cost of Opus 4-6 is higher than expected due to Anthropic’s potential billing issue, where prompt caching savings for PTC are not passed on to the user._

## About DeepSearchQA

DeepSearchQA is a 900-question evaluation from Google designed to test agents on multi-step information-seeking tasks across 17 fields of expertise. Each question is a causal chain: you can't answer the second part without resolving the first, and you can't resolve the first without searching the web, reading the results, and reasoning about what to search next.

A typical query might ask: _identify every researcher who co-authored papers with a specific professor at three different institutions over a decade, then determine which of those co-authors later joined a federal advisory committee._ Simple web retrieval won't cut it. The agent needs to plan a research strategy, execute multiple searches, cross-reference results across sources, and synthesize a precise answer with zero false positives.

Here, accuracy on DeepSearchQA means "fully correct": the response must be semantically identical to the ground-truth set. The agent has to find all correct answers while including none that are wrong.
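This grading rule can be pictured as strict set equality. The sketch below is a simplification (real grading is semantic; `exact_set_match` and its lowercase/strip normalization are hypothetical stand-ins for that equivalence check):

```python
def exact_set_match(predicted: set[str], ground_truth: set[str]) -> bool:
    """Fully correct = every ground-truth item found AND no extras.
    Real grading is semantic; normalization here is a crude stand-in."""
    norm = lambda s: {x.strip().lower() for x in s}
    return norm(predicted) == norm(ground_truth)

# One missing item or one false positive fails the whole question.
print(exact_set_match({"Alice Liu", "Bo Chen"}, {"alice liu", "bo chen"}))            # True
print(exact_set_match({"Alice Liu", "Bo Chen", "Eve Ng"}, {"alice liu", "bo chen"}))  # False
```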

_*We use DeepSearchQA instead of BrowseComp to ensure more reliable evaluation, as some models have begun to memorize portions of the BrowseComp dataset, potentially inflating performance._

## Inside the Task API Harness

![](https://cdn.sanity.io/images/5hzduz3y/production/af4801e37c89acb2b3dfa0002ef6ead13dea4c4c-900x600.gif)
Visualization of the Parallel Task API Harness


Our results stem from combining several techniques:

  - **Code Execution:** We enable programmatic tool use instead of relying purely on text-based interactions. This allows more robust state and context management, and prevents unnecessary context growth by keeping intermediate reasoning and tool outputs externalized.
  - **Aggressive prompt caching:** We cache repeated prompts and intermediate results wherever possible. This is critical for both latency and cost efficiency at scale.
  - **Budget-aware execution:** The system dynamically adapts its behavior to a target budget, allocating resources where they have the highest impact on quality. This ensures we consistently achieve the best possible performance for a given cost envelope.
  - **Context compaction:** As context grows, we proactively compress and distill it to retain only the most relevant information. This maintains model performance and avoids degradation from overly long contexts.
  - **Search and extraction infrastructure:** The Task API is built on Parallel’s proprietary Search and Extract APIs, which are designed from the ground up for agentic workloads, enabling higher recall and more precise retrieval across heterogeneous sources. The model therefore operates over clean, relevant inputs rather than noisy or redundant ones.

Together, these techniques allow us to scale reasoning depth and reliability while maintaining efficiency, leading to state-of-the-art results on DeepSearchQA.

### Code execution

Instead of having the model orchestrate tools through tool calling, we give it the ability to write and execute code.

The orchestrating model generates Python that calls research tools as ordinary functions. This code runs in a sandboxed interpreter. Only the final output of each code block re-enters the model's context. Intermediate data stays in the interpreter's variable state, not in the conversation history.
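Schematically, that loop might look like the sketch below. All names (`llm`, `interpreter`, `synthesize_answer`) are illustrative; this is not the published harness, just the shape of a code-executing agent loop:

```python
def run_research(llm, interpreter, question, max_steps=20):
    """Minimal code-executing agent loop (illustrative only).
    llm(messages) returns a Python code string; interpreter.run(code)
    executes it with tools injected as functions and returns only the
    block's final output as a short string."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        code = llm(messages)              # model writes Python, not tool calls
        result = interpreter.run(code)    # search/extract callable inside
        # Only the short result re-enters context; raw pages and search
        # hits stay behind in the interpreter's variable state.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "tool", "content": result})
        if result.startswith("FINAL:"):
            return result.removeprefix("FINAL:").strip()
    # Step budget exhausted: ask for a best-effort synthesis.
    return interpreter.run("synthesize_answer()")
```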

![](https://cdn.sanity.io/images/5hzduz3y/production/dfb06ffe55ea24e567f370fa1ce0b26be8b4d772-1920x1004.png)
Task API high-level architecture
Consider a query that requires comparing revenue numbers from 2020-2024. In a standard agent loop:

```
Step 1: search("company X revenue 2024") → [5 results in context]
Step 2: extract(url_1) → [full page content in context]
Step 3: extract(url_2) → [full page content in context]
Step 4: search("company X revenue 2023") → [5 more results in context]
...context grows with every step
```

Each step inflates the context window. By step 8, the model spends most of its capacity re-reading old results.

With our approach, the same task looks like this:

```python
# First, use one search step to discover the URL pattern.
results_2024 = search("Company X 2024 annual report")
report_2024 = results_2024[0].url
# e.g. https://investors.companyx.com/financials/annual-reports/2024-annual-report.pdf

# Infer a reusable template from the 2024 URL and a parse function parse_fn.
url_template = "https://investors.companyx.com/financials/annual-reports/{year}-annual-report.pdf"

years = [2022, 2023, 2024]
reports = {}

for year in years:
    url = url_template.format(year=year)
    reports[year] = parse_fn(extract(
        question=f"What was Company X's reported revenue in {year}? Return the value and supporting quote.",
        urls=[url],
    ))

return reports
```

Multiple searches, extractions, and analyses happen in a single execution step. The full content of the extracted pages (potentially tens of thousands of tokens) never enters the research agent's context; only the extracted revenue figures flow back.

This compounds. A 20-step research task that would fill a 128K context window under tool calling stays under 30K tokens because intermediate data lives in the interpreter, not the conversation.
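As a rough illustration with assumed (not measured) numbers, say each step returns ~6K tokens of raw results plus ~1K tokens of reasoning and summary:

```python
steps = 20
raw_tokens_per_step = 6_000       # full search results / page extracts
summary_tokens_per_step = 1_000   # reasoning + code + execution summary

# Tool calling: raw data and summaries both accumulate in the history.
tool_calling_context = steps * (raw_tokens_per_step + summary_tokens_per_step)

# Code execution: raw data stays in the interpreter, only summaries accumulate.
code_execution_context = steps * summary_tokens_per_step

print(tool_calling_context)    # 140000 — overflows a 128K window
print(code_execution_context)  # 20000 — well under 30K
```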

### Persistent state as working memory

Variables created in one code execution step survive to the next. When the model writes `findings["report"] = report_2024` in iteration 3, that variable is still accessible in iteration 7 when the model needs to cross-reference revenue against employee headcount.

This creates a separation that matters: the conversation history captures the model's high-level reasoning trajectory, while the interpreter's variable state captures the raw data it has gathered. The conversation can be compacted without losing granular data.
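The mechanics can be modeled in a few lines with Python's `exec` against a shared globals dict. This is a toy stand-in for the real interpreter; class and variable names are illustrative:

```python
class PersistentInterpreter:
    """Toy model: variables survive across run() calls because every
    code block executes against the same globals dict."""
    def __init__(self, tools=None):
        self.globals = dict(tools or {})  # injected tool functions live here
    def run(self, code):
        exec(code, self.globals)
        return self.globals.get("_out")   # only this re-enters model context

interp = PersistentInterpreter()
interp.run('findings = {"report": "FY2024 revenue: $3.1B"}')  # iteration 3
interp.run('_out = findings["report"]')                       # iteration 7: still there
```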

### Inside the sandbox

Running LLM-generated code in production requires strong isolation. Our interpreter is a sandboxed Python runtime built in Rust with no access to the network, filesystem, or operating system. The only way code interacts with the outside world is through explicitly injected functions: search, extract, and a handful of state-management utilities.
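Restricting `exec()` is not a real security boundary in CPython (the production sandbox is a separate Rust runtime), but the interface shape, an empty builtins table plus explicitly injected functions, can be sketched (all names hypothetical):

```python
def make_sandbox(search, extract):
    """Interface sketch ONLY: real isolation needs a separate runtime;
    a stripped-down exec() is not a CPython security boundary."""
    allowed = {
        "__builtins__": {"len": len, "range": range, "print": print},
        "search": search,    # the only doors to the outside world
        "extract": extract,
    }
    def run(code):
        exec(code, dict(allowed))  # no open(), no import, no os access
    return run

run = make_sandbox(search=lambda q: [], extract=lambda **kw: "")
run("print(len(search('revenue')))")  # injected functions are callable
```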

![](https://cdn.sanity.io/images/5hzduz3y/production/614565599ea2bfe5f962306c615fca92bc6e82a5-2080x1144.png)
Sandbox architecture

This boundary gives us two things. First, the model can write complex data processing logic (filtering, aggregation, string manipulation, conditional branching) without risk of side effects. Second, generated code simply cannot reach the network, filesystem, or operating system, so even a misbehaving program stays contained.

### Budget-aware execution

One of the attractive features of Parallel’s Task API is fixed, predictable pricing rather than per-token billing. We built our agent harness with budgets as a first-class concept. The budget isn’t just a fixed limit on the number of steps; a fixed step limit penalizes simple and complex queries equally. Instead, we track cumulative cost across iterations. The system monitors token spend across all LLM calls (both the orchestrating model and any sub-model invocations) and injects budget warnings when remaining spend drops below a threshold. When the budget is nearly exhausted, the model synthesizes its findings and produces a final answer with whatever it has gathered.

A simple factual query might terminate in 2 iterations and cost a few cents. A complex multi-source comparison might run for 15 iterations across a larger budget. The architecture adapts to the difficulty of the question rather than imposing a uniform ceiling. This is also how we offer the full range of Processors from Ultra ($300 CPM) through Ultra8x ($2,400 CPM): higher-tier Processors get a larger budget, which lets the agent pursue more research paths before synthesizing.
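A minimal sketch of that mechanism, with illustrative numbers, thresholds, and names:

```python
class BudgetTracker:
    """Tracks cumulative spend across all LLM calls and tells the harness
    when to inject a warning or force final synthesis (illustrative)."""
    def __init__(self, budget_usd, warn_fraction=0.2):
        self.budget = budget_usd
        self.spent = 0.0
        self.warn_fraction = warn_fraction

    def record(self, tokens, usd_per_token):
        self.spent += tokens * usd_per_token

    def status(self):
        remaining = self.budget - self.spent
        if remaining <= 0:
            return "synthesize"  # answer now with whatever has been gathered
        if remaining < self.budget * self.warn_fraction:
            return "warn"        # inject a budget warning into the prompt
        return "ok"

b = BudgetTracker(budget_usd=2.40)               # one Ultra8x request at $2,400 CPM
b.record(tokens=100_000, usd_per_token=0.00002)  # hypothetical $2.00 of spend
print(b.status())                                # "warn": under 20% of budget left
```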

### Context compaction

Even with code execution keeping intermediate data out of context, long research sessions accumulate history: the model's reasoning, code blocks, and execution summaries. When this history approaches context limits, we trigger compaction, a summarization pass that condenses earlier conversation turns while preserving key findings and the current research trajectory.

The persistent variable state in the interpreter is unaffected by compaction (it lives in the interpreter, not the conversation). This means the model can sustain research across many more iterations than its raw context window would suggest.
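Schematically, with hypothetical `summarize` and `token_count` helpers, compaction replaces older turns with a summary while leaving interpreter state alone:

```python
def maybe_compact(messages, summarize, token_count, limit=100_000, keep_recent=6):
    """If history nears the context limit, condense all but the most
    recent turns into one summary message. Interpreter variable state
    is untouched; it never lived in `messages` to begin with."""
    if token_count(messages) < limit:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)  # preserve key findings + current trajectory
    header = {"role": "system", "content": f"Summary of earlier research: {summary}"}
    return [header] + recent
```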

## Why we picked this architecture

Most deep research systems follow a familiar loop: an LLM generates a plan, calls a tool, reads the result, and repeats. This works for simple queries but degrades as research complexity grows. We evaluated several architectural patterns for complex research workflows, each with different trade-offs across context efficiency, granularity of information access, and adaptability.

| Architecture | Cost efficiency | Fine-grained research control | Adaptability | Reason |
| --- | --- | --- | --- | --- |
| Naive agent loop (LLM → tool → result → LLM → …) | ❌ | ❌ | ✅ | Maximally flexible, but context grows at every step. The model ends up spending too much capacity re-reading intermediate results. |
| Agent loop with context compression | ✅ | ❌ | 🟠 | Keeps context manageable, but important details are often compressed away. This makes it harder to revisit evidence or pursue subtle lines of inquiry. |
| Static plan + sub-agents | ✅ | 🟠 | ❌ | Works well for predictable workflows, but cannot adapt cleanly once the plan is set. |
| Agent loop with sub-agents | ✅ | 🟠 | 🟠 | Better runtime flexibility than static planning, but still suffers from coordination and memory fragmentation across agents. |
| Task API Harness (Parallel) | ✅ | ✅ | ✅ | Preserves detailed evidence access while keeping intermediate state out of the model context. It can adapt mid-run, branch when needed, and scale without bloating the prompt. |
### Try it

```python
import parallel

client = parallel.Client(api_key="your-api-key")

task = client.task_runs.create(
    objective="Identify every researcher who co-authored papers "
    "with Dr. Maria Chen at Stanford, MIT, and Caltech "
    "between 2010 and 2020, then determine which of those "
    "co-authors later joined the NIH Advisory Committee "
    "to the Director.",
    processor="ultra8x",
)

print(task.output)
```

## About the Parallel Task API

The Task API is a general-purpose web research agent API. Define what you need in natural language or structured JSON, and it handles research, synthesis, and structured output with citations and confidence levels. Processors range from Lite (basic lookups, $5/1K) through Ultra8x (the hardest deep research, $2,400/1K).

## About Parallel Web Systems

Parallel builds web infrastructure for AI. Our APIs, including Search, Extract, Task, FindAll, and Monitor, give AI agents structured, grounded access to the open web, powered by a rapidly growing proprietary index of the global internet.

Parallel turns human workflows that took days into agentic workflows that take seconds. Fortune 100s and leading frontier AI companies including Harvey, Manus, Starbridge, and Profound rely on Parallel for legal grounding, fact-checking, contract monitoring, and high-quality content generation.

Get started at [platform.parallel.ai](http://platform.parallel.ai).


By Parallel

April 7, 2026
