
# Build a real-time fact checker with Parallel and Cerebras

This guide demonstrates how to build a complete fact-checking application that extracts verifiable claims from any text or URL and validates them against live web sources. By the end, you'll have a streaming fact checker with a polished UI that highlights claims directly in the content as they're verified in real-time.

Tags: Cookbook
Reading time: 5 min

Fact-checking is critical to a wide range of business and academic fields. With today’s latest AI models, chips, and programmable web search, developers can quickly and easily add high-quality, ultra-fast fact-checking to virtually any workflow or application.

![Content Fact Checker by Cerebras and Parallel](https://cdn.sanity.io/images/5hzduz3y/production/83bec8faa28f38715023f012bd18f5b39afb7707-1500x1080.gif)

## Key features

- **Two Input Modes**: Paste text directly or fetch content from any URL
- **Claim Extraction**: LLM-powered identification of verifiable factual claims
- **Web Verification**: Each claim is searched and validated against live web sources
- **Real-Time Streaming**: Results stream to the UI as claims are extracted and verified
- **Source Citations**: Every verdict includes linked source references with excerpts
- **Visual Highlighting**: Claims are highlighted directly in the content with color-coded verdicts

## Architecture

The fact checker implements a multi-phase pipeline:

1. Content Ingestion: Accept text input or extract content from a URL

2. Claim Extraction: LLM identifies verifiable factual claims with exact source spans

3. Parallel Verification: Each claim is searched and analyzed concurrently

4. Real-Time Streaming: Results flow to the frontend via Server-Sent Events

This architecture enables sub-second feedback as claims are identified and surfaced in the UI, while verification happens in parallel to minimize total latency.

## Technology stack

- [Parallel TypeScript SDK](https://www.npmjs.com/package/parallel-web) for the Search and Extract APIs
- [Cerebras](https://inference-docs.cerebras.ai/?utm_source=DevX&utm_campaign=parallel) for ultra-fast inference (gpt-oss-120B, up to 3,000 tokens/second)
- [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) for LLM orchestration and streaming
- [Cloudflare Workers](https://workers.cloudflare.com/) for serverless deployment
- Pure HTML/JavaScript/CSS for the frontend
![A diagram outlining the technical architecture of the Parallel and Cerebras fact checker](https://cdn.sanity.io/images/5hzduz3y/production/41a1206b19f725a5e707fa9f9462b741ee5fc085-1992x2991.jpg)

## Why this architecture

**Parallel's Search API for efficient web search**

Traditional fact-checking involves multiple steps: searching for relevant pages, scraping each page, extracting relevant content, and then analyzing it. Parallel's Search API collapses this into a single call that returns the most relevant content from multiple sources, already formatted for LLM consumption.

### A Parallel Search API request

```typescript
const searchResult = await parallel.beta.search({
  objective: `Find reliable sources to verify or refute this claim: "${claim}"`,
  search_queries: [claim],
  processor: "base",
  max_results: 5,
  max_chars_per_result: 2000,
});

This returns structured results with titles, URLs, and relevant excerpts, ready to feed directly into an LLM for claim analysis. In a single call, the app can find mentions of a claim across multiple web sources and pass the surrounding excerpts to an LLM for review. With that context, the system can flag uncertainty when reputable sources disagree on the underlying claim, or mark a claim as incorrect when a primary source contradicts it even though other sources lend it some support.

### Cerebras for fast inference

Real-time fact checking places unusually strict demands on inference latency. Claims must be identified, contextualized, and evaluated quickly enough that users can see verification results appear as they read.

Cerebras powers this experience by delivering extremely high-throughput, low-latency inference. **gpt-oss-120B**, hosted on Cerebras systems, is today's leading open-weight model developed by a U.S. company, widely used for its strong reasoning and coding capabilities. Based on benchmarks from Artificial Analysis, most vendors run gpt-oss-120B in the ~100–300 tokens/second range, reflecting typical NVIDIA H100 performance; Cerebras substantially exceeds this range at ~3,000 tokens/second.

Output Speed as Shown on Artificial Analysis

![Output Speed: gpt-oss-120B(high)](https://cdn.sanity.io/images/5hzduz3y/production/f9ff292736999def451fe4a0a19acac096ef7532-6000x1776.png)

In this fact-checking UX, Cerebras first highlights the claims in a given text almost immediately after the content is extracted, then analyzes the web search results for each claim. The system can render a decision on each claim almost as soon as its evidence arrives, resulting in a responsive, interactive user experience.

## Getting started

To start, you'll need API keys from:

- [Parallel](https://platform.parallel.ai/) – for the Search and Extract APIs
- [Cerebras](https://cloud.cerebras.ai/?utm_source=DevX&utm_campaign=parallel) – for LLM inference
### Clone and install the cookbook

```bash
# Clone and install
git clone https://github.com/parallel-web/parallel-cookbook
cd parallel-cookbook/typescript-recipes/parallel-fact-checker-cerebras
npm install

# Configure API keys (create .dev.vars file)
echo "PARALLEL_API_KEY=your_key_here" >> .dev.vars
echo "CEREBRAS_API_KEY=your_key_here" >> .dev.vars

# Run locally
npm run dev
```

## Implementation

This section walks through the key parts of the implementation. The full source is in `worker.ts`.

**1. Extracting Content from URLs**

When a user provides a URL, we use Parallel's Extract API to fetch and parse the page:

### A Parallel Extract API request

```typescript
const extractResult = await parallel.beta.extract({
  urls: [url],
  objective: "Extract the main article content and key claims",
  full_content: true,
});
```

The Extract API handles fetching, parsing, and cleaning—returning just the content, not the HTML boilerplate.

**2. Identifying Claims**

The LLM extracts verifiable claims using a structured output format. As a small prompt-tuning improvement, we ask for **exact quotes** from the source text so we can highlight them in the UI.

### Identifying claims

```typescript
const factsResult = streamText({
  model: cerebras("gpt-oss-120b"),
  system: `Extract verifiable claims. Output format:
FACT: [EXACT QUOTE from text] ||| [claim to verify]
The quote before ||| must match the source exactly (for highlighting).`,
  prompt: content,
});
```

As the LLM streams its response, we parse each `FACT:` line and immediately send it to the frontend—claims appear in the UI as they're discovered.
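That streaming parse step can be sketched as follows; `parseFactLines` and the `ExtractedFact` shape are illustrative names, not from the cookbook source:

```typescript
// Hypothetical incremental parser: feed it the accumulated stream text and
// it returns every completed "FACT:" line, plus the unfinished remainder.
interface ExtractedFact {
  quote: string; // exact span from the source text, used for highlighting
  claim: string; // normalized claim to verify
}

function parseFactLines(buffer: string): { facts: ExtractedFact[]; rest: string } {
  const facts: ExtractedFact[] = [];
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // last line may still be streaming; keep it
  for (const line of lines) {
    if (!line.startsWith("FACT:")) continue;
    const [quote, claim] = line.slice(5).split("|||").map((s) => s.trim());
    if (quote && claim) facts.push({ quote, claim });
  }
  return { facts, rest };
}
```

Each call yields only fully delimited claims, so a partially streamed `FACT:` line is held back until the next chunk completes it.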
**3. Searching for Evidence**

Each claim is verified using Parallel's Search API. One call returns relevant excerpts from multiple sources:

### Using Parallel Search API to fetch sources

```typescript
const searchResult = await parallel.beta.search({
  objective: `Find reliable sources to verify or refute this claim: "${fact.text}"`,
  search_queries: [fact.text],
  processor: "base",
  max_results: 5,
  max_chars_per_result: 2000,
});
```

The Search API is designed for LLM consumption: it returns structured excerpts, not raw HTML, saving you from building a scraping pipeline.

**4. Rendering Verdicts**

The LLM analyzes the search results and returns a verdict:

### A system prompt for responding with a verdict

```typescript
const verdict = await streamText({
  model: cerebras("gpt-oss-120b"),
  system: `Analyze evidence and respond with:
VERDICT: [VERIFIED/FALSE/UNSURE]
EXPLANATION: [1-2 sentences]`,
  prompt: `Claim: "${claim}"\n\nEvidence: ${JSON.stringify(searchResults)}`,
});
```

We parse the verdict and send it to the frontend along with source citations.
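A minimal sketch of that parsing step (the `parseVerdict` helper is illustrative, not from the cookbook source):

```typescript
// Hypothetical parser for the "VERDICT:/EXPLANATION:" format above.
// Defaults to UNSURE if the model's output doesn't match the format.
type Verdict = "VERIFIED" | "FALSE" | "UNSURE";

function parseVerdict(text: string): { status: Verdict; explanation: string } {
  const match = /VERDICT:\s*(VERIFIED|FALSE|UNSURE)/.exec(text);
  const status = (match?.[1] as Verdict | undefined) ?? "UNSURE";
  const explanation = /EXPLANATION:\s*([\s\S]+)/.exec(text)?.[1]?.trim() ?? "";
  return { status, explanation };
}
```

Falling back to `UNSURE` on malformed output keeps a single bad generation from blocking the rest of the stream.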

**5. Streaming with SSE**

All results stream to the browser using Server-Sent Events. The helper is as follows:

### Server-sent events streaming

```typescript
const encoder = new TextEncoder();

function sendSSE(controller: ReadableStreamDefaultController, data: object) {
  controller.enqueue(encoder.encode(`data: ${JSON.stringify(data)}\n\n`));
}

// Usage
sendSSE(controller, { type: "fact_extracted", fact });
sendSSE(controller, { type: "fact_verdict", factId, status, explanation, references });
```

The frontend listens for these events and updates the UI in real-time.
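On the frontend, that listening loop can be sketched like this; the helper names and the POST endpoint path are assumptions, not the cookbook's actual code:

```typescript
// Split a raw SSE chunk into parsed "data:" payloads, keeping any
// partially received trailing event for the next read.
function parseSSEChunk(buffer: string): { events: any[]; rest: string } {
  const parts = buffer.split("\n\n");
  const rest = parts.pop() ?? "";
  const events = parts
    .filter((p) => p.startsWith("data: "))
    .map((p) => JSON.parse(p.slice(6)));
  return { events, rest };
}

// Wiring it to fetch + ReadableStream (endpoint path is hypothetical):
async function listenForResults(url: string, body: string, onEvent: (e: any) => void) {
  const res = await fetch(url, { method: "POST", body });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const parsed = parseSSEChunk(buffer);
    buffer = parsed.rest;
    parsed.events.forEach(onEvent);
  }
}
```

Reading the response body manually (rather than using `EventSource`) is what allows the request to be a POST carrying the user's content.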
**6. Concurrent Verification**

Claims are verified concurrently to improve the user experience. With concurrency, several claims can be verified within the latency window of a single claim.

### Concurrent claims

```typescript
await Promise.all(
  claims.map(claim => verifyFact(claim, parallel, cerebras, controller))
);
```

For example, with 10 claims, this completes in ~3-5 seconds instead of 30+ seconds sequentially.
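If you need to cap how many verifications run at once (for example, to stay within API rate limits), a small worker pool does the job. This `mapWithLimit` helper is an illustrative sketch, not part of the cookbook:

```typescript
// Run `fn` over `items` with at most `limit` promises in flight,
// preserving the order of results.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

Swapping `Promise.all(claims.map(...))` for `mapWithLimit(claims, 5, ...)` keeps the streaming behavior while bounding concurrent API calls.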

## SSE event reference

**phase**: Processing phase changed (extracting, verifying)

**content_chunk**: Streamed content chunk (URL mode)

**content_complete**: Formatted content is ready

**fact_extracted**: New claim identified (highlighted in grey in the UI)

**fact_status**: Claim status update (e.g., "searching")

**fact_verdict**: Final verdict with explanation and sources (highlighted red, orange, or green)

**complete**: All processing finished

**error**: Error occurred
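For type-safe handling on the frontend, the events above can be modeled as a discriminated union; the field names beyond `type` are assumptions inferred from the snippets in this guide:

```typescript
// Hypothetical event types mirroring the SSE reference above.
type FactCheckEvent =
  | { type: "phase"; phase: "extracting" | "verifying" }
  | { type: "content_chunk"; chunk: string }
  | { type: "content_complete"; content: string }
  | { type: "fact_extracted"; fact: { id: string; quote: string; claim: string } }
  | { type: "fact_status"; factId: string; status: string }
  | {
      type: "fact_verdict";
      factId: string;
      status: "VERIFIED" | "FALSE" | "UNSURE";
      explanation: string;
      references: { url: string; title: string }[];
    }
  | { type: "complete" }
  | { type: "error"; message: string };

// A narrow helper: does this event end the stream?
function isTerminal(e: FactCheckEvent): boolean {
  return e.type === "complete" || e.type === "error";
}
```

Switching on `e.type` then gives exhaustive, compiler-checked handling of every event the worker can emit.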

## Resources

- [Live Demo](https://oss.parallel.ai/agents/cerebras-fact-checker)
- [Source Code](https://github.com/parallel-web/parallel-cookbook/tree/main/typescript-recipes/parallel-fact-checker-cerebras)
- [Parallel API Documentation](https://docs.parallel.ai/)
- [Parallel Search API](https://docs.parallel.ai/search-api)
- [Cerebras Documentation](http://inference-docs.cerebras.ai/introduction?utm_source=DevX&utm_campaign=parallel)
- [Vercel AI SDK](https://ai-sdk.dev/)




By Parallel

January 8, 2026
