A Chrome extension that analyzes online news articles to surface framing patterns, source attribution, language signals, and alternative coverage perspectives.
Built as a product-focused case study demonstrating UX strategy, information architecture, multi-stage AI pipeline design, and decision-making under technical constraints.
Read Between activates on any news article page and provides structured analysis across seven key dimensions.
Rather than labeling content as “biased,” the tool presents analytical signals in a neutral, structured format that supports informed interpretation.
The extension runs analysis in sequential stages, with local processing running in parallel where it doesn’t depend on AI output:
| Stage | Trigger | Output |
|---|---|---|
| DOM Parse | On page load (instant) | Headline, publication, date, author, article text |
| Preprocessing | Before any AI call | Boilerplate removal, deduplication, whitespace normalisation |
| Stage 1 — AI | Blocks UI until complete | Reported points, missing context summary |
| Local Regex | Parallel with Stage 1 | Sources & attribution, tone language signals |
| Stage 2 — AI | Background after Stage 1 | Narrative structure summary, notable rhetorical choices |
| Stage 3 — AI + Web | Background after Stage 1 | Similar coverage from other publishers |
Stage 1 returns structured JSON validated against a strict schema enforced at the API level. Stages 2 and 3 run asynchronously and populate their cards as they resolve, with no user interaction required.
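The staging logic above can be sketched as follows. This is an illustrative orchestration, not the extension's actual code: `runPipeline`, the card IDs, and the stage function signatures are all assumed names.

```typescript
// Sketch of the staged pipeline: local regex renders immediately,
// Stage 1 blocks the initial AI card, Stages 2 and 3 fill in later.
// All names here are hypothetical.
type Card = { id: string; data: unknown };

async function runPipeline(
  articleText: string,
  render: (card: Card) => void,
  stages: {
    stage1: (text: string) => Promise<unknown>;
    localRegex: (text: string) => unknown;
    stage2: (text: string, s1: unknown) => Promise<unknown>;
    stage3: (text: string, s1: unknown) => Promise<unknown>;
  }
): Promise<void> {
  // Local regex has no AI dependency, so its card renders at once.
  render({ id: "sources", data: stages.localRegex(articleText) });

  // Stage 1 blocks the initial UI until it resolves.
  const s1 = await stages.stage1(articleText);
  render({ id: "summary", data: s1 });

  // Stages 2 and 3 populate their cards in the background.
  void stages.stage2(articleText, s1).then((d) => render({ id: "framing", data: d }));
  void stages.stage3(articleText, s1).then((d) => render({ id: "coverage", data: d }));
}
```

The key design point is that the user sees locally computed cards instantly, one AI round-trip gates the first summary, and everything else streams in behind it.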
Up to 4 neutral, fact-focused bullet points summarising what the article actually claims. Produced by Stage 1 (GPT-4.1). Falls back to regex sentence scoring on API failure.
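A local fallback of this kind might score sentences roughly as below. The scoring weights and heuristics here are illustrative assumptions, not the extension's actual rules.

```typescript
// Hypothetical fallback: if the Stage 1 API call fails, score sentences
// locally and keep the top four as summary points.
function fallbackSummary(text: string, max = 4): string[] {
  const sentences = text
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.split(/\s+/).length >= 8); // skip short fragments

  const scored = sentences.map((s, i) => {
    let score = 0;
    if (i < 3) score += 2;        // lead sentences usually carry the facts
    if (/\d/.test(s)) score += 1; // digits suggest concrete claims
    if (/"/.test(s)) score -= 1;  // prefer reporting over quoted speech
    return { s, i, score };
  });

  return scored
    .sort((a, b) => b.score - a.score || a.i - b.i)
    .slice(0, max)
    .sort((a, b) => a.i - b.i) // restore article order for readability
    .map((x) => x.s);
}
```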
Up to 5 named speakers matched to direct quotes using a paragraph-anchored regex algorithm. No AI call — entirely local processing via sourceFinder.ts. Handles:

- direct attribution (*Name said*)
- intro attribution (*Name, description, said*)
- post-quote attribution (*"..." Name said*)
- pronoun resolution (*she said*, resolved by looking back to the previous paragraph's named speaker)

Days of the week, months, countries, and generic nouns are blocked from matching as names.
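The direct-attribution case and the name blocklist might reduce to something like the following. This is a simplified sketch: the real sourceFinder.ts handles the other patterns and much larger blocklists, and the names here are illustrative.

```typescript
// Simplified sketch of one attribution pattern from the matcher.
// The blocklist below is a tiny placeholder; the real lists cover
// all days, months, countries, and common generic nouns.
const BLOCKED = new Set([
  "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday",
  "January", "February", "March", // remaining months, countries, nouns omitted
]);

interface Attribution { speaker: string; quote: string }

function findAttributions(paragraph: string): Attribution[] {
  const results: Attribution[] = [];
  // Direct attribution: Name said, "quote"
  const direct = /\b([A-Z][a-z]+(?: [A-Z][a-z]+)?) said[,:]?\s+["\u201C]([^"\u201D]+)["\u201D]/g;
  for (const m of paragraph.matchAll(direct)) {
    const speaker = m[1];
    // Reject capitalised non-names (days, months, etc.).
    if (!BLOCKED.has(speaker.split(" ")[0])) {
      results.push({ speaker, quote: m[2] });
    }
  }
  return results;
}
```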
Contextual gaps the article does not address — identified by Stage 1. The model is constrained to surface only what is absent from within the text, with no outside facts introduced.
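A strict Stage 1 output schema could take the shape below. The field names are assumptions for illustration; with OpenAI structured outputs, an object like this would be supplied as the `json_schema` schema with `strict: true`.

```typescript
// Hypothetical shape of the Stage 1 strict schema. Field names are
// assumed; the 4-point cap on summary bullets would be enforced in
// the prompt, since strict schemas support a limited keyword subset.
const stage1Schema = {
  type: "object",
  additionalProperties: false,
  required: ["reportedPoints", "missingContext"],
  properties: {
    reportedPoints: {
      type: "array",
      items: { type: "string" },
      description: "Neutral, fact-focused bullet points",
    },
    missingContext: {
      type: "array",
      items: { type: "string" },
      description: "Gaps identified only from what the text omits",
    },
  },
} as const;
```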
Narrative framing analysis produced by Stage 2 in the background. Evaluates source balance, headline vs. body emphasis, and structural framing choices.
Three word-level regex scans against curated dictionaries (30+ emotional words, 26+ moral framing words, 24+ certainty words). Notable rhetorical observations are added by Stage 2.
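One of those scans might look like this. The dictionary here is a tiny placeholder for the curated 30+/26+/24+ word lists.

```typescript
// Sketch of a single word-level dictionary scan (placeholder words).
const EMOTIONAL = ["outrage", "devastating", "slams", "chaos"];

function countHits(text: string, dictionary: string[]): Record<string, number> {
  const hits: Record<string, number> = {};
  for (const word of dictionary) {
    // Word boundaries keep this word-level: "slams" won't match inside "islams".
    const re = new RegExp(`\\b${word}\\b`, "gi");
    const n = (text.match(re) ?? []).length;
    if (n > 0) hits[word] = n;
  }
  return hits;
}
```

Running the same function against each of the three dictionaries yields the emotional, moral-framing, and certainty signal counts.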
Author name, publisher, and author page URL extracted entirely from DOM meta tags and element selectors. No AI call.
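The fallback order of that extraction can be sketched as below. In the extension this would read `document.querySelector('meta[...]')`; here the meta tags are passed in as plain data so the priority chain is visible, and the specific tag names tried are assumptions based on common conventions.

```typescript
// Sketch of local byline extraction from meta tags (no AI call).
// Tag names are common conventions, tried in order; the real
// selectors cover more cases, including element-level fallbacks.
interface Meta { name: string; content: string }

function extractByline(metas: Meta[]): { author?: string; publisher?: string } {
  const get = (...keys: string[]) =>
    metas.find((m) => keys.includes(m.name))?.content;
  return {
    author: get("author", "article:author", "parsely-author"),
    publisher: get("og:site_name", "application-name"),
  };
}
```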
Up to 3 articles on the same story from different publishers, found via live web search in Stage 3. One result per publisher, with URL integrity enforced against web search annotations.
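The "one result per publisher" constraint amounts to deduplicating by hostname, along these lines (a sketch with assumed names; the URL-integrity check against search annotations is separate).

```typescript
// Sketch: keep at most three results, one per publisher hostname.
interface Result { title: string; url: string }

function dedupeByPublisher(results: Result[], max = 3): Result[] {
  const seen = new Set<string>();
  const out: Result[] = [];
  for (const r of results) {
    let host: string;
    try {
      host = new URL(r.url).hostname.replace(/^www\./, "");
    } catch {
      continue; // drop results whose URL fails to parse
    }
    if (seen.has(host)) continue; // one article per publisher
    seen.add(host);
    out.push(r);
    if (out.length === max) break;
  }
  return out;
}
```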
The extension detects the accessibility level of the article content before analysis:
| State | Condition | UI Behaviour |
|---|---|---|
| `full_access` | Full article text available | All 7 cards rendered |
| `partial_preview` | Truncated content, soft paywall | All cards rendered, with content warnings where applicable |
| `paywalled` | Hard paywall detected, < 200 words available | Up to 3 summary points + Find Similar Coverage + paywall notice |
Paywall detection uses a combination of word count thresholds and DOM signal scanning (overlay elements, subscribe-prompt text patterns).
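Combining those signals, the access-state decision might look like this. The thresholds and phrase list are illustrative, not the extension's exact values.

```typescript
// Sketch of the access-state decision. The 200-word hard-paywall
// threshold comes from the table above; the soft-paywall threshold
// and phrase patterns here are assumptions.
type AccessState = "full_access" | "partial_preview" | "paywalled";

const PAYWALL_PHRASES = [/subscribe to continue/i, /already a subscriber/i];

function detectAccessState(articleText: string, domText: string): AccessState {
  const words = articleText.trim().split(/\s+/).filter(Boolean).length;
  const hasPaywallSignal = PAYWALL_PHRASES.some((re) => re.test(domText));

  if (words < 200 && hasPaywallSignal) return "paywalled"; // hard paywall
  if (hasPaywallSignal || words < 500) return "partial_preview";
  return "full_access";
}
```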
| Layer | Technology |
|---|---|
| UI Framework | React 18 |
| Language | TypeScript |
| Build System | Webpack 5 |
| Extension Platform | Chrome Manifest V3 (Side Panel) |
| AI Provider | OpenAI GPT-4.1 |
| AI Features Used | Structured output, web_search tool (Stage 3) |
| Storage | Chrome Extension Storage API (local) |
| Styling | CSS Modules |
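The side panel surface in the table above corresponds to a Manifest V3 declaration along these lines (the file paths and version are illustrative):

```json
{
  "manifest_version": 3,
  "name": "Read Between",
  "version": "1.0.0",
  "permissions": ["sidePanel", "storage"],
  "side_panel": {
    "default_path": "sidepanel.html"
  }
}
```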
Key implementation details:

- The OpenAI API key is loaded from a `.env` file at build time (see Installation below).
- The extension accounts for and communicates degraded states (such as AI call failures and paywalled pages) rather than failing silently.

Designing for these states was prioritised to maintain product trust across the full range of real-world browsing scenarios.
1. Clone the repository
2. Install dependencies: `npm install`
3. Create a `.env` file in the project root and add your OpenAI API key: `OPENAI_API_KEY=your_key_here`
4. Build the extension: `npm run build`
5. Open Chrome and go to `chrome://extensions`
6. Enable Developer Mode (toggle in the top-right corner)
7. Click Load Unpacked and select the `/build` directory
Article pages are detected via `og:type` meta tag validation.