
Case Study

Stakeholder Lens — AI Roadmap Translator

Role: Designer & Builder
Stack: Claude API · Parallel Prompts · Vanilla JS · Vercel
Built: March 2026

The Problem

Every product manager writes the roadmap once and then rewrites it four times. The engineering version needs scope, sequencing, and open questions. Sales needs outcomes and objection-handling angles. Exec wants strategic alignment and risk flags. Customers want plain-language benefits and rough timing.

The underlying roadmap is the same. The translation work is manual, repetitive, and often inconsistent — each version drifts from the source in different ways. This is exactly the kind of structured, rule-based transformation that AI handles well.

What I Built

A single-page tool that takes one roadmap input and generates up to four audience-specific views in parallel. No login. No install. Paste your roadmap, add product context, select your audiences, and get tailored communication in seconds.

01. User pastes a roadmap in any format — bullets, tables, free text
02. User adds product context: name, target customer, stage, business priorities
03. User selects which audiences to generate for (Engineering, Sales, Exec, Customers)
04. Tool fires one Claude API call per audience in parallel using Promise.allSettled
05. Each view streams into its own card — colour-coded, copyable, immediately usable
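Steps 04 and 05 hinge on Promise.allSettled. A minimal sketch of the dispatch-and-collect logic, assuming a serverless proxy at `/api/translate` (the endpoint name, payload shape, and function names are illustrative, not the actual code):

```javascript
// Fire one request per selected audience, all at once.
async function generateViews(roadmap, context, audiences) {
  const calls = audiences.map((audience) =>
    fetch("/api/translate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ roadmap, context, audience }),
    }).then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    })
  );

  // allSettled waits for every call to finish, success or failure,
  // so one errored audience never blocks the others.
  const settled = await Promise.allSettled(calls);
  return toViewResults(audiences, settled);
}

// Pure mapper: pair each settled result back with its audience so the
// UI can render successful cards and an error state for failed ones.
function toViewResults(audiences, settled) {
  return settled.map((result, i) => ({
    audience: audiences[i],
    ok: result.status === "fulfilled",
    view: result.status === "fulfilled" ? result.value : null,
    error: result.status === "rejected" ? String(result.reason) : null,
  }));
}
```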

Prompt Architecture

Each audience gets its own prompt, not a single prompt generating all four. This was a deliberate choice: one prompt trying to serve four audiences produces blended, hedged output. Four separate prompts, each with a clear audience mandate, produce output that actually sounds right for that reader.

  • Shared system prompt: Sets the overall context: product name, target customer, stage, and business priorities. Also establishes the core rule — never invent features or outcomes not present in the roadmap. Every view must be grounded in what's actually there.
  • Audience-specific prompt injection: Each audience has a dedicated prompt block that defines what that reader cares about, what language to use, and what to include or avoid. Engineering gets scope and sequencing. Sales gets outcomes and talking points. Exec gets strategy and risk. Customers get plain-language benefits.
  • Parallel execution: All selected audience calls fire simultaneously via Promise.allSettled. Partial failures are handled gracefully — if one view errors, the others still render. This keeps latency low and makes the tool feel responsive even with four simultaneous API calls.
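The shared-system-prompt plus audience-injection pattern can be sketched as a simple prompt assembler. The audience block wording below paraphrases the case study; the tool's actual prompt text is not reproduced here:

```javascript
// Per-audience instruction blocks (paraphrased, illustrative wording).
const AUDIENCE_BLOCKS = {
  Engineering: "Focus on scope, sequencing, and open questions. Use precise technical language.",
  Sales: "Focus on customer outcomes and objection-handling talking points.",
  Exec: "Focus on strategic alignment and risk flags. Be concise.",
  Customers: "Use plain language. Describe benefits and rough timing only.",
};

// Shared context + grounding rule first, then the audience mandate.
function buildSystemPrompt(context, audience) {
  return [
    `Product: ${context.name} (stage: ${context.stage})`,
    `Target customer: ${context.customer}`,
    `Business priorities: ${context.priorities}`,
    "Core rule: never invent features or outcomes not present in the roadmap.",
    AUDIENCE_BLOCKS[audience],
  ].join("\n");
}
```

Because the shared portion is identical across audiences, only the final block varies — which is what keeps the four views grounded in the same source while sounding different.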

Design Decisions

  • One prompt per audience, not one prompt for all: A single prompt generating all four views would be shorter to write but would produce generic output. Dedicated prompts allow each audience view to be genuinely tuned — the engineering view reads like engineering communication, not like a slightly modified exec brief.
  • Parallel calls, not sequential: Sequential API calls would make the tool feel slow — each view waiting for the last. Parallel calls via Promise.allSettled mean all views generate in the time it takes the slowest one. The visual result (all cards loading simultaneously) also feels more satisfying to use.
  • Context-first prompt design: Product context (name, customer, stage, priorities) is injected into every system prompt, not just the user message. This ensures Claude grounds the translation in actual business context rather than producing generic roadmap reformatting. The output adapts to the product, not just the audience.
  • Server-side API key: The Claude API key lives in a Vercel environment variable, proxied through a serverless function. Anyone can use the tool without needing their own key. This is how production AI tools are actually architected — the key stays server-side.
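The server-side key pattern amounts to a small proxy function. A sketch of what that Vercel serverless function could look like — the file path, model id, and request shape are illustrative assumptions, though the Anthropic Messages API endpoint and headers shown are the documented ones:

```javascript
// Sketch of a Vercel serverless proxy (e.g. api/translate.js).
// The key is read from an environment variable on the server; the
// browser only ever talks to this function, never to Anthropic directly.
async function handler(req, res) {
  if (req.method !== "POST") return res.status(405).end();

  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY, // never shipped to the client
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest", // placeholder model id
      max_tokens: 1024,
      system: req.body.system,
      messages: [{ role: "user", content: req.body.roadmap }],
    }),
  });

  // Pass the upstream status and body straight through to the browser.
  res.status(upstream.status).json(await upstream.json());
}

module.exports = handler; // Vercel also accepts `export default handler`
```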

What It Demonstrates

  • PM communication depth: The tool encodes how roadmap communication actually differs across audiences — not a generic reformatter, but a reflection of real stakeholder dynamics.
  • LLM API work: Parallel prompt execution, context injection, structured system/user prompt separation, error handling at the individual call level.
  • Delivery judgment: Architecture tradeoffs that reflect production thinking — parallel vs. sequential, prompt separation vs. consolidation, server-side key management.