
Case Study

AI Implementation Scoping Tool

Role: Designer & Builder
Stack: Claude API · Vanilla JS · Vercel
Built: March 2026

The Problem

AI implementation projects fail at a predictable point: the gap between “we want to use AI” and “here's what to actually build.” Every AI engagement starts the same way — a business problem, a blank page, and a set of discovery questions that determine whether the project gets scoped correctly or spends months heading in the wrong direction.

The first deliverable of any implementation engagement is a scoping document. The same discovery structure applies every time: what's the current process, what systems and data exist, what does success look like. This is repeatable. I wanted to encode it as a tool — and demonstrate it working live.

What I Built

A single-page tool that runs a two-phase AI implementation discovery session. No login. No install. Open the URL, describe your problem, get a scoping document in under two minutes.

01. User describes a business problem in a textarea
02. Claude generates 3 targeted discovery questions (JSON, single API call)
03. User answers each question in a conversational interface
04. Full context passed to Claude, which generates a structured scoping document in markdown
05. Output covers: AI approach, tool stack, process table, effort/impact, phased rollout, risks, metrics, next steps
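The two-phase flow above reduces to a single state object plus two transitions. A minimal sketch, assuming hypothetical names (`state`, `setQuestions`, `recordAnswer` are illustrative, not the actual `app.js` identifiers):

```javascript
// Single state object driving the whole session:
// 'describe' → 'questions' → 'document'
const state = {
  phase: 'describe',
  problem: '',
  questions: [],   // exactly 3, pre-generated in one API call
  answers: [],
  current: 0,      // index of the question being answered
};

// Transition 1: questions arrive from the first Claude call.
function setQuestions(questions) {
  state.questions = questions;
  state.phase = 'questions';
}

// Transition 2: each answer advances the session; the last one
// flips the phase so the app requests the scoping document.
function recordAnswer(text) {
  state.answers.push(text);
  if (state.answers.length === state.questions.length) {
    state.phase = 'document';
  } else {
    state.current += 1;
  }
}
```

Because every mutation goes through these two functions, the UI only ever has to render from one object.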

Prompt Architecture

Two prompts, two different jobs. Keeping them separate was a deliberate choice.

  • Question generation prompt: Instructs Claude to return a JSON array of exactly 3 strings — nothing else. The app validates the response shape before displaying anything. If the array doesn't have exactly 3 items, it errors and prompts a retry rather than showing a broken state.
  • Scoping document prompt: Specifies the exact markdown structure — section headers, table formats, phase names. Not to constrain Claude's reasoning, but to ensure output is immediately usable. A scoping document where sections move around is harder to act on. The format is opinionated because the use case is.
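The shape check on the question-generation response can be sketched as a small validator that throws (triggering the retry path) on anything that isn't a JSON array of exactly 3 non-empty strings. The function name is illustrative:

```javascript
// Validate the model's response before rendering anything.
// Anything unexpected throws, so the app retries instead of
// showing a broken state.
function parseQuestions(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error('Model did not return valid JSON; retry the call');
  }
  const valid = Array.isArray(parsed)
    && parsed.length === 3
    && parsed.every((q) => typeof q === 'string' && q.trim().length > 0);
  if (!valid) {
    throw new Error('Expected a JSON array of exactly 3 question strings; retry');
  }
  return parsed;
}
```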

Design Decisions

  • Two prompts, not one: Combining question generation and document generation into a single prompt would trade reliability for brevity. Two clean system prompts, each with a single output job, produce more consistent results than one complex multi-purpose prompt.
  • Pre-generate all 3 questions at once: An adaptive approach — call Claude once per question, adjusting based on prior answers — would be more dynamic but means 3× the API calls and latency. Pre-generating from the initial problem description keeps question balance predictable (process / systems / success criteria) and loads in one round trip.
  • Vanilla JS, no framework: This tool has one job. Frameworks add build steps and deployment complexity for no real gain here. Vanilla JS is auditable (any developer reads app.js in 5 minutes), instantly deployable via Vercel, and forces clean state management — everything lives in one object.
  • Server-side API key: The Claude API key lives in a Vercel environment variable, never in the browser. A lightweight serverless function proxies requests. This means anyone can use the tool without needing their own key — and the architecture reflects how production AI tools are actually built.
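The server-side proxy from the last point can be sketched as a Vercel serverless function (e.g. an `api/claude.js` route; the path, model ID, and request-body shape are assumptions, while the Messages API headers follow Anthropic's documented format):

```javascript
// In Vercel, this would be the default export of a file like api/claude.js.
// The API key lives in an environment variable and never reaches the browser.
async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'POST only' });
  }
  const upstream = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY, // server-side only
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-20250514', // illustrative model ID
      max_tokens: 2048,
      system: req.body.system,     // prompt comes from the client payload
      messages: req.body.messages,
    }),
  });
  const data = await upstream.json();
  return res.status(upstream.status).json(data);
}
```

The browser only ever talks to this route, so rotating the key is a one-line change in the Vercel dashboard.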

What It Demonstrates

  • LLM API work: Prompt design, structured output handling, two-phase state management, error handling
  • AI product thinking: The tool structure encodes how implementation scoping actually works — not just a chat wrapper
  • Delivery judgment: Architecture choices reflect opinionated tradeoffs: vanilla JS, pre-generated questions, server-side key