
Case Study

AI Children's Book Platform

Role: AI Product Owner
Outcome: 1,000+ books sold in Germany
Stack: ChatGPT · DALL-E · Midjourney · Print-on-demand
View sample book →

The Problem

Every child deserves to be the hero of their own story — not just a name swapped into a generic template, but a book built around them: their look, their personality, the challenges they're navigating. Yet truly personalised children's books didn't exist at scale. Custom illustration was expensive and slow. Generic personalisation was shallow and kids saw through it.

The core insight was pedagogical: children process difficult situations more effectively when they can observe themselves navigating the same challenge from a safe, story-based distance. The book becomes a tool — not just entertainment.

What We Built

A direct-to-consumer platform where parents submitted a personalised request through a web app — details about their child, their appearance, personality traits, and situations they were working through. The platform then generated a complete illustrated book featuring the child as the protagonist, facing and overcoming a relevant challenge.

01. Parent completes a detailed customisation request via web app
02. AI generates story text and full illustration set
03. Digital proof sent to parent for approval
04. Upon approval: printed and shipped directly to the customer
05. Customer receives a professionally printed personalised book
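The five steps above can be sketched as a single orchestration function. This is an illustrative outline only: the class, function names, and placeholder generators are assumptions standing in for the real ChatGPT and image-generation calls, not the production code.

```python
from dataclasses import dataclass

@dataclass
class BookRequest:
    child_name: str
    appearance: str     # free-text description from the parent
    personality: list   # e.g. ["shy", "curious"]
    challenge: str      # situation the child is working through

def generate_story(req):
    # Placeholder for the story-generation call (step 02).
    return f"{req.child_name} faces {req.challenge} and overcomes it."

def generate_illustrations(story, pages=16):
    # Placeholder for the illustration-set generation (step 02).
    return [f"illustration for scene {i}" for i in range(pages)]

def produce_book(req, parent_approves=lambda proof: True):
    story = generate_story(req)
    art = generate_illustrations(story)
    proof = {"story": story, "pages": art}   # step 03: digital proof
    if parent_approves(proof):               # step 03: approval gate
        return "sent_to_print"               # steps 04-05: print & ship
    return "revision_requested"
```

The approval gate is the key structural point: nothing goes to print without an explicit parent sign-off on the digital proof.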

My Role: AI Product Owner

I owned the product end to end — not as an engineer, but as the person responsible for making the AI system produce a consistent, high-quality product at scale.

  • Prompt architecture: Designed the story generation system (ChatGPT) — structured to accept variable child inputs and produce narratively coherent, age-appropriate stories
  • Visual consistency system: Engineered the character generation pipeline using DALL-E, establishing a method for producing a consistent child character across multiple scenes
  • Scenery and composition: Used Midjourney for high-quality backgrounds, then composited the character into the scene
  • Quality curation: Reviewed and selected outputs, building judgement on what made a book feel premium vs. generic
  • Product flow design: Defined the approval workflow and digital-to-print handoff process
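The prompt-architecture idea in the first bullet can be illustrated with a template builder that accepts variable child inputs and emits a structured generation prompt. The field names, wording, and scene count here are invented for the sketch, not the production prompt.

```python
def build_story_prompt(name, age, traits, challenge):
    """Compose a structured story-generation prompt from variable
    child inputs (illustrative template, not the real one)."""
    traits_line = ", ".join(traits)
    return (
        f"Write a children's story for a {age}-year-old named {name}.\n"
        f"Character traits: {traits_line}.\n"
        f"The story must show {name} facing and overcoming: {challenge}.\n"
        "Constraints: age-appropriate vocabulary, 16 scenes, each scene "
        "ending with a one-line illustration brief."
    )
```

Keeping the constraints in a fixed trailing block while only the child-specific fields vary is what makes outputs narratively coherent across thousands of different requests.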

The Hard Problem: Character Consistency

Generating a story is straightforward. Generating beautiful scenery is straightforward. The bottleneck was making the same child character appear consistently across 15–20 illustrations in a single book.

Early AI image models weren't built for this. Each generation produced a slightly different version of the character — different face shape, hair tone, proportions. At volume, this broke the sense of a coherent narrative.

How we solved it: Developed a character archetype strategy — identifying refined character descriptions that could be consistently reproduced across generations. Children with similar characteristics were mapped to a calibrated archetype prompt, maintaining visual consistency without generating entirely unique characters for every book.
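A minimal sketch of the archetype strategy: a small library of calibrated archetype prompts, and a matcher that maps a child's attributes to the closest one. The archetypes, attribute keys, and scoring rule here are invented to show the shape of the idea.

```python
# Calibrated archetype prompts, each keyed by the attributes it reproduces
# reliably across generations (toy examples).
ARCHETYPES = {
    "blond_short_hair": {"hair_colour": "blond", "hair_length": "short"},
    "brown_curly_hair": {"hair_colour": "brown", "hair_length": "curly"},
    "black_long_hair":  {"hair_colour": "black", "hair_length": "long"},
}

def match_archetype(child_attrs):
    # Score each archetype by shared attributes; the winner's calibrated
    # prompt is then reused verbatim across all 15-20 scenes of the book.
    def score(attrs):
        return sum(child_attrs.get(k) == v for k, v in attrs.items())
    return max(ARCHETYPES, key=lambda name: score(ARCHETYPES[name]))
```

Because every scene in a book draws from the same calibrated prompt, the character stays visually stable even though each illustration is a fresh generation.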

The trade-off was depth of customisation vs. production consistency. We leaned toward consistency — a coherent book beats a maximally unique but visually inconsistent one.

Results

1,000+ · individual books sold in Germany
D2C · fully automated fulfilment pipeline

What I'd Build Differently Today

The biggest friction was the two-step pipeline: generate character separately, generate scenery separately, composite manually. This was the main production bottleneck.

Today I'd design the pipeline to generate character within the scene in a single pass — using fine-tuned character models to embed the child directly into the environment. This would cut manual layout time significantly and improve compositional quality.

I'd also rebuild the story generation as a structured multi-agent pipeline — one agent for narrative structure, one for age-appropriate language, one for scene-by-scene illustration briefs — rather than a single monolithic prompt.
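The multi-agent split could look like the sketch below: three sequential stages, each function standing in for one specialised LLM agent. The stage names, data shapes, and placeholder logic are assumptions for illustration, not an existing framework or the planned implementation.

```python
def narrative_agent(request):
    # Agent 1: produce the story arc as ordered narrative beats.
    return [f"beat {i}: {request['challenge']}" for i in range(1, 4)]

def language_agent(beats, age):
    # Agent 2: rewrite each beat in age-appropriate language.
    return [f"(age {age}) {b}" for b in beats]

def briefing_agent(scenes):
    # Agent 3: derive a scene-by-scene illustration brief.
    return [{"scene": s, "brief": f"illustrate: {s}"} for s in scenes]

def run_pipeline(request):
    beats = narrative_agent(request)
    scenes = language_agent(beats, request["age"])
    return briefing_agent(scenes)
```

The advantage over a monolithic prompt is that each stage can be evaluated, iterated, and swapped independently — a weak illustration brief can be fixed without touching the narrative logic.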
