The Fix Pass

Testing exposed a set of real UX issues:

  • Choice buttons had empty labels due to a field mismatch.
  • The I, Robot cover didn’t show because only PNG files were loaded.
  • 21 nodes had short placeholder text.
  • Endings were missing the showPersonality flag, so results didn’t appear.
  • Personality descriptions were third-person and felt impersonal.

All fixed in one pass.

This was a good reminder that “it builds” does not mean “it plays.” Most of the issues were not runtime crashes. They were quality bugs that erode trust: empty buttons, missing covers, and robotic descriptions that feel disconnected from the player.

The fastest fixes were the ones with clear, mechanical causes (field mismatch, missing flag). The slowest were narrative fixes. Expanding 21 nodes meant creating consistent tone and pacing, which is more than a quick patch.

Concrete Fixes I Applied

Here’s the repair list that actually moved the experience forward:

  • Updated markdown generation to read the correct choice text field.
  • Expanded short nodes to 2–4 sentences so scenes feel complete.
  • Ensured endings show the personality reveal block.
  • Converted all descriptions to second-person voice for stronger immersion.
  • Regenerated cards so the new copy appears in the visuals.

Each fix on its own is small. Together they change how “finished” the product feels.
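The choice-label fix was the most mechanical of these: the markdown generator was reading a stale field name, so buttons rendered empty. A minimal sketch of the defensive version, where the field names ("label", "text", "title") are illustrative stand-ins rather than the real schema:

```python
# Hedged sketch of the field-mismatch fix. The old code read a single,
# outdated field; this version tries known field names in order and fails
# loudly instead of silently emitting an empty button label.

def choice_label(choice: dict) -> str:
    """Return display text for a choice, trying known field names in order."""
    for field in ("label", "text", "title"):
        value = choice.get(field, "").strip()
        if value:
            return value
    raise ValueError(f"choice has no usable label: {choice!r}")

print(choice_label({"text": "Open the airlock"}))  # falls back past the missing "label"
```

Failing loudly matters here: an exception at generation time is a build error, while an empty string is a quality bug the player discovers for you.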

The Card Problem

Gemini can’t reliably “edit” an existing personality card image. The workaround was to regenerate each card from scratch, using the existing card as a visual template, then convert the PNG output to WebP to keep file sizes down.

I built a batch script to regenerate all cards with the new second-person copy.
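The conversion half of that script is straightforward. A sketch, assuming Pillow is available and the regenerated cards sit as PNGs in one directory (the directory layout and quality setting are assumptions, not the actual script):

```python
from pathlib import Path

from PIL import Image  # Pillow; assumed available in the pipeline environment


def convert_cards(card_dir: str, quality: int = 80) -> int:
    """Convert every PNG card in card_dir to a WebP file alongside it.

    Returns the number of cards converted.
    """
    count = 0
    for png in sorted(Path(card_dir).glob("*.png")):
        webp = png.with_suffix(".webp")
        # Pillow infers WEBP from the extension, but naming the format
        # explicitly makes the intent obvious.
        Image.open(png).save(webp, "WEBP", quality=quality)
        count += 1
    return count
```

Lossy WebP at a moderate quality setting is usually a large size win over PNG for this kind of illustrated card, which matters if the card is the artifact people share.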

This also exposed a product insight: the card experience is the shareable artifact, so the visual quality matters as much as the narrative. If card generation is flaky, the entire social loop breaks.

Why This Keeps Happening

Most of these bugs weren’t deep technical problems. They were “pipeline gaps”: the generator produced output that was structurally valid, but it didn’t enforce quality rules like a minimum text length or the fields the ending UI requires.

That’s why I’m shifting toward “quality at source.” If the generator enforces the rules before content is accepted, then we avoid the entire cleanup phase.
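What “quality at source” looks like in practice is a validator that rejects exactly the failures testing surfaced. A minimal sketch, assuming each node is a dict with "text", "type", and (for endings) a "showPersonality" flag; the field names and the length threshold are illustrative:

```python
# Hedged sketch of source-level quality checks. Thresholds and field names
# are assumptions standing in for the real story-node schema.

MIN_TEXT_CHARS = 80  # roughly the 2-4 sentence floor used in the fix pass


def validate_node(node: dict) -> list[str]:
    """Return a list of quality errors; an empty list means the node passes."""
    errors = []
    if len(node.get("text", "").strip()) < MIN_TEXT_CHARS:
        errors.append("text too short")
    if node.get("type") == "ending" and not node.get("showPersonality"):
        errors.append("ending missing showPersonality")
    return errors
```

Both checks are cheap, and each one corresponds to a bug that previously shipped: placeholder-length text and endings with no personality reveal.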

Insight

Most of these issues were avoidable with better validation at the source. So I documented a quality pipeline: validate structure, auto-fix common issues, then review before deployment.

If we generate “correct by default,” the time spent on manual cleanups collapses. That’s the goal for the next story packs: build once, ship once.

Next

I want to bake the fixes into the generator itself: validate after generation, reject low-quality output, and only then proceed to assets. That way every new story pack starts with a higher floor.
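That generate-validate-reject loop can be sketched as follows; generate_nodes and validate_node are hypothetical stand-ins for the real generator call and per-node validator:

```python
# Hedged sketch of the planned generator loop: validate each batch, reject
# failures, and regenerate before any asset work begins. Both callables are
# placeholders for the real pipeline functions.

def generate_until_valid(generate_nodes, validate_node, max_attempts: int = 3):
    """Regenerate until every node passes validation, or give up."""
    for _ in range(max_attempts):
        nodes = generate_nodes()
        failures = [(n, errs) for n in nodes if (errs := validate_node(n))]
        if not failures:
            return nodes  # clean batch: safe to proceed to asset generation
        # In the real pipeline, failures would be logged and possibly fed
        # back into the generation prompt as correction hints.
    raise RuntimeError(f"no valid batch after {max_attempts} attempts")
```

The key design choice is that assets are only generated from a batch that has already passed validation, so a rejected batch costs one retry rather than a full cleanup pass.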