The short version
Every day, our pipeline spends about 57,500 tokens across eight agents and a referee to produce one chapter. Anthropic's Claude writes the prose. Google's Gemini handles structured validation and a second opinion on canon. OpenAI's TTS model narrates the chapter for subscribers. A human curator reads the draft before it goes live.
None of those facts should be hidden, so we publish them in our AI Disclosure and repeat them here whenever we talk about how the game is made.
Why multiple models
People sometimes ask why we do not just let Claude do everything — Claude is, after all, the writer, and the writer is the part readers actually see. The answer is that most of the work in making a chapter is not the writing. It is the bookkeeping. It is remembering that the character who swore revenge in Chapter 14 is now in the same room as the character they swore revenge against, and that the artifact they wanted was lost in Chapter 21, and that the faction they lead lost half its territory last week.
Claude is very good at writing. It is also very expensive to use for checking. Gemini can hold the whole canonical sub-graph in its context window at a fraction of the cost, and it is ruthless at spotting inconsistencies we would otherwise have to fix by hand. Using it as a referee means the writer can focus on what the writer is good at.
The graph
Underneath the pipeline is a Neo4j graph of roughly 12,000 entities: every named character, every faction, every territory, every artifact, every plot thread, every open secret. Each chapter starts by pulling a sub-graph of the entities that are relevant to today's story — usually a few hundred — and handing that sub-graph to the writer and referee as context.
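The pull step can be sketched in a few lines. This is a minimal, illustrative stand-in: the real store is Neo4j and the real traversal is a Cypher query, but the shape of the operation — start from today's seed entities and walk outward a bounded number of hops — looks like this. The entity ids and the tiny in-memory graph are invented for the example.

```python
from collections import deque

# Toy in-memory stand-in for the canon graph (the real store is Neo4j).
# Each entity id maps to the entities it is directly linked to.
EDGES = {
    "char:veyra": ["faction:ashen", "artifact:sunblade", "char:doran"],
    "char:doran": ["territory:mire"],
    "faction:ashen": ["territory:mire"],
    "artifact:sunblade": [],
    "territory:mire": [],
}

def pull_subgraph(seeds, max_hops=2):
    """Collect every entity within max_hops of today's seed entities."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding past the hop budget
        for neighbor in EDGES.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

context = pull_subgraph(["char:veyra"])
```

The hop budget is what keeps the context to a few hundred entities instead of all 12,000.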
The graph is the canonical source of truth. The prose we publish is downstream of it. If the graph says a character died, the prose cannot contradict that, and the referee enforces the rule. This is also how we make the companion API safe to expose to developers: the public API reads directly from the same graph, so anything a companion can see is something a player could already find by reading carefully.
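To make the "prose cannot contradict the graph" rule concrete, here is a toy version of one referee check. The real referee is a model doing semantic comparison, not substring matching, and the rule name and fact format below are invented for illustration — but the contract is the same: graph wins, prose loses.

```python
def referee_check(prose, graph_facts):
    """Flag prose that contradicts canon. A toy rule-based stand-in for
    the model-based referee: real checks are semantic, not substring
    matches, so a flashback would not trip this naive version."""
    violations = []
    for character, status in graph_facts.items():
        if status == "dead" and character in prose:
            violations.append(f"{character} is dead in canon but appears in the draft")
    return violations

# If the graph says Doran died, a draft that has him acting gets flagged.
flags = referee_check("Doran drew his blade.", {"Doran": "dead"})
```

A non-empty result sends the draft back for regeneration rather than letting the contradiction publish.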
The curator step
Step 10 is a human being. Specifically, for now, it is the founder of Aegis Brightsmark. Every chapter is reviewed before it publishes. The curator can approve the chapter, reject it and trigger a regeneration, or make small inline edits.
We get asked why we bother. Why not publish automatically and just revert anything that goes wrong? The answer is simple: trust compounds. One published chapter that includes something harmful, or inconsistent, or disrespectful, is much more expensive to recover from than one chapter that publishes a few hours late. The curator step is cheap insurance against the cases the safety filters and the referee miss.
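The curator's three options form a small, closed decision space, which is worth spelling out because it is what makes the step auditable. A sketch, with names that are illustrative rather than our actual code:

```python
from enum import Enum

class CuratorAction(Enum):
    APPROVE = "approve"
    REJECT_AND_REGENERATE = "reject_and_regenerate"
    INLINE_EDIT = "inline_edit"

def resolve(action, draft, edited_text=None):
    """Apply the curator's decision; returns (publishable_text, should_publish)."""
    if action is CuratorAction.APPROVE:
        return draft, True
    if action is CuratorAction.INLINE_EDIT:
        return edited_text, True
    # Rejected: nothing publishes now; the pipeline regenerates and
    # the new draft comes back for another review.
    return None, False
```

Nothing reaches players without one of the two `should_publish = True` branches firing.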
What we do not use AI for
We do not use AI to decide who wins a vote. That is deterministic math. We do not use AI to decide investigation rarities. That is a seeded PRNG against a published table. We do not send personal data to any model. What the models see is aggregated and anonymous. We do not generate content about real people. We do not train on player writing or theories.
Those constraints are not marketing language — they are hard-coded into the pipeline and checked on every run.
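Two of those claims — deterministic vote math and a seeded PRNG against a published table — are simple enough to show. The table weights and the tie-break rule below are illustrative, not our published values; the point is that the same inputs always produce the same outputs, so no model (and no rerun) can change a result.

```python
import hashlib
import random

def vote_winner(tallies):
    """Deterministic: highest count wins, ties break alphabetically,
    so every rerun of the same tallies gives the same answer."""
    return max(sorted(tallies), key=lambda option: tallies[option])

# Illustrative odds, not our published table.
RARITY_TABLE = [(0.60, "common"), (0.30, "uncommon"), (0.10, "rare")]

def investigation_rarity(chapter_id, action_id):
    """Seeded PRNG against the table: identical inputs, identical rarity."""
    seed = hashlib.sha256(f"{chapter_id}:{action_id}".encode()).hexdigest()
    roll = random.Random(seed).random()
    cumulative = 0.0
    for weight, rarity in RARITY_TABLE:
        cumulative += weight
        if roll < cumulative:
            return rarity
    return RARITY_TABLE[-1][1]
```

Because the seed is derived from public identifiers and the table is published, anyone can re-derive a rarity roll and check it.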
What the pipeline reads
Finally, a note on inputs. On a typical day, the writer agent receives:
- The sub-graph of relevant entities (about 400 nodes and 2,000 edges on average).
- The previous chapter in full, for continuity.
- A summary of the last seven days of votes, investigations, and operations.
- A list of open secrets flagged for potential discovery.
- The curator's editorial notes, if any are pinned to the current arc.
That is it. No player names, no email addresses, no coven chat transcripts, nothing personal. The entire system is designed so that the reader experience depends on player actions in aggregate, not on any individual's identity.
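Put together, the writer's input is a small, fixed payload. The field names below are illustrative, not our actual schema, but the shape makes the privacy property easy to see: there is simply no field where an individual player's identity could go.

```python
from dataclasses import dataclass, field

@dataclass
class WriterInput:
    """Everything the writer agent receives for one chapter.
    Field names are illustrative; note there is no slot for
    player names, emails, or chat transcripts."""
    subgraph_nodes: list           # ~400 relevant entities on average
    subgraph_edges: list           # ~2,000 relationships on average
    previous_chapter: str          # full text, for continuity
    weekly_summary: str            # aggregated votes, investigations, operations
    open_secrets: list             # secrets flagged for potential discovery
    editorial_notes: list = field(default_factory=list)  # pinned to the arc, if any
```

Anything not in this structure never reaches the model.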
What is next
We are working on opening up more of this process to the community. The dashboard that tracks pipeline health, token cost, and run-by-run outcomes is already built internally; we will share a read-only version as soon as it is polished. And we will keep publishing posts like this one whenever we make a meaningful change to the stack.
If you have questions about how any of this works, write to us. Transparency about AI is load-bearing for a game like this — we would rather over-explain than under-explain.