LLM Curator (Claude-Backed Advisor)
Overview
The LlmCurator connects your settlement to Claude via the claude CLI, giving the curator genuine reasoning about aesthetics, social dynamics, and cultural strategy. Unlike the Dummy Curator which follows a fixed decision tree, the LLM curator reads the full settlement state -- elf personalities, aesthetic distances, compositions, revel history, climate -- and produces context-aware commands with explanatory reasoning.
It is available in the native TUI build only. On WASM (browser), LlmCurator is a thin stub that delegates to DummyCurator internally.
Setup
Prerequisites
- Install the `claude` CLI and ensure it is on your `PATH`. The curator spawns it as a subprocess.
- The default model is `haiku` (fast, cheap). The model name is stored in `self.model` and displayed in the UI as the curator label.
Verifying
Launch the native TUI build with `cargo run`. If `claude` is found, the curator label in the UI shows "haiku" instead of "rules". If spawning fails, the curator logs a warning and permanently falls back to the dummy curator for the rest of the session.
Platform Behavior
| Target | Behavior |
|---|---|
| Native (`cargo run`) | Full LLM curator with background thread |
| WASM (`wasm32-unknown-unknown`) | Stub that wraps `DummyCurator`; label shows "rules" |
How It Works
Background Thread Architecture
The LLM curator is non-blocking. It uses a dedicated background thread to call claude so the game loop never stalls waiting for a response.
```
Game tick
    |
    v
should_consult()  -- check for pending result or day boundary / event trigger
    |
    v
consult()         -- if pending result ready, return it
                  -- otherwise, build request, send to background thread
                  -- return DummyCurator fallback commands for this tick
    |
    v
[Background thread]
    |-- Receives ConsultRequest via mpsc channel
    |-- Spawns `claude -p` subprocess
    |-- Pipes state message via stdin
    |-- Parses JSON envelope from stdout
    |-- Stores ConsultResult in Arc<Mutex<Option<...>>>
    |
    v
Next consult() call picks up the result
```
Key properties:
- The background thread lives for the entire game session (spawned once in the constructor).
- Only one request can be in flight at a time (the channel is unbounded, but the thread processes sequentially).
- While waiting for the LLM, the dummy curator's logic runs as fallback, so the settlement is never unmanaged.
- If the background thread panics or the channel closes, `permanently_fallback` is set to `true`.
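The channel-plus-shared-slot pattern described above can be sketched in plain std Rust. This is an illustrative miniature, not the game's actual types: `LlmWorker`, its field names, and the faked response stand in for the real `ConsultRequest`/`ConsultResult` handling, which spawns `claude -p` instead.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical stand-ins for the curator's ConsultRequest/ConsultResult.
struct ConsultRequest {
    state_message: String,
}

struct ConsultResult {
    commands: Vec<String>,
}

struct LlmWorker {
    tx: mpsc::Sender<ConsultRequest>,
    result: Arc<Mutex<Option<ConsultResult>>>,
}

impl LlmWorker {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<ConsultRequest>();
        let result = Arc::new(Mutex::new(None));
        let slot = Arc::clone(&result);
        // The worker thread lives for the whole session and drains the
        // channel sequentially, so only one request is ever in flight.
        thread::spawn(move || {
            for req in rx {
                // The real curator spawns `claude -p` here and parses its
                // stdout; this sketch fabricates a result from the request.
                let fake = ConsultResult {
                    commands: vec![format!("echo: {}", req.state_message)],
                };
                *slot.lock().unwrap() = Some(fake);
            }
        });
        LlmWorker { tx, result }
    }

    /// Non-blocking: hands back a finished result if one is waiting.
    fn take_result(&self) -> Option<ConsultResult> {
        self.result.lock().unwrap().take()
    }
}
```

Because `take_result` never blocks, the game loop can poll it once per tick and fall back to the dummy curator whenever it returns `None`.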
Budget and Rate Limiting
The LLM curator enforces two limits to control API costs:
| Parameter | Value | Description |
|---|---|---|
| `max_per_day` | 5 | Maximum LLM consultations per game day |
| `min_consult_gap` | 50 ticks | Minimum ticks between sending new requests (half a day) |
The daily budget resets when `tick / DAY_LENGTH` changes. When the budget is exhausted or the gap has not elapsed, the curator returns dummy fallback commands.
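The two limits compose as a single gate check. A minimal sketch, assuming the constant values from the table and hypothetical field names (`consults_today`, `last_consult_tick`, `current_day`):

```rust
const DAY_LENGTH: u64 = 100;
const MAX_PER_DAY: u32 = 5;
const MIN_CONSULT_GAP: u64 = 50;

struct Budget {
    consults_today: u32,
    last_consult_tick: Option<u64>,
    current_day: u64,
}

impl Budget {
    /// True when a new LLM request may be sent at this tick.
    fn can_consult(&mut self, tick: u64) -> bool {
        // The daily budget resets when the game day changes.
        let day = tick / DAY_LENGTH;
        if day != self.current_day {
            self.current_day = day;
            self.consults_today = 0;
        }
        // Enforce the minimum gap since the last request, if any.
        let gap_ok = self
            .last_consult_tick
            .map_or(true, |t| tick - t >= MIN_CONSULT_GAP);
        self.consults_today < MAX_PER_DAY && gap_ok
    }

    fn record(&mut self, tick: u64) {
        self.consults_today += 1;
        self.last_consult_tick = Some(tick);
    }
}
```

When `can_consult` returns false, the caller would fall through to the dummy curator's commands for that tick.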
When It Consults
The LLM curator has a richer trigger set than the dummy curator:
| Trigger | Condition |
|---|---|
| Day boundary | tick / 100 changes (same as dummy) |
| Pending result | Background thread has a result ready -- always process immediately |
| Revel ended | RevelEnded event |
| Art completed | ArtCompleted event |
| Inspiration crisis | InspirationCrisis event |
| Resource critical | ResourceLow event with amount < 5 |
| Spirit anger | SpiritStateChanged event where new state is "Anger" |
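The event-driven rows of the table amount to a predicate over game events. A sketch with a hypothetical `GameEvent` enum mirroring the trigger names (the real event types may differ):

```rust
// Hypothetical event enum covering the trigger table's event rows.
enum GameEvent {
    RevelEnded,
    ArtCompleted,
    InspirationCrisis,
    ResourceLow { amount: u32 },
    SpiritStateChanged { new_state: String },
}

/// True when this event should trigger an LLM consultation
/// (budget and gap checks happen separately).
fn triggers_consult(ev: &GameEvent) -> bool {
    match ev {
        GameEvent::RevelEnded
        | GameEvent::ArtCompleted
        | GameEvent::InspirationCrisis => true,
        GameEvent::ResourceLow { amount } => *amount < 5,
        GameEvent::SpiritStateChanged { new_state } => new_state == "Anger",
    }
}
```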
Player Messages
When called with consult_with_message, the LLM curator:
- Checks for a pending background result first (returns it if available).
- Appends the player's message to the system prompt as a "Patron's Direct Message" section.
- Sends the request to the background thread.
- Returns `Message("Considering your request...")` as an immediate acknowledgment.
The LLM sees the player's exact words and is instructed to "respond to this directive in your reasoning and adjust your decisions accordingly."
Structured Output with --json-schema
The curator uses Claude's structured output mode to guarantee parseable responses.
CLI Invocation
```
claude -p \
  --model haiku \
  --output-format json \
  --json-schema <schema> \
  --system-prompt <system_prompt> \
  --tools "" \
  < state_message
```
The `--tools ""` flag disables all tools, forcing pure structured output. The state message is piped via stdin.
Response Envelope
Claude returns a JSON envelope:
```json
{
  "type": "result",
  "subtype": "success",
  "result": "",
  "structured_output": { ... },
  "total_cost_usd": 0.01
}
```
The curator reads `structured_output` first (primary path). If absent, it falls back to parsing `result` as a JSON string (older-format compatibility).
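The two-path extraction can be shown without a JSON library. Here the envelope is assumed to be already deserialized into a hypothetical `Envelope` struct; the field names match the JSON keys, but the real curator's types may differ:

```rust
// Hypothetical deserialized envelope; payloads kept as raw JSON strings.
struct Envelope {
    structured_output: Option<String>, // primary path, if present
    result: String,                    // older format: JSON as a string
}

/// Prefer `structured_output`; otherwise fall back to a non-empty `result`.
fn extract_payload(env: &Envelope) -> Option<&str> {
    env.structured_output.as_deref().or_else(|| {
        if env.result.is_empty() {
            None
        } else {
            Some(env.result.as_str())
        }
    })
}
```

Returning `None` here would count as a failed consultation, leaving the dummy curator's commands in effect for that tick.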
Tool Schema Reference
The schema defines a `CuratorResponse` object with `commands` (array) and `reasoning` (string). Each command is tagged by its `action` field.
CuratorResponse
```json
{
  "commands": [ ... ],
  "reasoning": "string"
}
```
Command Variants
| Action | Fields | Valid Values |
|---|---|---|
| `assign_role` | `name`, `role` | role: "Gatherer", "Builder", "Composer", "Unassigned" |
| `set_resource_priority` | `priority` (array) | "Food", "Wood", "Stone", "FineWood" |
| `queue_build` | `kind` | "Dwelling", "Garden", "Workshop", "FeastHall" |
| `clear_forest` | `x`, `y` (integers) | Map coordinates |
| `schedule_revel` | (none) | |
| `message` | `text` | Free-form string |
Unknown values (e.g., a role of "Knight") are skipped with a log warning. The `parse_commands` function filters out invalid entries rather than rejecting the entire response.
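The filter-don't-fail behavior can be sketched for the `assign_role` case. The helper names and tuple representation here are illustrative, not the game's actual `parse_commands` signature:

```rust
// Roles accepted by the assign_role command, per the table above.
const KNOWN_ROLES: [&str; 4] = ["Gatherer", "Builder", "Composer", "Unassigned"];

/// Returns Some((name, role)) for a known role, or warns and skips otherwise.
fn parse_role_command(name: &str, role: &str) -> Option<(String, String)> {
    if KNOWN_ROLES.contains(&role) {
        Some((name.to_string(), role.to_string()))
    } else {
        eprintln!("LLM curator: skipping unknown role {role:?} for {name}");
        None
    }
}

/// Keep the valid entries; never reject the whole response.
fn parse_commands(raw: &[(&str, &str)]) -> Vec<(String, String)> {
    raw.iter()
        .filter_map(|(name, role)| parse_role_command(name, role))
        .collect()
}
```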
Example LLM Response
```json
{
  "commands": [
    {"action": "assign_role", "name": "Elowen", "role": "Composer"},
    {"action": "set_resource_priority", "priority": ["Food", "Wood"]},
    {"action": "message", "text": "Assigning Elowen as Composer -- she has the highest music skill and her emotional style aligns with our patron's taste."}
  ],
  "reasoning": "Best musician gets Composer role."
}
```
The `reasoning` field is appended as an additional `Message` command when non-empty, so it appears in the event log.
The System Prompt
The system prompt establishes the curator's personality as "part colony manager, part artistic director." Key sections:
| Section | Content |
|---|---|
| Aesthetic Axes | Explains the four-axis model (structure, tradition, emotion, social) and how aesthetic distance drives departure |
| Patron's Direction | Inserts the current ArtisticDirection value (Balanced / FavorMastery / FavorOriginality / FavorEmotion) |
| Tools | Documents each command with usage guidance |
| Critical Awareness | Lists situations to watch: discontented elves, aesthetic outliers, creative blocks, spirit anger, food crises, missing compositions, stale revels |
| Style | "Be brief but specific. Name the elves you're acting on and explain why." |
Climate-Aware Context
The state message sent to the LLM includes full climate data:
- Season name, day within season, year
- Current weather and temperature
- Climate-specific alerts (e.g., "Winter food stores running low" when food < 20 in winter, "Settlement sheltering from storm" during storms)
This allows the LLM to make season-appropriate decisions that the dummy curator's fixed rules cannot.
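The two alert examples quoted above suggest simple threshold rules. A sketch with assumed thresholds from the text (the real alert set and condition names may be larger):

```rust
/// Build climate-specific alert strings for the state message.
/// Thresholds here are taken from the documented examples.
fn climate_alerts(season: &str, food: u32, storming: bool) -> Vec<String> {
    let mut alerts = Vec::new();
    if season == "Winter" && food < 20 {
        alerts.push("Winter food stores running low".to_string());
    }
    if storming {
        alerts.push("Settlement sheltering from storm".to_string());
    }
    alerts
}
```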
State Snapshot Contents
The LLM receives a text-serialized SettlementState with:
- Resources: current amounts and daily production rates
- Elf roster: skills, morale, satisfaction, aesthetic label and distance, social graph (top 5 friends/rivals by name), portfolio size, aspirations, fandom, status flags (DISCONTENTED, BLOCKED)
- Compositions: last 15 with full quality breakdown, aesthetic position, properties, patron-favorite flag
- Revel history: last 5 with highlight composition, attendee/performance counts, average scores
- Buildings: counts by type, queue size
- Spirit: state label and meter (0-100)
- Current policies: all role assignments and resource priority order
- Patron context: favorite compositions, interesting elves
Compositions are capped at 15 and revel history at 5 to control token usage.
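The capping is a keep-the-most-recent-N slice. A generic sketch (the helper name is hypothetical; the real serializer may cap inline):

```rust
/// Keep only the most recent `n` entries to bound token usage.
/// Assumes `items` is ordered oldest-to-newest.
fn cap_recent<T: Clone>(items: &[T], n: usize) -> Vec<T> {
    let start = items.len().saturating_sub(n);
    items[start..].to_vec()
}
```

With `n = 15` for compositions and `n = 5` for revels, older entries simply drop out of the snapshot while the freshest context survives.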
Interactions
- How the Curator Works -- the overall curator architecture, command dispatch, and the `CulturalAdvisor` trait.
- Dummy Curator -- the rule-based fallback that runs when the LLM is unavailable or over budget.
- Needs & Mood -- morale and satisfaction values visible in the state snapshot.
Tips
Effective Prompting (Player Messages)
The patron message is injected verbatim into the system prompt. To get good results:
- Be specific about elves by name: "Focus on Elowen's development as a composer" works better than "make better music."
- Reference the aesthetic axes: "I want a more avant-garde settlement" tells the LLM to favor low-tradition elves.
- Ask for explanations: The LLM always emits a `reasoning` field, but a pointed question like "Why is Theron discontented?" produces richer analysis.
- Give artistic direction: "Schedule a revel featuring emotional compositions" guides the LLM's revel timing and, implicitly, its composer promotion decisions.
Cost Management
- The default model is `haiku` -- fast and inexpensive.
- At 5 consultations per game day, a typical session costs fractions of a cent.
- The `min_consult_gap` of 50 ticks prevents rapid-fire requests even when many events trigger in quick succession.
- Token costs are tracked in `total_input_tokens` and `total_output_tokens` on the curator struct (not yet surfaced in the UI).
Fallback Behavior
- When the LLM request is in flight, the dummy curator runs. This means the first day's decisions are always rule-based (bootstrap roles, initial priority).
- If `claude` is not on your `PATH`, the curator permanently switches to dummy mode after the first failed spawn. Check your terminal for the warning: "LLM curator: failed to spawn claude."
- LLM results that contain only unknown commands (all filtered out) are treated as empty and trigger a fallback cycle.