How SITREPs Are Generated
A SITREP is a structured situation report synthesised by an AI pipeline from a curated evidence base. Every report is reproducible from its inputs, lists every source it leaned on, and records the models that produced it. This page walks through what actually happens when a new SITREP is produced.
The pipeline at a glance
1 · Trigger
A run starts one of three ways:
- Scheduled — GitHub Actions cron at 00:00 UTC daily.
- Manual — `/generate_sitrep` to the Telegram bot, or Run workflow in the GitHub UI.
- Event-driven — on demand when a flash event justifies an off-schedule publish.
Every run takes a coverage window (default 24 hours) — the period the SITREP is responsible for analysing.
2 · Grounding
The pipeline pulls an evidence pack covering the window: Exa web search against a curated source allowlist (Western wires, Israeli media, Arab/Gulf outlets, Iranian state media, OSINT, think-tank commentary, official government channels) plus direct RSS feeds where the outlet publishes one. The allowlist is the same set visible on the Sources page.
Each item is deduplicated and tagged with its source slug so the writer can cite specifically. Nothing outside the allowlist enters the pack — this is the single biggest lever on what the report can and cannot say.
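The allowlist-and-dedup step amounts to a filter over retrieved items. A minimal sketch, assuming items arrive as dicts with a `url` key; the domains and slugs shown are placeholders, the real list is the one on the Sources page.

```python
from urllib.parse import urlparse

# Hypothetical domain -> source-slug mapping; stands in for the real allowlist.
ALLOWLIST = {"reuters.com": "reuters", "timesofisrael.com": "toi"}

def build_evidence_pack(items: list[dict]) -> list[dict]:
    """Keep only allowlisted items, tag each with its source slug,
    and drop duplicate URLs."""
    seen, pack = set(), []
    for item in items:
        domain = urlparse(item["url"]).netloc.removeprefix("www.")
        slug = ALLOWLIST.get(domain)
        if slug is None or item["url"] in seen:
            continue  # outside the allowlist, or already in the pack
        seen.add(item["url"])
        pack.append({**item, "source": slug})
    return pack
```

Anything from an unlisted domain never reaches the writer, which is what makes the allowlist the dominant lever on report content.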
3 · Triage
A triage model reads the evidence pack and decides, per theatre (Iran, Lebanon/Hezbollah, Gaza, West Bank, Multilateral), whether anything significant happened. It emits:
- Urgency — Flash, Priority, Routine, or Background.
- Flagged theatres — which regions warrant a section.
- Heartbeat flag — set if the window is genuinely quiet.
If no theatre is flagged and the urgency is Background, the pipeline exits without publishing (a "heartbeat"). This keeps the archive from filling with content-free filler SITREPs on quiet days. Short-window re-runs with no new signal are also suppressed.
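The publish/suppress decision reduces to a small predicate over the triage output. A sketch under assumed field names (the real triage schema may differ):

```python
from dataclasses import dataclass, field

@dataclass
class TriageResult:
    urgency: str                                   # "flash" | "priority" | "routine" | "background"
    flagged_theatres: list[str] = field(default_factory=list)
    heartbeat: bool = False                        # model judged the window genuinely quiet

def should_publish(triage: TriageResult) -> bool:
    """Suppress the run when the model flagged a quiet window, or when
    nothing warrants a section and the urgency is only background."""
    if triage.heartbeat:
        return False
    if not triage.flagged_theatres and triage.urgency == "background":
        return False
    return True
```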
4 · Research
For each flagged theatre, a research pass does deeper retrieval — broader queries, cross-checks between outlets, first-party statements — to produce a per-theatre evidence brief. This is where conflicting accounts get surfaced rather than flattened.
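The shape of a per-theatre brief can be illustrated by grouping items by the claim they support, so disagreements stay visible. Everything here is hypothetical: `search` stands in for the retrieval backend (Exa in the real pipeline), and the `claim` field is an assumed annotation.

```python
def research_theatre(theatre: str, pack: list[dict], search) -> dict:
    """Build a per-theatre evidence brief from the pack plus a broader query."""
    base = [i for i in pack if i.get("theatre") == theatre]
    extra = search(f"{theatre} official statements, cross-outlet coverage")
    # Group items by claim so conflicting accounts sit side by side
    # instead of being flattened into one narrative.
    by_claim: dict[str, list[dict]] = {}
    for item in base + extra:
        by_claim.setdefault(item.get("claim", "untagged"), []).append(item)
    return {"theatre": theatre, "claims": by_claim}
```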
5 · Writer
The writer model composes the structured SITREP from the research briefs. It is constrained to produce:
- BLUF — a bottom-line-up-front paragraph.
- Top lines — exactly three bullet highlights.
- Situational report — a short topline framing.
- Per-theatre sections — only for flagged regions; rest marked NOSIG.
- OSINT indicators — three observables worth monitoring.
- Predictions — three falsifiable, time-bounded predictions with confidence levels.
- Tags, source refs, location refs — all drawn from catalog slugs, not invented.
- Urgency, CAMEO QuadClass, Goldstein, magnitude — structured event metadata for later analysis.
Invented sources, hallucinated tag slugs, and bare assertions without citation are rejected by schema validation — the run fails rather than publishing garbage.
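The fail-closed validation described above can be sketched as explicit checks against the catalogs. The slug sets and field names are placeholders, not the repository's actual schema:

```python
VALID_SOURCE_SLUGS = {"reuters", "irna", "toi"}   # stand-in for the source catalog
VALID_TAG_SLUGS = {"iran", "gaza", "hezbollah"}   # stand-in for the tag catalog

class SchemaError(ValueError):
    """Raised instead of publishing when the writer output is invalid."""

def validate_sitrep(report: dict) -> None:
    """Every ref must resolve to a catalog slug; structural counts must match."""
    if len(report.get("top_lines", [])) != 3:
        raise SchemaError("top lines must be exactly three bullets")
    if len(report.get("predictions", [])) != 3:
        raise SchemaError("predictions must be exactly three")
    for slug in report.get("source_refs", []):
        if slug not in VALID_SOURCE_SLUGS:
            raise SchemaError(f"unknown source slug: {slug}")
    for tag in report.get("tags", []):
        if tag not in VALID_TAG_SLUGS:
            raise SchemaError(f"hallucinated tag slug: {tag}")
```

Raising on the first violation is what makes the run fail rather than degrade: a report with one invented slug never reaches Postgres.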
6 · Publish, audio, broadcast
The validated SITREP lands in Postgres and becomes live at /update/<permalink>. Two follow-up steps run:
- Audio — the report is normalised for TTS, synthesised with Microsoft edge-tts, concatenated with ffmpeg, and uploaded to Cloudflare R2. The podcast RSS feed at /podcast.xml picks it up automatically.
- Broadcast — a summary + permalink is posted to the Telegram channel. The bot also replies to the triggering user if the run was manual.
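The TTS normalisation pass is essentially text cleanup before synthesis. A minimal sketch: the acronym expansions are invented examples, and the real normaliser handles far more cases.

```python
import re

# Hypothetical expansions for terms a TTS voice would mangle.
EXPANSIONS = {"IRGC": "I R G C", "NOSIG": "no significant change"}

def normalise_for_tts(text: str) -> str:
    """Prepare report text for speech synthesis: strip markdown
    markers, unwrap links, and expand awkward acronyms."""
    text = re.sub(r"[*_#>`]", "", text)                    # drop markdown syntax
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)   # links -> anchor text
    for abbr, spoken in EXPANSIONS.items():
        text = re.sub(rf"\b{abbr}\b", spoken, text)
    return re.sub(r"\s+", " ", text).strip()
```

The normalised text is then what gets fed to edge-tts, with ffmpeg concatenating the per-section audio before upload to R2.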
What SITREPs are not
- Not reporting. The pipeline adds zero primary reporting. Everything comes from the cited sources.
- Not forecasting. The predictions block is a structured lookahead with explicit confidence levels — not a calibrated probabilistic forecast.
- Not neutral. The source allowlist, the triage model's judgement of significance, and the writer's framing all carry bias. The design goal is to make that bias inspectable — every source is listed, every model attributed — not to pretend it isn't there.
Source code
The pipeline code and everything downstream of it are open. See danielrosehill/SITREP_ISR for the agent (LangGraph + Python), the Next.js frontend, audio generation, and Telegram integration.
For the list of sources drawn on during grounding, see Sources. For the simulations architecture — a separate system that publishes AI council deliberations — see How simulations work.