# What is the event buffer?
The event buffer is a per-project staging area that sits between event ingestion and memory extraction. When the collector receives an event from a shim, it stores the event in the database immediately — but extraction (the LLM-powered step that turns raw events into structured memory records) happens asynchronously, in batches. The buffer holds events that are waiting to be extracted. It decouples the speed of ingestion from the latency of extraction, so the shim never blocks waiting for an LLM response.

## Why buffering exists
Extraction is slow. Each batch of events is sent to an LLM (via kiro-cli → Amazon Bedrock), which takes seconds to respond. Meanwhile, events keep arriving — a single agent turn can produce dozens of tool-use events in rapid succession. Without a buffer, the system would need to either:

- Block ingestion until extraction finishes (unacceptable — the shim must exit immediately), or
- Extract one event at a time (wasteful — batching is more efficient for LLM calls).
## How events flow into the buffer
Events pass through the collector pipeline before reaching the buffer:

1. Deduplication — reject events already seen (by `event_id`).
2. Privacy scrub — strip `<private>...</private>` tags, replacing content with `[REDACTED]`.
3. Storage — persist the full event in the database.
4. Buffer append — project a lightweight entry and append it to the project's buffer file.

The buffer entry keeps only `event_id`, `namespace`, `kind`, `body`, `timestamp`, and `surface`. Fields like `schema_version`, `content_hash`, and the full `source` block are dropped to keep buffer files lean.
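As a sketch of the final append step, the projection might look like the following (the field names come from the list above; the function names are illustrative, not the actual API):

```python
import json

# Fields kept in a lean buffer entry, per the list above.
BUFFER_FIELDS = ("event_id", "namespace", "kind", "body", "timestamp", "surface")

def project_entry(event: dict) -> dict:
    # Drop schema_version, content_hash, the source block, and anything else not listed.
    return {k: event[k] for k in BUFFER_FIELDS if k in event}

def append_to_buffer(path: str, event: dict) -> None:
    # NDJSON: one JSON object per line, append-only.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(project_entry(event)) + "\n")
```

Projecting at append time keeps per-line writes small, which matters for the crash-recovery property described below: a partial write can damage at most one line.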
## Per-project isolation
Each project gets its own buffer file — an append-only NDJSON (newline-delimited JSON) file stored at `~/.kiro-learn/buffers/<project_id>/buffer.ndjson`. This isolation means:

- Extraction processes each project independently — a slow extraction in one project doesn't block another.
- Buffer size thresholds are tracked per project.
- Clearing a buffer after successful extraction only affects one project.
## What triggers extraction

The buffer watcher monitors each project's buffer and fires an extraction trigger when either condition is met:

| Trigger | Default threshold | Why |
|---|---|---|
| Size threshold | 256 KiB | Enough events have accumulated to make a batch worthwhile. |
| Idle timer | 5 seconds | No new events have arrived — extract what’s there rather than waiting indefinitely. |
## What triggers compaction

Compaction fires when a buffer grows beyond the compaction threshold (default: 1 MiB). This is a safety valve — if extraction can't keep up with ingestion (perhaps the LLM is slow or failing), the buffer would grow without bound. When compaction fires:

- The compaction worker reads existing memory records for the project.
- It sends them to an LLM for summarization — condensing many records into fewer, higher-level summaries.
- The oldest records are evicted deterministically.
- The buffer file is atomically replaced with the compacted content.
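The final step — an atomic replace guarded by a POSIX file lock — might look like this sketch. It omits the catch-up-window replay described under Resilience, and the function name is an assumption:

```python
import fcntl
import os
import tempfile

def atomic_replace(buffer_path: str, compacted_lines: list[str]) -> None:
    """Swap the buffer for its compacted content (sketch; no catch-up replay)."""
    # Hold an exclusive lock on the live buffer so cooperating appenders block.
    with open(buffer_path, "a", encoding="utf-8") as live:
        fcntl.flock(live.fileno(), fcntl.LOCK_EX)
        # Write the compacted content to a temp file in the same directory...
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(buffer_path) or ".", suffix=".tmp")
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.writelines(l if l.endswith("\n") else l + "\n" for l in compacted_lines)
            f.flush()
            os.fsync(f.fileno())
        # ...then rename over the original: rename(2) is atomic on POSIX,
        # so readers see either the old buffer or the new one, never a mix.
        os.replace(tmp, buffer_path)
    # The lock is released when `live` closes; it pinned the old inode, so
    # appenders must re-open (and re-lock) the path to reach the new file.
```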
## Relationship to the workers

The buffer connects two async workers:

| Worker | Triggered by | What it does |
|---|---|---|
| Extraction worker | Size threshold or idle timer | Reads the buffer snapshot, frames entries as XML, sends to the LLM, stores resulting memory records, clears the buffer on success. |
| Compaction worker | Compaction threshold | Summarizes existing memory records via LLM, evicts old records, atomically replaces the buffer with compacted content. |
## How the buffer is cleared

After the extraction worker successfully processes a batch:

- The LLM returns structured memory records.
- The records are stored in the database.
- The buffer file is deleted (cleared).
- The watcher resets its byte counter for the project.
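The last two steps are simple enough to sketch directly; the helper name and the shape of the watcher's counter state are assumptions:

```python
import os

def clear_buffer(path: str, byte_counters: dict[str, int], project_id: str) -> None:
    # Only called after the extracted records are safely in the database.
    try:
        os.remove(path)            # delete (clear) the buffer file
    except FileNotFoundError:
        pass                       # already cleared; the operation is idempotent
    byte_counters[project_id] = 0  # reset the watcher's byte counter
```

The ordering matters: records are persisted first, so a crash between storage and clearing can at worst re-extract a batch, never lose one.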
## Resilience

The buffer is designed to handle failures gracefully:

- Crash recovery — NDJSON format means a partial write only corrupts the last line. On the next read, corrupt lines are skipped with a warning.
- Concurrent access — the atomic replace operation uses POSIX file locking (`flock`) to prevent data loss when compaction and ingestion happen simultaneously. Any events appended during compaction are captured in a "catch-up window" and replayed into the new buffer.
- Circuit breaker — after 3 consecutive extraction failures, the watcher disables extraction for that project and logs a warning. Events continue to buffer safely.
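The crash-recovery behavior — skip corrupt lines, log a warning — can be sketched as a tolerant NDJSON reader. This illustrates the described behavior and is not the actual implementation:

```python
import json
import logging

def read_buffer(path: str) -> list[dict]:
    # A partial write corrupts at most the final line; skip bad lines with a warning.
    entries = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                logging.warning("skipping corrupt buffer line %d in %s", lineno, path)
    return entries
```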
You can inspect a project's buffer directly by reading the NDJSON file at `~/.kiro-learn/buffers/<project_id>/buffer.ndjson`. Each line is a JSON object you can pipe through `jq` for readability.

## Related pages
- **Collector**: the daemon that appends events to the buffer
- **Extraction**: the worker that reads buffer snapshots and creates memory records
- **Compaction**: the pressure valve when buffers grow too large
- **Projects**: how per-project buffer isolation is defined
- **Database**: where events and memory records are persisted