Aug 15, 2025 · Louis M. Morgner · Knowledge

Simple Memory for AI Coaching

Abstract

Plain-text context blobs don’t scale. They get noisy, redundant, and hard for models to use. This whitepaper proposes Simple Memory: a lean structure with only three primitives—Facts, Episodes, and Schema—plus a tiny scoring function and an intent-based context packer. It keeps memory human-legible, cheap to maintain, and reliably useful for coaching tasks.

Design Principles

  • Simplicity > completeness. Fewer types, fewer rules.
  • Legibility. Everything should read like a tidy notebook a human could edit.
  • Composability. Small, typed pieces that assemble into task-specific context.
  • Auditability. Every suggestion can point to its sources.
  • User sovereignty. The user can inspect, pin, merge, or delete anything.

The Model (only 3 primitives)

1) Fact (atomic claim)

A normalized, time-stamped statement extracted from raw text.

{
  "id": "fact_7b2",
  "s": "Louis",
  "p": "sleep_hours_avg_7d",
  "o": "6.2",
  "time": "2025-08-08",
  "source_ids": ["log_2025-08-08-22-05"],
  "confidence": 0.86,
  "domain": "health",
  "topic": "sleep"
}

Why: Models work with crisp atoms more reliably than with prose. Facts compress history and support quick lookups.

2) Episode (chunked event)

A small story with a timestamp, short summary, linked facts, and optional affect/metrics deltas.

{
  "id": "ep_314",
  "title": "Late-night coding before demo",
  "summary": "Worked past 23:00 to ship feature; next-morning energy low.",
  "start": "2025-08-09T21:30:00",
  "end": "2025-08-09T23:40:00",
  "participants": ["Louis"],
  "facts": ["fact_7b2", "fact_7b9"],
  "affect": {"valence": -0.3, "arousal": 0.6},
  "domain": "work",
  "topic": "reflecta"
}

Why: The coach often needs narrative context, not just numbers.

3) Schema (durable self-model)

Short, stable bullets the coach should respect: values, preferences, policies, goals.

{
  "id": "sch_22",
  "kind": "policy",                // value | preference | policy | goal
  "text": "Avoid deep work after 21:00 on weekdays.",
  "status": "confirmed",           // proposed | confirmed | outdated
  "evidence": ["ep_314", "fact_7b2"],
  "domain": "health",
  "topic": "energy",
  "version": 3
}

Why: This is the “truth you live by.” It steers plans and recommendations.
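
Taken together, the three primitives are small enough to type out directly. A minimal TypeScript sketch, mirroring the JSON examples above (field names follow the examples; the union types are taken from the inline comments in them):

interface Fact {
  id: string;
  s: string;              // subject
  p: string;              // predicate
  o: string;              // object
  time: string;           // ISO date
  source_ids: string[];   // raw-log provenance
  confidence: number;     // 0..1
  domain: string;
  topic: string;
}

interface Episode {
  id: string;
  title: string;
  summary: string;
  start: string;          // ISO datetime
  end: string;
  participants: string[];
  facts: string[];        // Fact ids
  affect?: { valence: number; arousal: number };
  domain: string;
  topic: string;
}

interface Schema {
  id: string;
  kind: "value" | "preference" | "policy" | "goal";
  text: string;
  status: "proposed" | "confirmed" | "outdated";
  evidence: string[];     // Episode/Fact ids
  domain: string;
  topic: string;
  version: number;
}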

Foldering (not a heavy graph, just shallow trees)

Use two tags to keep things tidy:

  • domain: work, health, relationships, finance, learning, etc.
  • topic: freeform subfolder (e.g., reflecta, sleep, family).

That’s it. No general graph is required. If you want relationships, store them as sparse, optional links on items, e.g. "links": [{"rel": "impacts", "to": "sch_22"}].
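
In the TypeScript sketch above, this is a single optional field (the rel vocabulary is illustrative, not prescribed):

type Link = { rel: string; to: string };   // e.g. { rel: "impacts", to: "sch_22" }

// Any primitive may carry a few links; most items carry none.
interface Linkable { links?: Link[]; }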

Memory Layers (pragmatic tiers)

  • L0 Raw: logs, transcripts, files (append-only cold storage).
  • L1 Facts & Episodes: atoms + small stories (your working set).
  • L2 Schema: durable bullets curated from L1 (small, precious).

No recursive clustering, no deep hierarchies. Keep it shallow and obvious.

Scoring & Forgetting (RIF: Recency, Importance, Frequency)

Every item maintains a tiny score to guide retrieval and compression:

recency    = exp(-Δt / τ)                      // time decay
importance = user_rating ∨ inferred            // goal impact, strong affect
frequency  = log(1 + hits) / K
priority   = 0.5*recency + 0.3*importance + 0.2*frequency

Low-priority facts collapse into episode summaries; stale schema is marked outdated (kept for history, not shown by default).
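
A minimal sketch of the scoring in TypeScript, assuming each item tracks a last-touched timestamp, an optional user rating, an inferred importance, and a hit count (the τ and K values here are illustrative defaults, not part of the spec):

const TAU_DAYS = 14;   // decay constant τ (assumed)
const K = 5;           // frequency normalizer (assumed)

function priority(item: {
  lastTouched: Date;     // most recent update or retrieval
  userRating?: number;   // 0..1, set when the user rates importance
  inferred: number;      // 0..1, from goal impact / strong affect
  hits: number;          // retrieval count
}, now: Date = new Date()): number {
  const dtDays = (now.getTime() - item.lastTouched.getTime()) / 86_400_000;
  const recency = Math.exp(-dtDays / TAU_DAYS);
  const importance = item.userRating ?? item.inferred;  // user rating wins
  const frequency = Math.log(1 + item.hits) / K;
  return 0.5 * recency + 0.3 * importance + 0.2 * frequency;
}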

Retrieval: Intent-Based Context Packs

Don’t “stuff everything similar.” Route by intent and assemble a pack with strict sections the model can use predictably.

Supported intents → memory mix

  • Plan: current goals/policies + recent episodes + blocking facts.
  • Reflect: relevant schema + contrasting episodes (before/after) + metrics deltas.
  • Recall: the 3–5 best-fit episodes + top supporting facts.
  • Decide: relevant policies/goals + similar past decision episodes + outcome facts.
  • Nudge/Coach: 1–2 policies + 1 metric trend + one actionable suggestion.

If the model needs more, it can request specific IDs to “page in” details.
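
A sketch of the packer, reusing the interfaces above. MemoryStore and its top* methods are hypothetical, standing in for any retrieval layer that ranks items by the RIF priority:

type Intent = "plan" | "reflect" | "recall" | "decide" | "nudge";

interface Pack {
  schema: Schema[];      // durable bullets to respect
  episodes: Episode[];   // narrative context
  facts: Fact[];         // crisp atoms
}

// Hypothetical store: each method returns the highest-priority items
// matching a filter, ranked by the RIF score above.
interface MemoryStore {
  topSchema(filter: object, n: number): Schema[];
  topEpisodes(filter: object, n: number): Episode[];
  topFacts(filter: object, n: number): Fact[];
}

function buildPack(intent: Intent, store: MemoryStore): Pack {
  switch (intent) {
    case "plan": // goals/policies + recent episodes + blocking facts
      return {
        schema: store.topSchema({ kind: ["goal", "policy"] }, 4),
        episodes: store.topEpisodes({ recentDays: 14 }, 3),
        facts: store.topFacts({ blocking: true }, 5),
      };
    case "recall": { // 3–5 best-fit episodes + top supporting facts
      const episodes = store.topEpisodes({}, 5);
      const ids = episodes.map((e) => e.id);
      return {
        schema: [],
        episodes,
        facts: store.topFacts({ episodeIds: ids }, 8),
      };
    }
    // reflect / decide / nudge follow the same pattern with their own mix
    default:
      return {
        schema: store.topSchema({}, 2),
        episodes: [],
        facts: store.topFacts({}, 3),
      };
  }
}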

Update Loop (end-to-end)

  • Ingest: normalize new input → extract facts; detect episodes by time/affect/topic shifts.
  • Propose schema: when patterns repeat, draft a short policy/value/preference with evidence links.
  • User review: user accepts/edits/declines proposals; pin or rate importance.
  • Score: update RIF; compress low-value leaves; mark stale schema as outdated.
  • Retrieve: assemble packs by intent; answer; cite item IDs used.
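
The loop fits in a dozen lines once the steps are named. A sketch with hypothetical stubs for each stage (none of these helpers are real APIs):

// Hypothetical stubs standing in for the steps above.
declare function normalize(raw: string): string;
declare function extractFacts(text: string): Fact[];
declare function detectEpisodes(facts: Fact[]): Episode[];      // time/affect/topic shifts
declare function proposeSchema(facts: Fact[], eps: Episode[]): Schema[];
declare function reviewWithUser(proposals: Schema[]): Schema[]; // accept/edit/decline
declare function rescoreAndCompress(items: Array<Fact | Episode | Schema>): void;

function runUpdateLoop(rawInput: string): void {
  // Ingest: normalize → extract atoms → chunk into episodes.
  const facts = extractFacts(normalize(rawInput));
  const episodes = detectEpisodes(facts);

  // Propose schema: repeated patterns become draft bullets with evidence links.
  const proposals = proposeSchema(facts, episodes);

  // User review: pins and ratings feed the importance term of RIF.
  const confirmed = reviewWithUser(proposals);

  // Score: update RIF, fold low-priority facts into summaries, retire stale schema.
  rescoreAndCompress([...facts, ...episodes, ...confirmed]);

  // Retrieve happens per request via buildPack (see above), citing item IDs used.
}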