
Prospective Memory: Teaching Agents to Remember Future Intentions

Ryan Musser
Founder

The memory of things you have not done yet

Most memory research is about the past. But humans have a separate, equally critical memory system for the future. Remembering to take your medication. Remembering to pick up bread on the way home. Remembering, when you next see John, to tell him about the meeting. This is prospective memory, and it is fundamentally different from retrospective memory.

The biology

Prospective memory has two flavors:

  • Time-based PM: triggered at specific times. "At 3pm, take the pill."
  • Event-based PM: triggered by environmental cues. "When I see John, tell him about the meeting."

The rostral prefrontal cortex (Brodmann area 10) is specifically implicated in prospective memory. Damage there impairs the ability to follow through on intentions even when the underlying retrospective memory is intact: people remember the plan when reminded, but cannot self-cue the recall at the right moment.

The McDaniel and Einstein multiprocess framework proposes that some PM relies on spontaneous retrieval (the cue automatically reactivates the intention) while other PM requires effortful monitoring (the system actively watches for the cue). The two strategies have different costs. Spontaneous retrieval is cheap but only works when the cue is highly distinctive. Effortful monitoring is expensive but works when cues are subtle.

Time-based PM tends to require monitoring: humans are notoriously bad at noticing the passage of time without external aids. Event-based PM with strong cues ("when you see John") tends to be more reliable, which is why writing tasks on visible sticky notes works better than mental promises.

The technology

Traditional task scheduling (cron, Celery, Airflow, event-driven architectures) is mature infrastructure that solves time-based and event-based PM at the system level. The "solved" version of prospective memory has been a normal piece of distributed systems for decades. The interesting part is making the agent do this autonomously, not just executing prescheduled jobs.

The AI-native frontier:

  • RecallAgent (Mem0 + Claude Agent SDK, 2025) implements prospective memory with a background polling loop for due reminders and learns from user behavior. If users consistently confirm work reminders around 10am, the system starts suggesting that time slot for new reminders.
  • MemU (NevaMind-AI, 2025) is a framework for "24/7 proactive agents" that infers user intent and acts without explicit commands, the closest analog to spontaneous retrieval in prospective memory.
  • Mem0 with scheduled operations can attach future-dated triggers to memories, so the memory effectively has a "remind me when X" attached to it.
  • Operating systems and assistants (Apple Reminders' "When I get home," Google Assistant's location-based reminders) implement event-based PM with location and time triggers, which is the pattern AI agents are now adopting.

The pattern in production agents is roughly: a memory store with timestamps and event tags, plus a polling or trigger loop that watches for matches, plus an LLM judgment layer that decides whether to surface the reminder now or wait. That stack is essentially McDaniel and Einstein's multiprocess framework restated as engineering.
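That stack can be sketched in a few lines. This is a toy version under stated assumptions: the memory store is an in-process list, and the LLM judgment layer is stubbed as a plain callable (`judge`); none of the names here belong to a real library.

```python
from datetime import datetime, timezone

# Memory store: each record carries a timestamp and event tags.
store = [
    {"text": "follow up on the invoice",
     "due_at": "2025-06-01T15:00:00+00:00", "tags": set()},
    {"text": "tell John about the meeting",
     "due_at": None, "tags": {"john"}},
]


def poll(now: datetime, observed_tags: set[str], judge) -> list[dict]:
    """Trigger loop: find time or event matches, then let the judgment
    layer decide whether to surface each one now or wait."""
    surfaced = []
    for record in store:
        due = (record["due_at"] is not None
               and now >= datetime.fromisoformat(record["due_at"]))
        cued = bool(record["tags"] & observed_tags)
        if (due or cued) and judge(record):
            surfaced.append(record)
    return surfaced


# Stand-in for the LLM judgment layer: surface everything that matched.
surface_all = lambda record: True
```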

Where the gap is

Time-based prospective memory at the infrastructure level is solved (cron, schedulers, etc.). AI-native prospective memory is nascent. The interesting frontier is autonomous intention formation: agents that generate their own future intentions from a conversation ("the user said they would think about it; I should follow up in three days") and act on them at the right moment without explicit user commands.

Effortful monitoring versus spontaneous retrieval is also under-developed. Most systems either poll constantly (expensive) or wait for an explicit trigger (misses subtle cues). A system that watches for cues only when the prospective memory is "primed" by recent conversation would be a cleaner biological mirror.
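One way to sketch that priming gate, assuming simple keyword overlap as the priming signal (real systems would use embeddings; `primed` and `check_cue` are hypothetical names): exact cue matches always fire cheaply, while fuzzier, more expensive matching only runs while the intention is primed by recent conversation.

```python
def primed(intention_keywords: set[str], recent_turns: list[str]) -> bool:
    """An intention is 'primed' if recent conversation touches its keywords."""
    recent_words = {w.lower().strip(".,!?")
                    for turn in recent_turns[-5:] for w in turn.split()}
    return bool(intention_keywords & recent_words)


def check_cue(cue: str, observation: str, is_primed: bool) -> bool:
    obs_words = set(observation.lower().split())
    if cue.lower() in obs_words:
        return True  # spontaneous retrieval: distinctive cue, always cheap
    if is_primed:
        # effortful monitoring: pay for fuzzy matching only while primed
        return any(cue.lower() in w or w in cue.lower() for w in obs_words)
    return False
```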

Practical implication: if your agent is supposed to follow up, schedule, or remind, do not assume the LLM will remember to do it without help. Wire up explicit prospective memory: a queue of pending intentions, with timestamps and event triggers, polled by a background loop. The LLM can decide what to add to the queue and how to act on each item, but the queue itself should be infrastructure, not a hope.
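A minimal sketch of that wiring, assuming an in-process queue and thread (production systems would use a durable store and a real scheduler; `IntentionQueue` and `run_loop` are illustrative names): the LLM's only job is to call `add`; the loop guarantees the firing.

```python
import heapq
import threading
import time


class IntentionQueue:
    """Pending intentions as infrastructure: a thread-safe, time-ordered heap."""

    def __init__(self):
        self._heap, self._lock = [], threading.Lock()

    def add(self, fire_at: float, action: str) -> None:
        with self._lock:
            heapq.heappush(self._heap, (fire_at, action))

    def pop_due(self, now: float) -> list[str]:
        due = []
        with self._lock:
            while self._heap and self._heap[0][0] <= now:
                due.append(heapq.heappop(self._heap)[1])
        return due


def run_loop(queue: IntentionQueue, handle, stop: threading.Event,
             interval: float = 0.05) -> None:
    """Background poller: the LLM decides what goes in the queue;
    this loop makes sure it fires at the right moment."""
    while not stop.is_set():
        for action in queue.pop_due(time.time()):
            handle(action)
        stop.wait(interval)
```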

← Previous: Memory Schemas · Series anchor · Next: Hippocampal Indexing →
