
Episodic Memory: Time Cells, Place Cells, and Bi-Temporal Graphs

Ryan Musser
Founder

The "remember when" system

Close your eyes and think about your last birthday. You can probably picture the room, the people, what you ate, even the way you felt. That kind of recall is its own distinct type of memory. It is not raw facts (those live in semantic memory) and it is not skills (those live in procedural memory). It is a personal experience tagged with time, place, and emotion. Endel Tulving coined the term episodic memory for it in 1972 and argued that its defining feature is autonoetic consciousness, the subjective sense of mentally traveling back to re-experience an event.

The biology

The hippocampus is the central encoding structure for episodic memory. Three cell types do most of the heavy lifting:

  • Place cells, discovered by John O'Keefe in 1971, fire when an animal is in a specific location. They give episodes a spatial coordinate.
  • Time cells (Eichenbaum, 2014) fire at specific moments in a temporal sequence. They give episodes a clock.
  • Grid cells (Moser et al., 2008) provide a hexagonal coordinate system for navigation, and together with place cells they form a kind of internal GPS.

The hippocampus performs relational binding, stitching disparate elements (people, places, objects, emotions) into a single coherent episode through neural synchrony. This is why a smell can drag back the entire scene of a childhood kitchen: the binding mechanism re-activates everything that was originally bound together.

The classic case study for episodic memory is Patient K.C., who had bilateral hippocampal damage. He retained essentially all of his semantic knowledge (he could answer factual questions, use language fluently, drive a car) but lost every personal episodic memory and could not form new ones. He could tell you what year he was born but could not recall a single specific moment from any year of his life. K.C. demonstrated that episodic and semantic memory are dissociable: they live in different places and can fail independently.

The technology

The most sophisticated production analog is Graphiti, an open-source temporal knowledge graph by Zep. Graphiti's episode subgraph records raw conversational data as timestamped nodes. A bi-temporal model tracks four timestamps per fact: two on the system timeline (when the record was created and when it was superseded) and two on the real-world timeline (when the fact started and stopped being true). This separation enables temporal queries like "what did the user believe in March?" or "what was true at 2pm yesterday?" Graphiti reports 94.8% accuracy on the Deep Memory Retrieval benchmark versus MemGPT's 93.4%.
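To make the bi-temporal idea concrete, here is a minimal sketch in Python. This is an illustrative schema and query, not Graphiti's actual API; all names (`Fact`, `true_at`, the field names) are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Fact:
    """One edge in a bi-temporal graph (illustrative, not Graphiti's schema)."""
    subject: str
    predicate: str
    obj: str
    valid_from: datetime            # real-world: when the fact became true
    valid_to: Optional[datetime]    # real-world: when it stopped (None = still true)
    recorded_at: datetime           # system: when we learned it
    expired_at: Optional[datetime]  # system: when superseded (None = current record)

def true_at(facts: list[Fact], t: datetime) -> list[Fact]:
    """Point-in-time query: what was true in the world at time t,
    according to our current (non-expired) records."""
    return [
        f for f in facts
        if f.expired_at is None
        and f.valid_from <= t
        and (f.valid_to is None or t < f.valid_to)
    ]
```

Because the system timeline and the real-world timeline are independent, the store can also answer the other kind of question ("what did we believe at time t?") by filtering on `recorded_at`/`expired_at` instead.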

Letta's recall memory logs all conversation messages searchable by date or text, providing a basic episodic system. LangMem stores episodic memories as distilled few-shot examples ("past interactions that the agent can retrieve for similar future situations"). Mem0 primarily extracts semantic facts but also stores timestamped, versioned entries with episodic character, with the team reporting a 26% accuracy improvement over OpenAI's memory on the LOCOMO benchmark.

Pink et al. (Feb 2025, arXiv:2502.06975) made the explicit argument that "Episodic Memory is the Missing Piece for Long-Term LLM Agents," and proposed a 5-property framework for what an episodic memory system must do: time-ordered storage, contextual binding, autonoetic re-experiencing, episode boundary detection, and consolidation toward semantic memory.

The shape of the engineering is recognizable. A graph (or graph-like store) holds episode nodes. Each node has timestamps and links to entities (people, places, things) that participated. Retrieval can be by time range, by entity, by similarity, or by combinations of all three. That is a faithful technical mirror of place + time + relational binding.
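That shape can be sketched in a few lines. The following toy store is hypothetical (not any specific product's API) and omits the similarity axis, but it shows episode nodes bound to entities and retrieval that combines time-range and entity filters:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Episode:
    """An episode node: timestamped content bound to participating entities."""
    content: str
    occurred_at: datetime
    entities: set[str] = field(default_factory=set)

class EpisodicStore:
    """Toy graph-like episodic store (illustrative only)."""
    def __init__(self) -> None:
        self.episodes: list[Episode] = []

    def add(self, content: str, occurred_at: datetime, entities: set[str]) -> None:
        self.episodes.append(Episode(content, occurred_at, entities))

    def recall(self, start: Optional[datetime] = None,
               end: Optional[datetime] = None,
               entity: Optional[str] = None) -> list[Episode]:
        """Retrieve by time range and/or linked entity; filters compose."""
        hits = self.episodes
        if start is not None:
            hits = [e for e in hits if e.occurred_at >= start]
        if end is not None:
            hits = [e for e in hits if e.occurred_at <= end]
        if entity is not None:
            hits = [e for e in hits if entity in e.entities]
        return sorted(hits, key=lambda e: e.occurred_at)
```

A production system would back this with a graph database and add embedding similarity as a third filter, but the retrieval contract (time + entity + similarity, freely combined) is the same.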

Where the gap is

Production systems are good at storing episodes. They are still bad at re-experiencing them. Autonoetic re-experiencing, the subjective "I am there again" feeling that distinguishes episodic from semantic memory, has no real implementation. When an LLM retrieves a past episode, it gets a string of facts, not a re-instantiated context that it can mentally inhabit and reason from.

Episode boundary detection is also unsolved. The brain knows roughly when one episode ends and another begins, partly because of perceptual breaks (you walk through a doorway, the scene changes). Agents tend to either treat every message as its own episode or treat entire sessions as one episode, with little judgment in between.
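One simple middle ground between per-message and per-session episodes is a time-gap heuristic: treat a long silence as the agent's equivalent of walking through a doorway. This is a sketch of one such heuristic, not a solved detector; the 30-minute threshold is an arbitrary assumption.

```python
from datetime import datetime, timedelta

def segment_episodes(messages: list[tuple[datetime, str]],
                     gap: timedelta = timedelta(minutes=30)) -> list[list[tuple[datetime, str]]]:
    """Split a time-ordered (timestamp, text) log into episodes,
    starting a new episode whenever the silence exceeds `gap`."""
    episodes: list[list[tuple[datetime, str]]] = []
    current: list[tuple[datetime, str]] = []
    prev = None
    for ts, text in messages:
        if prev is not None and ts - prev > gap:
            episodes.append(current)   # perceptual break: close the episode
            current = []
        current.append((ts, text))
        prev = ts
    if current:
        episodes.append(current)
    return episodes
```

Richer detectors layer topic-shift signals (embedding drift between consecutive messages) on top of the time gap, but even this crude version avoids the two degenerate extremes.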

Practical implication: if you need temporal queries and entity history, Graphiti and bi-temporal graphs are production-grade. If you need the agent to actually relive past situations and reason from inside them, you are still in research territory.

← Previous: Attention Gating · Series anchor · Next: Semantic Memory →
