context.memory() – MemoryFacade API Reference¶
MemoryFacade coordinates HotLog (fast recent events), Persistence (durable JSONL), and Indices (derived KV views).
It is accessed via node_context.memory but aggregates functionality from several internal mixins.
Methods for vector-memory storage and search are under development.
0. Memory Event¶
Each memory is stored as an Event. A memory method can return either an Event or its dictionary form, depending on its behavior. Not all attributes are used by every memory-related function.
Event
A structured event log entry stored in memory. This dataclass represents a single event in the system's event log, capturing execution context, semantic information, and optional metadata about the event.
Attributes:
| Name | Type | Description |
|---|---|---|
| `event_id` | `str` | Unique identifier for this event. |
| `ts` | `str` | Timestamp when the event occurred. |
| `run_id` | `str` | Identifier for the execution run containing this event. |
| `scope_id` | `str` | Identifier for the execution scope. |
| `user_id` | `str \| None` | Optional identifier for the user associated with the event. |
| `org_id` | `str \| None` | Optional identifier for the organization. |
| `client_id` | `str \| None` | Optional identifier for the client. |
| `session_id` | `str \| None` | Optional identifier for the session. |
| `kind` | `EventKind` | Logical type of the event (e.g., `"chat_user"`, `"tool_start"`). |
| `stage` | `str \| None` | Optional phase indicator (e.g., `"user"`, `"assistant"`, `"system"`, `"tool"`). |
| `text` | `str \| None` | Primary human-readable content of the event (short, may be truncated). |
| `tags` | `list[str] \| None` | Low-cardinality labels for filtering and searching. |
| `data` | `dict[str, Any] \| None` | Arbitrary JSON payload containing event-specific data. |
| `metrics` | `dict[str, float] \| None` | Numeric metrics associated with the event. |
| `graph_id` | `str \| None` | Optional identifier for the graph context. |
| `node_id` | `str \| None` | Optional identifier for the node context. |
| `tool` | `str \| None` | Tool topic associated with the event. Deprecated: use `topic` instead. |
| `topic` | `str \| None` | Topic classification for the event. |
| `severity` | `int` | Severity level of the event (1=low, 2=medium, 3=high). Defaults to 2. |
| `signal` | `float` | Signal strength indicating estimated importance or relevance. Defaults to 0.0. |
| `inputs` | `list[Value] \| None` | Optional input values associated with the event. |
| `outputs` | `list[Value] \| None` | Optional output values associated with the event. |
| `app_id` | `str \| None` | Reserved for schema compatibility. |
| `agent_id` | `str \| None` | Reserved for schema compatibility. |
| `embedding` | `list[float] \| None` | Reserved for future vector payload usage. |
| `pii_flags` | `dict[str, bool] \| None` | Reserved for future PII marker usage. |
| `version` | `int` | Schema version for tracking schema evolution. Defaults to 2. |
1. Core Recording¶
Basic event logging and raw data access for general messages/memory.
record_raw(*, base, text, ...)
Record an unstructured event with optional preview text and metrics.
This method generates a stable event ID, populates standard fields
(e.g., run_id, scope_id, severity, signal), and appends the
event to both the HotLog and Persistence layers. Additionally, it
records a metering event for tracking purposes.
Examples:
Basic usage with minimal fields:
await context.memory().record_raw(
base={"kind": "user_action", "severity": 2},
text="User clicked a button."
)
Including metrics and additional fields:
await context.memory().record_raw(
base={"kind": "tool_call", "stage": "execution", "severity": 3},
text="Tool executed successfully.",
metrics={"latency": 0.123, "tokens_used": 45}
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `base` | `dict[str, Any]` | A dictionary containing event fields such as `kind`, `stage`, and `severity`. | *required* |
| `text` | `str \| None` | Optional preview text for the event. If None, it is derived from the `base` payload. | `None` |
| `metrics` | `dict[str, float] \| None` | Optional dictionary of numeric metrics (e.g., latency, token usage) to include in the event. | `None` |
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully constructed and persisted `Event`. |
record(kind, data, tags, ...)
Record an event with common fields.
This method standardizes event creation by populating fields such as
kind, severity, tags, and metrics. It also supports optional
references for inputs and outputs, and allows for signal strength
overrides.
Examples:
Basic usage for a user action:
await context.memory().record(
kind="user_action",
data={"action": "clicked_button"},
tags=["ui", "interaction"]
)
Recording a tool execution with metrics:
await context.memory().record(
kind="tool_call",
data={"tool": "search", "query": "weather"},
metrics={"latency": 0.123, "tokens_used": 45},
severity=3
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kind` | `str` | Logical kind of event (e.g., `"user_action"`, `"tool_call"`). | *required* |
| `data` | `Any` | JSON-serializable content or string providing event details. | *required* |
| `tags` | `list[str] \| None` | A list of string labels for categorization. Defaults to None. | `None` |
| `severity` | `int` | An integer (1-3) indicating importance. Defaults to 2. | `2` |
| `stage` | `str \| None` | Optional stage of the event (e.g., `"user"`, `"tool"`). | `None` |
| `inputs_ref` | | Optional references for input values. Defaults to None. | `None` |
| `outputs_ref` | | Optional references for output values. Defaults to None. | `None` |
| `metrics` | `dict[str, float] \| None` | A dictionary of numeric metrics (e.g., latency, token usage). Defaults to None. | `None` |
| `signal` | `float \| None` | Manual override for the signal strength (0.0 to 1.0). If None, it is calculated heuristically. | `None` |
| `text` | `str \| None` | Optional preview text override. If None, it is derived from `data`. | `None` |
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully constructed and persisted `Event`. |
recent(*, kinds, limit, ...)
Retrieve recent events.
This method fetches a list of recent events, optionally filtered by kinds.
Examples:
Return Event objects (default):
events = await context.memory().recent(limit=20)
Return normalized dict payloads:
rows = await context.memory().recent(limit=20, return_event=False)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kinds` | `list[str] \| None` | A list of event kinds to filter by. Defaults to None. | `None` |
| `limit` | `int` | The maximum number of events to retrieve. Defaults to 50. | `50` |
| `level` | `ScopeLevel \| None` | Optional scope level to filter events by. If provided, only events associated with the specified scope level will be returned. | `None` |
| `return_event` | `bool` | If True, return `Event` objects; otherwise normalized dictionaries. | `True` |
Returns:
| Type | Description |
|---|---|
| `list[Any]` | List of `Event` objects or normalized dictionaries. |
Notes
This method reads from the underlying HotLog service for the current timeline. Events are returned in chronological order, with the most recent last. Events evicted from the HotLog once its capacity is exceeded remain durable in the Persistence layer, but such persisted-only events cannot be retrieved via this method; use recent_persisted() for deeper history.
Scope Level Filtering
- level="scope" or None: entire memory scope / timeline (current behavior).
- level="session": only events for this session_id.
- level="run": only events for this run_id.
- level="user": only events for this user/client.
- level="org": only events for this org.
recent_persisted(*, kinds, limit, ...)
Retrieve events from the persistence layer (full history) for this timeline.
This is a higher-latency, deeper history path than recent().
Examples:
Query persisted events and return Event objects:
events = await context.memory().recent_persisted(limit=100)
Query persisted events and return normalized dict payloads:
rows = await context.memory().recent_persisted(limit=100, return_event=False)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kinds` | `list[str] \| None` | Optional event kinds filter. | `None` |
| `tags` | `list[str] \| None` | Optional tag filter. | `None` |
| `limit` | `int` | Maximum rows to return. | `50` |
| `level` | `ScopeLevel \| None` | Optional scope level filter. | `None` |
| `since` | `str \| None` | Optional lower timestamp bound. | `None` |
| `until` | `str \| None` | Optional upper timestamp bound. | `None` |
| `offset` | `int` | Offset for pagination. | `0` |
| `return_event` | `bool` | If True, return `Event` objects; otherwise dict payloads. | `True` |
|
Returns:
| Type | Description |
|---|---|
| `list[Any]` | `Event` rows or normalized dictionaries. |
search(query, kinds, ...)
Search events using scoped indices with hotlog fallback.
This method uses index-backed retrieval when available, then falls back to lexical filtering over recent events.
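The lexical fallback can be pictured as a simple case-insensitive substring match over the preview text of recent events. This is an illustrative reduction, not the backend's actual ranking:

```python
from typing import Any


def lexical_fallback(events: list[dict[str, Any]], query: str,
                     limit: int = 100) -> list[dict[str, Any]]:
    """Case-insensitive substring filter over event preview text."""
    q = query.lower()
    hits = [e for e in events if q in (e.get("text") or "").lower()]
    return hits[:limit]


events = [
    {"event_id": "e1", "text": "Deployment failure in region A"},
    {"event_id": "e2", "text": "Routine heartbeat"},
]
matches = lexical_fallback(events, "deployment failure")
```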
Examples:
Semantic search with default settings:
events = await context.memory().search(query="deployment failure", limit=10)
Lexical-only search for tagged events:
events = await context.memory().search(
query="timeout",
tags=["tool", "error"],
use_embedding=False,
level="run",
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `query` | `str \| None` | Optional query string. If None, returns filtered recent events. | `None` |
| `kinds` | `list[str] \| None` | Optional event kind filters. | `None` |
| `tags` | `list[str] \| None` | Optional required tags. | `None` |
| `limit` | `int` | Maximum number of events to return. | `100` |
| `use_embedding` | `bool` | If True, prefer index-backed semantic/hybrid search. | `True` |
| `level` | `ScopeLevel \| None` | Optional scope level constraint. | `None` |
| `time_window` | `str \| None` | Optional backend time-window hint. | `None` |
| `mode` | `SearchMode \| None` | Optional explicit backend search mode. | `None` |
|
Returns:
| Type | Description |
|---|---|
| `list[Event]` | Matching events in relevance/fallback order. |
get_event(event_id, ...)
Retrieve a specific event by ID.
The lookup first checks hotlog, then falls back to persistence when supported by the configured backend.
Examples:
Fetch a known event:
evt = await context.memory().get_event("evt_123")
Handle missing events:
evt = await context.memory().get_event("evt_missing")
if evt is None:
...
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `event_id` | `str` | Unique event identifier to resolve. | *required* |
Returns:
| Type | Description |
|---|---|
| `Event \| None` | The resolved event, or None when not found. |
2. Chat Operations¶
Convenience method for recording chat-related memory events.
record_chat(role, text, ...)
Record a single chat turn in a normalized format.
This method automatically handles timestamping, standardizes the role,
and dispatches the event to the configured persistence layer.
Examples:
Basic usage for a user message:
await context.memory().record_chat("user", "Hello graph!")
Recording a tool output with extra metadata:
await context.memory().record_chat(
"tool",
"Search results found.",
data={"query": "weather", "hits": 5}
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `role` | `Literal['user', 'assistant', 'system', 'tool']` | The semantic role of the speaker. Must be one of `"user"`, `"assistant"`, `"system"`, or `"tool"`. | *required* |
| `text` | `str` | The primary text content of the message. | *required* |
| `tags` | `list[str] \| None` | A list of string labels for categorization. | `None` |
| `data` | `dict[str, Any] \| None` | Arbitrary JSON-serializable dictionary containing extra context (e.g., token counts, model names). | `None` |
| `severity` | `int` | An integer (1-3) indicating importance (1=Low, 2=Normal, 3=High). | `2` |
| `signal` | `float \| None` | Manual override for the signal strength (0.0 to 1.0). If None, it is calculated heuristically. | `None` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully persisted `Event`. |
record_chat_user(text, *, ...)
Record a user chat turn in a normalized format.
This method automatically handles timestamping, standardizes the role,
and dispatches the event to the configured persistence layer.
Examples:
Basic usage for a user message:
await context.memory().record_chat_user("Hello, how are you doing?")
Recording a user message with extra metadata:
await context.memory().record_chat_user(
"I need help with my account.",
tags=["support", "account"],
data={"issue": "login failure"}
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The primary text content of the user's message. | *required* |
| `tags` | `list[str] \| None` | A list of string labels for categorization. | `None` |
| `data` | `dict[str, Any] \| None` | Arbitrary JSON-serializable dictionary containing extra context (e.g., user metadata, session details). | `None` |
| `severity` | `int` | An integer (1-3) indicating importance (1=Low, 2=Normal, 3=High). Defaults to 2. | `2` |
| `signal` | `float \| None` | Manual override for the signal strength (0.0 to 1.0). If None, it is calculated heuristically. | `None` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully persisted `Event`. |
record_chat_assistant(text, *, ...)
Record an assistant chat turn in a normalized format.
This method automatically handles timestamping, standardizes the role,
and dispatches the event to the configured persistence layer.
Examples:
Basic usage for an assistant message:
await context.memory().record_chat_assistant("How can I assist you?")
Recording an assistant message with extra metadata:
await context.memory().record_chat_assistant(
"Here are the search results.",
tags=["search", "response"],
data={"query": "latest news", "results_count": 10}
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The primary text content of the assistant's message. | *required* |
| `tags` | `list[str] \| None` | A list of string labels for categorization. | `None` |
| `data` | `dict[str, Any] \| None` | Arbitrary JSON-serializable dictionary containing extra context (e.g., token counts, model names). | `None` |
| `severity` | `int` | An integer (1-3) indicating importance (1=Low, 2=Normal, 3=High). | `2` |
| `signal` | `float \| None` | Manual override for the signal strength (0.0 to 1.0). If None, it is calculated heuristically. | `None` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully persisted `Event`. |
record_chat_system(text, *, ...)
Record a system message in a normalized format.
This method automatically handles timestamping, standardizes the role,
and dispatches the event to the configured persistence layer.
Examples:
Basic usage for a system message:
await context.memory().record_chat_system("System initialized.")
Recording a system message with extra metadata:
await context.memory().record_chat_system(
"Configuration updated.",
tags=["config", "update"],
data={"version": "1.2.3"}
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The primary text content of the system message. | *required* |
| `tags` | `list[str] \| None` | A list of string labels for categorization. | `None` |
| `data` | `dict[str, Any] \| None` | Arbitrary JSON-serializable dictionary containing extra context (e.g., configuration details, system state). | `None` |
| `severity` | `int` | An integer (1-3) indicating importance (1=Low, 2=Normal, 3=High). Defaults to 1. | `1` |
| `signal` | `float \| None` | Manual override for the signal strength (0.0 to 1.0). If None, it is calculated heuristically. | `None` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The fully persisted `Event`. |
recent_chat(*, limit, roles)
Retrieve the most recent chat turns as a normalized list.
This method fetches the last `limit` chat events of kind `chat.turn` and returns them in a standardized format. Each item in the returned list contains the timestamp, role, text, and tags of the chat event. If `tags` is provided, the method over-fetches and filters client-side, because HotLog does not filter by tags.
Returned messages are chronological, with the most recent last.
Examples:
Fetch the last 10 chat turns:
recent_chats = await context.memory().recent_chat(limit=10)
Fetch the last 20 chat turns for specific roles:
recent_chats = await context.memory().recent_chat(
limit=20, roles=["user", "assistant"]
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | The maximum number of chat events to retrieve. Defaults to 50. | `50` |
| `roles` | `Sequence[str] \| None` | An optional sequence of roles to filter by (e.g., `["user", "assistant"]`). | `None` |
| `tags` | `Sequence[str] \| None` | An optional sequence of tags to filter by. If provided, the method over-fetches and filters results to include only those that have at least one of the specified tags. | `None` |
| `level` | `str \| None` | Optional scope level to filter events by (e.g., `"session"`, `"run"`, `"user"`, `"org"`). If provided, the search is constrained to events at the specified scope level. | `None` |
| `use_persistence` | `bool` | Whether to include events from the full persistence layer (True) or just the hotlog (False). Defaults to False. | `False` |
| `return_event` | `bool` | If True, return `Event` objects; otherwise normalized dictionaries. | `False` |
|
Returns:
| Type | Description |
|---|---|
| `list[Any]` | `Event` list when `return_event=True`; otherwise a list of dictionaries with the keys `"ts"` (event timestamp), `"role"` (speaker role, e.g., `"user"`, `"assistant"`), `"text"` (chat message content), and `"tags"` (tags associated with the event). |
3. Tool-related Memory¶
Tool memory is a convenient record/retrieval interface for typed tool results.
record_tool_result(tool, inputs, ...)
Record a tool execution result in normalized event form.
Examples:
Record a tool result:
evt = await context.memory().record_tool_result(
tool="planner",
inputs=[{"q": "status"}],
outputs=[{"ok": True}],
message="Planner completed.",
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tool` | `str` | Tool identifier. | *required* |
| `inputs` | `list[dict[str, Any]] \| None` | Optional list of input payload dictionaries. | `None` |
| `outputs` | `list[dict[str, Any]] \| None` | Optional list of output payload dictionaries. | `None` |
| `tags` | `list[str] \| None` | Optional tags for filtering/search. | `None` |
| `metrics` | `dict[str, float] \| None` | Optional numeric metrics. | `None` |
| `message` | `str \| None` | Optional human-readable summary text. | `None` |
| `severity` | `int` | Event severity. | `3` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | Persisted tool-result event. |
recent_tool_results(*, tool, limit, ...)
Retrieve recent tool-result events for a specific tool.
Examples:
Return Event objects:
rows = await context.memory().recent_tool_results(tool="planner", limit=5)
Return normalized dictionaries:
rows = await context.memory().recent_tool_results(
tool="planner",
limit=5,
return_event=False,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tool` | `str` | Tool name to filter by. | *required* |
| `limit` | `int` | Maximum number of results. | `10` |
| `return_event` | `bool` | If True, return `Event` objects; otherwise normalized dictionaries. | `True` |
|
Returns:
| Type | Description |
|---|---|
| `list[Any]` | `Event` rows or normalized dict payloads. |
4. State-related Memory¶
State memory supports saving and retrieving Python values across the memory timeline. State can be any JSON-serializable object.
record_state(key, value, ...)
Record a structured state snapshot event.
This method normalizes the value into a serializable payload and
appends a state event tagged with both state and state:{key}.
Examples:
Record a basic state snapshot:
await context.memory().record_state(
key="planner",
value={"step": "draft", "attempt": 1},
)
Record state with custom metadata:
await context.memory().record_state(
key="session_config",
value={"temperature": 0.2},
tags=["runtime"],
meta={"source": "bootstrap"},
severity=1,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | Logical state key (for example, `"planner"` or `"session_config"`). | *required* |
| `value` | `Any` | Value to snapshot; converted to a serializable representation. | *required* |
| `tags` | `list[str] \| None` | Optional additional tags appended to default state tags. | `None` |
| `meta` | `dict[str, Any] \| None` | Optional metadata stored in the event payload. | `None` |
| `severity` | `int` | Event severity to store with the snapshot. | `2` |
| `signal` | `float \| None` | Optional signal override for the event. | `None` |
| `kind` | `str` | Event kind. Defaults to `"state.snapshot"`. | `'state.snapshot'` |
| `stage` | `str \| None` | Optional event stage. | `None` |
|
Returns:
| Name | Type | Description |
|---|---|---|
| `Event` | `Event` | The persisted state snapshot event. |
latest_state(key, *)
Fetch the most recent state value for a key.
This method finds the newest matching state snapshot and returns only
its value field from the stored payload.
Examples:
Read latest planner state:
latest = await context.memory().latest_state("planner")
Read from persisted user-level history:
latest = await context.memory().latest_state(
"session_config",
level="user",
user_persistence=True,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | Logical state key to retrieve. | *required* |
| `tags` | `Sequence[str] \| None` | Optional additional required tags. | `None` |
| `level` | `ScopeLevel \| None` | Optional scope level filter. | `None` |
| `user_persistence` | `bool` | If True, query persistence; otherwise use hotlog. | `False` |
| `kind` | `str` | Event kind used for state snapshots. | `'state.snapshot'` |
|
Returns:
| Type | Description |
|---|---|
| `Any \| None` | The latest stored state value, or None if unavailable. |
state_history(key, *, tags, ...)
Fetch state snapshot history for a key.
This method returns full Event rows so callers can inspect state
values, metadata, timestamps, and tags together.
Examples:
Load the latest 20 snapshots:
events = await context.memory().state_history("planner", limit=20)
Load persisted user-level snapshots:
events = await context.memory().state_history(
"session_config",
level="user",
use_persistence=True,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | Logical state key to retrieve history for. | *required* |
| `tags` | `Sequence[str] \| None` | Optional additional required tags. | `None` |
| `limit` | `int` | Maximum number of events to return. | `50` |
| `level` | `ScopeLevel \| None` | Optional scope level filter. | `None` |
| `kind` | `str` | Event kind used for state snapshots. | `'state.snapshot'` |
| `use_persistence` | `bool` | If True, query persistence; otherwise use hotlog. | `False` |
|
Returns:
| Type | Description |
|---|---|
| `list[Event]` | Matching state snapshot events in chronological order. |
search_state(query, *, key, ...)
Search indexed state snapshot events.
This method applies state-specific filters and delegates search to the scoped index backend. If no backend exists, it returns an empty list.
Examples:
Search all state snapshots:
results = await context.memory().search_state(query="temperature", top_k=5)
Search a specific state key in a time window:
results = await context.memory().search_state(
query="planner",
key="session_config",
time_window="7d",
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `query` | `str` | Free-text query string. | *required* |
| `key` | `str \| None` | Optional logical state key filter. | `None` |
| `tags` | `Sequence[str] \| None` | Optional additional required tags. | `None` |
| `top_k` | `int` | Maximum number of scored results to return. | `10` |
| `time_window` | `str \| None` | Optional relative time-window expression. | `None` |
| `created_at_min` | `float \| None` | Optional lower timestamp bound. | `None` |
| `created_at_max` | `float \| None` | Optional upper timestamp bound. | `None` |
|
Returns:
| Type | Description |
|---|---|
| `list[EventSearchResult]` | Scored search matches with resolved events. |
5. Memory Distillation¶
distill_long_term(scope_id, *, summary_tag, ...)
Distill long-term memory summaries based on specified criteria.
This method generates a long-term memory summary by either using a
Long-Term Summarizer or an LLM-based Long-Term Summarizer, depending
on the use_llm flag. The summaries are filtered and configured
based on the provided arguments.
Examples:
Using the default summarizer:
result = await context.memory().distill_long_term(
include_kinds=["note", "event"],
max_events=100
)
result = await context.memory().distill_long_term(
use_llm=True,
summary_tag="custom_summary",
min_signal=0.5
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `summary_tag` | `str` | A tag to categorize the generated summary. Defaults to `"session"`. | `'session'` |
| `summary_kind` | `str` | The kind of summary to generate. Defaults to `"long_term_summary"`. | `'long_term_summary'` |
| `include_kinds` | `list[str] \| None` | A list of memory kinds to include in the summary. If None, all kinds are included. | `None` |
| `include_tags` | `list[str] \| None` | A list of tags to filter the memories. If None, no tag filtering is applied. | `None` |
| `max_events` | `int` | The maximum number of events to include in the summary. Defaults to 200. | `200` |
| `min_signal` | `float \| None` | The minimum signal threshold for filtering events. If None, the default signal threshold is used. | `None` |
| `use_llm` | `bool` | Whether to use an LLM-based summarizer. Defaults to False. | `False` |
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary containing the generated summary. |
Example return value
{
"uri": "file://mem/scope_123/summaries/long_term/2023-10-01T12:00:00Z.json",
"summary_kind": "long_term_summary",
"summary_tag": "session",
"time_window": {"start": "2023-09-01", "end": "2023-09-30"},
"num_events": 150,
"included_kinds": ["note", "event"],
"included_tags": ["important", "meeting"],
}
distill_meta_summary(scope_id, *, summary_kind, ...)
Generate a meta-summary by distilling existing summary events.
This method creates a meta-summary by processing existing long-term summaries. It uses an LLM-based summarizer to generate a higher-level summary based on the provided arguments.
Examples:
Using the default configuration:
result = await context.memory().distill_meta_summary(
source_kind="long_term_summary",
source_tag="session",
)
Customizing the summary kind and tag:
result = await context.memory().distill_meta_summary(
summary_kind="meta_summary",
summary_tag="weekly",
max_summaries=10,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `source_kind` | `str` | The kind of source summaries to process. Defaults to `"long_term_summary"`. | `'long_term_summary'` |
| `source_tag` | `str` | A tag to filter the source summaries. Defaults to `"session"`. | `'session'` |
| `summary_kind` | `str` | The kind of meta-summary to generate. Defaults to `"meta_summary"`. | `'meta_summary'` |
| `summary_tag` | `str` | A tag to categorize the generated meta-summary. Defaults to `"meta"`. | `'meta'` |
| `max_summaries` | `int` | The maximum number of source summaries to process. Defaults to 20. | `20` |
| `min_signal` | `float \| None` | The minimum signal threshold for filtering summaries. If None, the default signal threshold is used. | `None` |
| `use_llm` | `bool` | Whether to use an LLM-based summarizer. Defaults to True. | `True` |
|
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary containing the generated meta-summary. |
Example return value
{
"uri": "file://mem/scope_123/summaries/meta/2023-10-01T12:00:00Z.json",
"summary_kind": "meta_summary",
"summary_tag": "meta",
"time_window": {"start": "2023-09-01", "end": "2023-09-30"},
"num_source_summaries": 15,
}
load_recent_summaries(scope_id, *, summary_tag, limit)
Load the most recent JSON summaries for the specified scope and tag.
This method retrieves up to limit summaries from the DocStore
based on the provided scope_id and summary_tag. Summaries are
identified using the following pattern:
mem/{scope_id}/summaries/{summary_tag}/{ts}.
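That key pattern can be written as a simple f-string. The helper below is hypothetical and mirrors only the documented layout:

```python
def summary_key(scope_id: str, summary_tag: str, ts: str) -> str:
    """Build the DocStore key for a summary, per the documented pattern."""
    return f"mem/{scope_id}/summaries/{summary_tag}/{ts}"


key = summary_key("scope_123", "session", "2023-10-01T12:00:00Z")
```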
Examples:
Load the last three session summaries:
summaries = await context.memory().load_recent_summaries(
summary_tag="session",
limit=3
)
Load the last two project summaries:
summaries = await context.memory().load_recent_summaries(
summary_tag="project",
limit=2
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `summary_tag` | `str` | The tag used to filter summaries (e.g., `"session"`, `"project"`). Defaults to `"session"`. | `'session'` |
| `limit` | `int` | The maximum number of summaries to return. Defaults to 3. | `3` |
|
Returns:
| Type | Description |
|---|---|
| `list[dict[str, Any]]` | A list of summary dictionaries, ordered from oldest to newest. |
load_last_summary(scope_id, *, summary_tag)
Load the most recent JSON summary for the specified memory scope and tag.
This method retrieves the latest summary document from the DocStore
based on the provided scope_id and summary_tag. Summaries are
identified using the following pattern:
mem/{scope_id}/summaries/{summary_tag}/{ts}.
Examples:
Load the last session summary:
summary = await context.memory().load_last_summary(scope_id="user123", summary_tag="session")
Load the last project summary:
summary = await context.memory().load_last_summary(scope_id="project456", summary_tag="project")
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scope_id` | `str \| None` | Optional scope identifier. If None, uses the facade scope. | `None` |
| `summary_tag` | `str` | The tag used to filter summaries (e.g., `"session"`, `"project"`). Defaults to `"session"`. | `'session'` |
| `summary_kind` | `str` | Summary event kind to load. | `'long_term_summary'` |
| `level` | `ScopeLevel \| None` | Scope level used for persisted retrieval. | `'scope'` |
|
Returns:
| Type | Description |
|---|---|
| `dict[str, Any] \| None` | The most recent summary as a dictionary, or None if no summary is found. |
soft_hydrate_last_summary(scope_id, *, summary_tag, summary_kind)
Load the most recent summary for the specified scope and tag, and log a hydrate event.
This method retrieves the latest summary document for the configured
memory scope and summary_tag. If a summary is found, it logs a hydrate
event into the current run's hotlog and persistence layers.
Examples:
Hydrate the last session summary:
summary = await context.memory().soft_hydrate_last_summary(
summary_tag="session"
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `summary_tag` | `str` | The tag used to filter summaries (e.g., `"session"`, `"project"`). Defaults to `"session"`. | `'session'` |
| `summary_kind` | `str` | The kind of summary (e.g., `"long_term_summary"`, `"project_summary"`). Defaults to `"long_term_summary"`. | `'long_term_summary'` |
| `level` | `ScopeLevel \| None` | Scope level used to locate the latest summary. | `'scope'` |
|
Returns:
| Type | Description |
|---|---|
| `dict[str, Any] \| None` | The loaded summary dictionary if found, otherwise None. |
Side Effects
Appends a hydrate event to HotLog and Persistence for the current timeline.
6. Utilities¶
chat_history_for_llm(*, limit, include_system_summary, ...)
Build a ready-to-send OpenAI-style chat message list.
This method constructs a dictionary containing a summary of previous context and a list of chat messages formatted for use with OpenAI-style chat models. It includes options to limit the number of messages and incorporate long-term summaries.
Examples:
Basic usage with default parameters:
history = await context.memory().chat_history_for_llm()
Including a system summary and limiting messages:
history = await context.memory().chat_history_for_llm(
limit=10, include_system_summary=True
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | The maximum number of recent chat messages to include. Defaults to 20. | `20` |
| `include_system_summary` | `bool` | Whether to include a system summary of previous context. Defaults to True. | `True` |
| `summary_tag` | `str` | The tag used to filter summaries. Defaults to `"session"`. | `'session'` |
| `summary_scope_id` | `str \| None` | An optional scope ID for filtering summaries. Defaults to None. | `None` |
| `max_summaries` | `int` | The maximum number of summaries to load. Defaults to 3. | `3` |
|
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with two keys: `"summary"` (a combined long-term summary, or an empty string) and `"messages"` (a list of chat messages, each a dictionary with `"role"` and `"content"` keys). |
Example of returned structure:
{
"summary": "Summary of previous context...",
"messages": [
{"role": "system", "content": "Summary of previous context..."},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hi there! How can I help?"}
]
}
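Conceptually, the summary is folded into the message list as a leading system message. A sketch of that assembly (illustrative only, not the facade's code):

```python
from typing import Any


def assemble_history(summary: str,
                     turns: list[dict[str, str]]) -> dict[str, Any]:
    """Build the {"summary", "messages"} structure shown above."""
    messages: list[dict[str, str]] = []
    if summary:
        # Previous context rides along as a single system message.
        messages.append({"role": "system", "content": summary})
    messages.extend(turns)
    return {"summary": summary, "messages": messages}


history = assemble_history(
    "Summary of previous context...",
    [{"role": "user", "content": "Hello!"},
     {"role": "assistant", "content": "Hi there! How can I help?"}],
)
```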
build_prompt_segments(*, recent_chat_limit, include_long_term, ...)
Assemble memory context for prompts, including long-term summaries, recent chat history, and recent tool usage.
Examples:
Build prompt segments with default settings:
segments = await context.memory().build_prompt_segments()
Include recent tool usage and filter by a specific tool:
segments = await context.memory().build_prompt_segments(
include_recent_tools=True,
tool="search",
tool_limit=5
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| recent_chat_limit | int | The maximum number of recent chat messages to include. | 12 |
| include_long_term | bool | Whether to include long-term memory summaries. | True |
| summary_tag | str | The tag used to filter long-term summaries. | 'session' |
| max_summaries | int | The maximum number of long-term summaries to include. | 3 |
| include_recent_tools | bool | Whether to include recent tool usage. | False |
| tool | str \| None | The specific tool to filter recent tool usage by. | None |
| tool_limit | int | The maximum number of recent tool events to include. | 10 |
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | A dictionary of prompt segments covering long-term summaries, recent chat history, and (when requested) recent tool usage. |
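How these parameters could interact can be sketched as follows. This is a hedged sketch under assumptions, not the real method: the function name `assemble_segments` and the keys `long_term`, `recent_chat`, and `recent_tools` are illustrative, chosen to mirror the segment kinds the method is documented to cover.

```python
def assemble_segments(summaries, chat_events, tool_events, *,
                      recent_chat_limit=12, include_long_term=True,
                      include_recent_tools=False, tool=None, tool_limit=10):
    # Hypothetical sketch of build_prompt_segments(); key names assumed.
    segments = {
        # Combined long-term summaries, or empty when disabled.
        "long_term": "\n".join(summaries) if include_long_term else "",
        # Only the most recent `recent_chat_limit` chat messages.
        "recent_chat": chat_events[-recent_chat_limit:],
    }
    if include_recent_tools:
        # Optionally filter tool events to a single tool topic.
        tools = [e for e in tool_events
                 if tool is None or e.get("tool") == tool]
        segments["recent_tools"] = tools[-tool_limit:]
    return segments

segments = assemble_segments(
    ["Earlier session summary."],
    [{"role": "user", "content": "Hello!"}],
    [{"tool": "search", "text": "query ran"}, {"tool": "math"}],
    include_recent_tools=True,
    tool="search",
)
```

Note how `tool=None` keeps all tool events while a specific name narrows the `recent_tools` segment, matching the parameter table above.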
7. Introspection¶
scope_id()
Return the effective memory scope ID for this facade.
This value usually matches the scope identifier used to derive the
timeline (for example session:..., user:..., or run:...).
Examples:
Read the scope ID:
scope_id = context.memory().scope_id()
Handle missing scope IDs defensively:
scope_id = context.memory().scope_id() or "global"
Returns:
| Type | Description |
|---|---|
| str \| None | The effective memory scope ID, or None if unavailable. |
memory_level()
Return the logical memory level requested for this facade.
The value is read from the attached scope when available.
Examples:
Read the configured memory level:
level = context.memory().memory_level()
Fallback to "scope" when unset:
level = context.memory().memory_level() or "scope"
Returns:
| Type | Description |
|---|---|
| str \| None | The memory level read from the attached scope, or None if unavailable. |
bucket_level()
Infer the bucket level from the current memory scope ID.
This inspects the prefix of scope_id() values such as
session:... or user:....
Examples:
Infer the bucket level:
bucket = context.memory().bucket_level()
Handle unknown/global buckets:
bucket = context.memory().bucket_level() or "unknown"
Returns:
| Type | Description |
|---|---|
| str \| None | The parsed bucket prefix (for example "session" or "user"), or None if it cannot be inferred. |
timeline()
Return the timeline ID used by this memory facade.
This value is the primary partition key used when appending and reading events from hotlog and persistence.
Examples:
Read the timeline ID:
timeline_id = context.memory().timeline()
Fallback to a placeholder:
timeline_id = context.memory().timeline() or "<none>"
Returns:
| Type | Description |
|---|---|
| str \| None | The timeline identifier, or None if not initialized. |
scope_info()
Return a structured snapshot of current memory scope metadata.
The returned dictionary is intended for diagnostics and observability.
Examples:
Retrieve structured scope information:
info = context.memory().scope_info()
Access timeline and level fields:
info = context.memory().scope_info()
timeline = info.get("timeline_id")
level = info.get("memory_level")
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | Scope and runtime identifiers: timeline, memory scope, level, and available scope attributes such as run/session/user/org IDs. |
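Since the snapshot is meant for diagnostics, consumers should read it defensively. A short illustrative sketch: the keys `timeline_id` and `memory_level` follow the examples above, and the fallback values mirror the earlier introspection examples; the helper name is hypothetical.

```python
def summarize_scope(info: dict) -> str:
    # Defensively read the documented diagnostic fields from a
    # scope_info()-style dictionary, with the fallbacks used earlier.
    timeline = info.get("timeline_id") or "<none>"
    level = info.get("memory_level") or "scope"
    return f"timeline={timeline} level={level}"
```

Missing or None-valued fields degrade to placeholders rather than raising, which suits observability code paths.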
debug_print_scope(prefix)
Print formatted scope diagnostics to stdout.
This is a convenience helper for scripts and tests where a quick text view of memory scope fields is useful.
Examples:
Print with the default prefix:
context.memory().debug_print_scope()
Print with a custom prefix:
context.memory().debug_print_scope(prefix="[DEBUG-MEM]")
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prefix | str | Prefix string prepended to each printed line. | '[MEM]' |
Returns:
| Name | Type | Description |
|---|---|---|
| None | None | This method prints diagnostics and does not return data. |
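The printing behaviour can be sketched as a one-field-per-line dump with a configurable prefix. This is an assumed output format, not the facade's actual one; the sketch also returns the printed lines so the format is easy to verify in tests, whereas the real method returns None.

```python
def debug_print_scope(info: dict, prefix: str = "[MEM]") -> list[str]:
    # Hypothetical sketch: print each scope field on its own line,
    # prepending the prefix (assumed "key=value" layout).
    lines = [f"{prefix} {key}={value}" for key, value in info.items()]
    for line in lines:
        print(line)
    return lines

debug_print_scope({"timeline_id": "t1", "memory_level": "user"},
                  prefix="[DEBUG-MEM]")
```

A quick text dump like this is typically enough for scripts and tests; structured consumers should prefer scope_info() instead.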