Code Summary: Implementing Long-Term Memory in AI Applications
Long-Term Memory Implementation
Import Modules for Long-Term Memory Management
This imports the modules needed for long-term memory: MongoDBStore and create_vector_index_config from LangGraph for storage and vector-index configuration, VoyageAIEmbeddings for generating semantic embeddings, the tool decorator for turning functions into agent tools, get_config for accessing runtime configuration (including the user ID), and MongoClient from PyMongo for connecting to the database.
from pymongo import MongoClient
from langgraph.store.mongodb import MongoDBStore, create_vector_index_config
from langchain_voyageai import VoyageAIEmbeddings
from langchain_core.tools import tool
from langgraph.utils.config import get_config
Embedding and Index Configuration
This configures the semantic search infrastructure. VoyageAIEmbeddings is initialized to produce 1024-dimensional vectors, and create_vector_index_config() sets up a vector search index with the following arguments: embed (the embedding model), dims=1024 (vector dimensions), relevance_score_fn="dotProduct" (similarity metric), and fields=["content"] (the field to index). This enables semantic retrieval of memories based on meaning rather than exact text matching.
embedding_model = VoyageAIEmbeddings(model="voyage-4")
index_config = create_vector_index_config(
    embed=embedding_model,
    dims=1024,
    relevance_score_fn="dotProduct",
    fields=["content"],
)
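The dims value must match the embedding model's actual output dimension, or inserts and searches against the index will fail or mis-score. A minimal guard can catch a mismatch early; check_dims below is a hypothetical helper, not part of LangGraph:

```python
def check_dims(sample_vector, expected_dims=1024):
    # Guard against a model/index mismatch: the index's dims must equal
    # the length of the vectors the embedding model produces.
    if len(sample_vector) != expected_dims:
        raise ValueError(
            f"embedding has {len(sample_vector)} dims, index expects {expected_dims}"
        )
    return True
```

In practice you would call this once on the result of embedding_model.embed_query("test") before creating the index; that call needs a Voyage API key, so it is omitted here.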
Create a MongoDB Store
This establishes the MongoDB storage for long-term memory by connecting to MongoDB with MongoClient, specifying a database ("agent_memory_simple") and collection ("memories"), then creating a MongoDBStore. The store automatically handles embedding generation and storage, enabling persistent facts to be saved and retrieved across all conversation threads using vector similarity search.
client = MongoClient(mongodb_uri)
db = client["agent_memory_simple"]
collection = db["memories"]
store = MongoDBStore(collection=collection, index_config=index_config)
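Namespaces in the store are plain tuples, so per-user isolation is just a matter of building the same tuple consistently everywhere. A hypothetical helper (not part of the store API) makes the convention explicit; both memory tools in this lesson build this tuple inline:

```python
def user_namespace(user_id: str) -> tuple:
    # Each user's memories live under their own namespace; a search in
    # one namespace never surfaces another user's entries.
    return ("user", user_id, "memories")
```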
Create a Save Memory Tool
This creates a save_memory tool using the @tool decorator that accepts content as an argument. It retrieves the user_id from runtime configuration via get_config(), then calls store.put() with the following arguments: namespace=("user", user_id, "memories") (organizing memories by user for isolation), key=f"memory_{hash(content)}" (unique identifier), and value={"content": content} (actual memory content). The MongoDBStore automatically generates embeddings and stores everything, returning a confirmation message.
@tool
def save_memory(content: str) -> str:
    """Save important information to memory for the current user."""
    config = get_config()
    user_id = config.get("configurable", {}).get("user_id", "default_user")
    store.put(
        namespace=("user", user_id, "memories"),
        key=f"memory_{hash(content)}",
        value={"content": content},
    )
    return f"Memory saved: {content}"
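One caveat with the key above: Python's built-in hash() is salted per process for strings (via PYTHONHASHSEED), so the same fact saved in two different runs gets two different keys, creating near-duplicate entries. A deterministic digest avoids this; here is a sketch using the standard library, where memory_key is a hypothetical replacement helper:

```python
import hashlib

def memory_key(content: str) -> str:
    # SHA-256 is stable across processes, unlike the salted built-in hash(),
    # so re-saving identical content overwrites the existing entry instead
    # of accumulating duplicates.
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return f"memory_{digest[:16]}"
```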
Create a Retrieve Memories Tool
This creates a retrieve_memories tool that accepts a search string query as an argument. It retrieves the user_id from configuration, constructs the namespace tuple, then calls store.search() with the following arguments: namespace, query (converted to vector for semantic search), and limit=5 (returning top 5 similar memories). The function formats results as a bulleted list if found, otherwise returns "No relevant memories found," enabling the agent to personalize responses based on past interactions.
@tool
def retrieve_memories(query: str) -> str:
    """Retrieve relevant memories based on a query for the current user."""
    config = get_config()
    user_id = config.get("configurable", {}).get("user_id", "default_user")
    namespace = ("user", user_id, "memories")
    results = store.search(namespace, query=query, limit=5)
    if results:
        memories = [result.value["content"] for result in results]
        return "Retrieved memories:\n" + "\n".join(f"- {mem}" for mem in memories)
    return "No relevant memories found."
Create an AI Agent that Utilizes the Memory Tools
This creates an enhanced agent with both short-term and long-term memory capabilities. The system_prompt instructs the agent to:
- Check memories using retrieve_memories,
- Personalize responses based on findings, and
- Save new information using save_memory.
The create_agent() call includes the following arguments: model, system_prompt, tools=[save_memory, retrieve_memories] (memory tools for autonomous use), and checkpointer (for conversation history). This combination enables the agent to maintain conversation flow within threads via the checkpointer while persisting user-specific facts across all threads via the memory tools.
system_prompt = """You are a helpful AI assistant with memory capabilities.
When a user sends you a message:
1. First, check your memory about them using retrieve_memories
2. Use what you find to personalize your response
3. If they share new information, save it using save_memory
Your memory persists across conversations!"""
agent = create_agent(
    model,
    system_prompt=system_prompt,
    tools=[save_memory, retrieve_memories],
    checkpointer=checkpointer,
)
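When invoking the agent, both identifiers travel in the configurable dict: the checkpointer keys short-term memory off thread_id, while get_config() inside the tools reads user_id for long-term memory. A sketch with illustrative IDs (the invoke call is commented out because it requires a running model and database):

```python
# Hypothetical IDs: thread_id scopes the conversation (checkpointer),
# user_id scopes the persistent memories (store).
config = {"configurable": {"thread_id": "thread-1", "user_id": "user-42"}}

# response = agent.invoke(
#     {"messages": [{"role": "user", "content": "I enjoy hiking on weekends."}]},
#     config=config,
# )
```

Starting a new conversation means changing thread_id while keeping user_id the same, so saved memories carry over.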