Stop Guessing Memory: How to Automate LangChain Memory Testing and Catch 80% of Multi-Turn Failures

Originally published by Dev.to

2 a.m. The customer Slack channel explodes — the support bot just asked for the same order number three times in a row. A frustrated user screams, “Do you have amnesia?” After digging through the code and the prompt, everything looks fine. Only then do we discover that ConversationBufferMemory silently dropped context in one of the turns. The LLM had no idea what was said earlier. Right then I thought: if we could catch this memory loss automatically in CI, we’d never ship a black eye like this.

Breaking down the problem

In LLM-powered apps, memory isn’t a “nice-to-have” anymore — it’s the core experience. LangChain gives us a buffet of memory implementations: ConversationBufferMemory, ConversationSummaryMemory, VectorStoreRetrieverMemory, and more. But almost no project actually tests memory accuracy seriously.

The root cause is brutally simple: memory testing is too manual. Most teams spin up a chain locally, poke it with Postman or the CLI for a few turns, visually confirm “yeah, it remembered the name I just said,” and then merge. That approach has three fatal flaws:

  1. Minimal path coverage – manual testing only walks the happy path. Branch conditions (hitting the token limit, summary memory trigger timing, interleaving messages) are left to guesswork.
  2. Zero regression protection – next week you tweak the prompt or switch the model, and the memory logic might break, but nobody will manually re‑play every historical conversation.
  3. Fuzzy verification – “looks right” is not the same as is right. Human judgement on whether memory is complete or hallucination-free has huge error margins.

Testing a stateful, long‑context agent with this hand‑crafted approach is like walking across a highway blindfolded. What we need is an automated assertion‑based memory verification scheme: given a multi‑turn dialog script, precisely verify the content, order, and key facts stored inside the memory object — and run it in CI.

Solution design

The core idea is simple: turn the LLM into a deterministic “teleprompter,” then treat the memory object as the system under test and use pytest for assertions.

Why not let the LLM judge memory itself? (e.g., call the model again: “Please check if the conversation history contains X”). Because that would make the “judge” the same hallucination machine — not reliable. What we want are pure engineering assertions: string containment, list length, message type — deterministic checks.
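To make “deterministic checks” concrete, here is a minimal, library-free sketch. The `(role, content)` tuples are a hypothetical stand-in for LangChain message objects; the point is that every check below either passes or fails identically on every run:

```python
# Hypothetical stand-in for a memory's message list: (role, content) tuples.
history = [
    ("human", "My name is Alice"),
    ("ai", "Nice to meet you, Alice!"),
]

# String containment: a key fact must appear somewhere in memory.
assert any("Alice" in content for _, content in history)

# List length: one turn should produce exactly one human and one AI message.
assert len(history) == 2

# Message type/order: roles must alternate human, then ai.
assert [role for role, _ in history] == ["human", "ai"]
```

No LLM is consulted at any point; if the memory layer drops or reorders a message, one of these assertions fails with a readable diff.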

Tooling choices:

  • pytest: the most universal Python test framework; its fixture mechanism fits perfectly for managing memory state.
  • LangChain’s BaseMemory: we directly interact with memory.chat_memory.messages and memory.load_memory_variables(), bypassing LLM uncertainty.
  • Custom FakeLLM: inherit from LLM, return fixed text in a predetermined sequence, with zero external API dependency. Tests complete in milliseconds and are 100% repeatable.

We avoid calling a real ChatOpenAI because network jitter and model randomness directly undermine assertion stability. We also skip LangChain’s built-in FakeListLLM: we want precise control over, and visibility into, every reply, so a custom HardcodedLLM gives us the most flexibility.

Core implementation

Let’s build automated memory testing step by step. All code is runnable (requires pip install langchain langchain-core pytest; note that LangChain has deprecated these classic memory classes in newer releases, so pin the version you test against).

1. Build a “teleprompter” LLM

This snippet solves the “LLM response is uncontrollable” problem — we make each invocation return a preset sequence, like playing a cassette tape.

from typing import List, Optional
from langchain_core.language_models.llms import LLM
from langchain_core.callbacks import CallbackManagerForLLMRun

class HardcodedLLM(LLM):
    """An LLM that replays a fixed sequence of responses, for automated testing."""
    responses: List[str]
    call_count: int = 0

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        # If calls exceed the preset responses, return a default instead of raising
        if self.call_count >= len(self.responses):
            return "I don't know"
        response = self.responses[self.call_count]
        self.call_count += 1
        return response

    @property
    def _llm_type(self) -> str:
        return "hardcoded"
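The replay-and-fallback sequencing is the part most worth pinning down. Here is a library-free model of the same tape logic (a stdlib stand-in for HardcodedLLM, not LangChain code), which makes the fallback path easy to sanity-check on its own:

```python
class Tape:
    """Replays a fixed list of responses, then falls back to a default.

    A library-free model of HardcodedLLM's sequencing logic.
    """

    def __init__(self, responses, fallback="I don't know"):
        self.responses = list(responses)
        self.fallback = fallback
        self.call_count = 0

    def next(self) -> str:
        # Past the end of the tape: return the fallback without advancing
        if self.call_count >= len(self.responses):
            return self.fallback
        response = self.responses[self.call_count]
        self.call_count += 1
        return response

tape = Tape(["Hello", "Sure"])
assert [tape.next(), tape.next(), tape.next()] == ["Hello", "Sure", "I don't know"]
```

Note that the counter only advances while real responses remain, so exhausting the tape is idempotent: every extra call returns the same fallback.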

2. Write reusable test fixtures

This fixture eliminates the “every test assembles chain and memory from scratch” pain — we extract common initialization.

import pytest
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

@pytest.fixture
def memory() -> ConversationBufferMemory:
    """Return a clean memory object; each test gets its own instance."""
    return ConversationBufferMemory(return_messages=True)

@pytest.fixture
def chain(memory: ConversationBufferMemory, request) -> ConversationChain:
    """
    Build a chain whose LLM replays a per-test response sequence.
    Test functions pass preset responses via
    @pytest.mark.parametrize("chain", [...], indirect=True).
    """
    # Use responses passed by the test function, or fall back to defaults
    responses = getattr(request, "param", ["Hello", "Sure", "Done"])
    llm = HardcodedLLM(responses=responses)
    return ConversationChain(llm=llm, memory=memory)

3. First test: Single‑turn memory must exist

Here we verify the simplest scenario — after one utterance, is it immediately stored in memory?

def test_single_turn_memory_exists(chain, memory):
    chain.invoke("My name is Alice")
    messages = memory.chat_memory.messages
    assert any("Alice" in msg.content for msg in messages)

That’s it. No LLM judgement, no flakiness — just a straight string check. Run pytest and it passes in under a second.

4. Multi‑turn memory retention test

The real horror show is multi‑turn memory loss. Let’s simulate a three‑turn conversation where the bot asks for the order number, the user provides it, and later the user asks to cancel. The memory must retain the order number across turns.

@pytest.mark.parametrize("chain", [["What is your order number?",
                                    "Your order #12345 has been found.",
                                    "Sure, I'll cancel order #12345."]], indirect=True)
def test_multi_turn_memory_retains_order_number(chain, memory):
    chain.invoke("I want to cancel my order")
    chain.invoke("12345")
    chain.invoke("Please proceed with cancellation")
    messages = memory.chat_memory.messages
    # Verify the order number appears in both a human message and an AI reply
    assert any("12345" in m.content for m in messages if m.type == "human")
    assert any("12345" in m.content for m in messages if m.type == "ai")
    # Verify the context wasn't truncated (we should have 6 messages: 3 human, 3 AI)
    assert len(messages) == 6

This test catches exactly the 2 a.m. bug: if ConversationBufferMemory drops messages due to token limits or misconfiguration, the assertion on message count or order number fails immediately.
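Content and count are covered above; ordering is the third invariant from our scheme. A library-free helper sketch, using role strings as a stand-in for `msg.type` on LangChain messages (in a real test you would build the list as `[msg.type for msg in memory.chat_memory.messages]`):

```python
def assert_alternating(roles):
    """Check that an even-length conversation strictly alternates human, ai."""
    expected = ["human", "ai"] * (len(roles) // 2)
    assert roles == expected, f"unexpected ordering: {roles}"

# Six messages from three turns must alternate strictly.
assert_alternating(["human", "ai", "human", "ai", "human", "ai"])
```

A duplicated or dropped message breaks the alternation immediately, which catches a class of bug that pure containment checks miss.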

5. Testing summary memory trigger logic

Summary memory is trickier: it compresses history. ConversationSummaryMemory re-summarizes the conversation after every turn, so we need to verify that key facts survive each round of compression. (If you want summarization triggered only at a token threshold, that is ConversationSummaryBufferMemory with max_token_limit, a different class.)

from langchain.memory import ConversationSummaryMemory

@pytest.fixture
def summary_memory() -> ConversationSummaryMemory:
    # ConversationSummaryMemory re-summarizes after every turn; preload one
    # deterministic summary per expected turn. The last one becomes the buffer
    # that the key-fact assertions below run against.
    summarizer = HardcodedLLM(responses=[
        "Bob from Berlin introduced himself.",
        "Bob from Berlin is looking for a hotel.",
        "Bob from Berlin asked the assistant to recall his name.",
    ])
    # Default return_messages=False makes load_memory_variables() return a string
    return ConversationSummaryMemory(llm=summarizer)

def test_summary_memory_preserves_key_facts(chain, summary_memory):
    # Override the default chain to use our summary memory
    chain.memory = summary_memory
    chain.invoke("My name is Bob and I'm from Berlin.")
    chain.invoke("I need a hotel.")
    chain.invoke("What's my name?")
    # The summary should have captured "Bob" and "Berlin"
    memory_variables = summary_memory.load_memory_variables({})
    history = memory_variables.get("history", "")
    assert "Bob" in history
    assert "Berlin" in history

By controlling the summarization LLM’s output with our HardcodedLLM, we make the test deterministic. No matter how many times it runs, the summary text is always the same, so assertions are rock solid.

Why this matters in CI

Put these tests into your CI pipeline and you get a safety net that catches regression instantly. When you:

  • bump the LangChain version
  • swap the underlying model
  • modify the memory configuration (e.g., k for buffer window)
  • change the prompt template that influences token usage

…any memory‑breaking change fails the build before it reaches a human. The confidence gain is enormous — especially in production agents where context loss directly damages user trust.
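For illustration, a minimal GitHub Actions job that runs the memory tests on every push. The workflow path, test file path, and pinned versions are assumptions for the sketch, not from the original setup:

```yaml
# .github/workflows/memory-tests.yml (hypothetical path)
name: memory-tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Pin versions: memory regressions often ride in on dependency bumps
      - run: pip install "langchain==0.2.*" langchain-core pytest
      - run: pytest tests/test_memory.py -q
```

Because the tests use HardcodedLLM, this job needs no API keys and finishes in seconds.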

Moving beyond the basics

Once you have the deterministic harness, you can extend it:

  • Entity extraction memory: verify that key entities are persisted accurately.
  • Token‑limit boundary tests: push conversations right to the limit and confirm graceful handling (no silent truncation).
  • Mixed memory strategies: combine buffer and summary memory and assert that both layers retain critical information.
  • Property‑based testing: use Hypothesis to generate random conversation flows and check invariants (e.g., “all names mentioned in the last N turns are still retrievable”).
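Hypothesis is the natural tool for that last bullet, but the invariant idea can be sketched with nothing beyond the standard library. The in-memory `buffer` list and `run_conversation` helper below are hypothetical stand-ins for a real chain and memory backend; a fixed seed keeps the randomized flows deterministic:

```python
import random

def run_conversation(memory_buffer, names):
    """Append one human/ai message pair per name, mimicking chain turns."""
    for name in names:
        memory_buffer.append(("human", f"My name is {name}"))
        memory_buffer.append(("ai", f"Hello, {name}!"))

rng = random.Random(42)  # fixed seed: same "random" flows on every run
for _ in range(20):
    names = [rng.choice(["Alice", "Bob", "Carol"]) for _ in range(rng.randint(1, 5))]
    buffer = []
    run_conversation(buffer, names)
    # Invariant: every name mentioned is still retrievable from memory
    for name in names:
        assert any(name in content for _, content in buffer)
```

Swapping the stand-in for a real memory object turns this loop into a genuine property test: generate flows, then assert the invariant against `memory.chat_memory.messages`.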

Manual “click‑and‑stare” testing can’t touch that. Automated memory assertions turn a major source of production issues into a solved problem.

The next time your support bot loses its mind at 2 a.m., you’ll already have a failing test that tells you exactly where the memory broke.
