You’ll use this when…
- A project needs domain-specific facts (order numbers, customer info) without storing casual chatter.
- You already have a clear schema for memories and want the LLM to follow it.
- You must prevent irrelevant details from entering long-term storage.
Feature anatomy
- Prompt instructions: Describe which entities or phrases to keep. Specific guidance keeps the extractor focused.
- Few-shot examples: Show positive and negative cases so the model copies the right format.
- Structured output: Responses return JSON with a `facts` array that Mem0 converts into individual memories.
- LLM configuration: `custom_fact_extraction_prompt` (Python) or `customPrompt` (TypeScript) lives alongside your model settings.
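The structured-output contract above can be sketched in a few lines. The raw response string below is a hypothetical example of what the extraction LLM returns; the `facts` key is the only field Mem0 reads.

```python
import json

# Hypothetical raw LLM response: a JSON object whose only key is
# "facts". Mem0 splits the array into individual memory entries.
raw_response = '{"facts": ["Order #1234 was delayed", "Customer prefers email contact"]}'

facts = json.loads(raw_response)["facts"]
print(facts)  # each string becomes its own memory
```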
Prompt blueprint
- State the allowed fact types.
- Include short examples that mirror production messages.
- Show both empty (`[]`) and populated outputs.
- Remind the model to return JSON with a `facts` key only.
Configure it
Write the custom prompt
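A sketch of such a prompt for an order-support scenario; the categories and few-shot examples are illustrative, not part of Mem0's API.

```python
# Illustrative extraction prompt: states the allowed fact types, shows
# a negative and a positive few-shot example, and pins the output format.
custom_prompt = """
You extract facts for an order-support assistant.
Only capture: order numbers, delivery problems, and stated customer preferences.
Ignore greetings, small talk, and anything outside those categories.

Input: Hi, how are you today?
Output: {"facts": []}

Input: Order #1234 arrived damaged and I'd prefer a refund.
Output: {"facts": ["Order #1234 arrived damaged", "Customer prefers a refund"]}

Return only a JSON object with a single "facts" key.
"""
```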
Load the prompt in configuration
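A minimal Python configuration sketch: the prompt string sits at the top level of the config under `custom_fact_extraction_prompt`, next to your model settings. The provider and model values below are placeholders for your own setup.

```python
custom_prompt = 'Extract order facts only. Return JSON: {"facts": [...]}'  # your full prompt

config = {
    "llm": {
        "provider": "openai",  # placeholder: swap in your provider
        "config": {"model": "gpt-4o-mini", "temperature": 0.0},
    },
    "custom_fact_extraction_prompt": custom_prompt,
}

# With mem0 installed, the config is loaded like this:
# from mem0 import Memory
# m = Memory.from_config(config)
```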
After initialization, run a quick `add` call with a known example and confirm the response splits into separate facts.

See it in action
Example: Order support memory
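A sketch of the call and the kind of response to expect; the message, the user id, and the exact result fields shown here are illustrative assumptions rather than captured output.

```python
message = "Hi, I'm Priya. Order #8821 never arrived and I'd like a replacement."

# With a configured Memory instance `m`, the call would be:
# result = m.add(message, user_id="priya")

# A response consistent with an order-support prompt would look roughly like:
result = {
    "results": [
        {"memory": "Order #8821 never arrived", "event": "ADD"},
        {"memory": "Customer requests a replacement", "event": "ADD"},
    ]
}

memories = [r["memory"] for r in result["results"]]
assert len(memories) == 2  # one entry per extracted fact
```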
The output contains only the facts described in your prompt, each stored as a separate memory entry.
Example: Irrelevant message filtered out
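The filtering case can be sketched the same way; the message and the response shape below are assumptions for illustration.

```python
message = "haha yeah, talk to you later!"

# With a configured Memory instance `m`:
# result = m.add(message, user_id="priya")

# Because the prompt's negative examples cover chit-chat, the expected
# response carries no stored memories:
result = {"results": []}
assert result["results"] == []  # nothing enters long-term storage
```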
Verify the feature is working
- Log every call during rollout and confirm the `facts` array matches your schema.
- Check that unrelated messages return an empty `results` array.
- Run regression samples whenever you edit the prompt to ensure previously accepted facts still pass.
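The regression check in the last step can be automated with a small harness. `extract` here is a hypothetical helper that wraps your configured extractor and returns the list of facts for a message.

```python
# (message, expected facts) pairs drawn from production-like traffic.
REGRESSION_SAMPLES = [
    ("Order #1234 arrived damaged", ["Order #1234 arrived damaged"]),
    ("thanks, bye!", []),
]

def run_regression(extract):
    """Replay known samples and flag any drift from the expected facts."""
    for message, expected in REGRESSION_SAMPLES:
        got = extract(message)
        assert got == expected, f"{message!r}: got {got}, expected {expected}"
```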
Best practices
- Be precise: Call out the exact categories or fields you want to capture.
- Show negative cases: Include examples that should produce `[]` so the model learns to skip them.
- Keep JSON strict: Avoid extra keys; only return `facts` to simplify downstream parsing.
- Version prompts: Track prompt changes with a version number so you can roll back quickly.
- Review outputs regularly: Spot-check stored memories to catch drift early.
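One lightweight way to version prompts (a sketch, not a Mem0 feature): keep each revision keyed by version so a rollback is a one-line change.

```python
# Each revision is kept; rolling back means switching ACTIVE_VERSION.
PROMPTS = {
    "v1": 'Extract order numbers only. Return JSON: {"facts": [...]}',
    "v2": 'Extract order numbers and delivery issues. Return JSON: {"facts": [...]}',
}
ACTIVE_VERSION = "v2"
custom_prompt = PROMPTS[ACTIVE_VERSION]
```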