Memory that evolves.
Not just stores.
Hybrid search. Auto-expiring temporal facts. Feedback-driven ranking. Version history. Connect via MCP or build on the REST API.
Works with every MCP-compatible tool and any custom app via REST API
Intelligent Memory
Your AI learns as you work. Preferences and context are saved automatically. Works with coding assistants, content tools, or any custom application.
Query:
How do I handle auth in this project?
Looking for authentication approach...
Found:
JWT token preferences saved on Mar 12
Use refresh tokens, 15min expiry...
Hybrid Search
Vector similarity plus keyword matching, fused together. "PostgreSQL" finds PostgreSQL. "How do I handle auth" finds your JWT preferences. Both work.
Evolving Memory
Strategies change. Algorithms shift. MemContext auto-classifies temporal content and expires stale memories. Current truth always surfaces first.
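The expiry behaviour described above can be sketched as a read-time filter. This is a minimal illustration under assumed names: the `Memory` class and its fields are hypothetical, not MemContext's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Memory:
    text: str
    expires_at: Optional[datetime]  # None means permanent

def current_memories(memories, now=None):
    """Drop memories whose TTL has passed; permanent facts always survive."""
    now = now or datetime.now(timezone.utc)
    return [m for m in memories if m.expires_at is None or m.expires_at > now]

now = datetime(2025, 3, 12, tzinfo=timezone.utc)
mems = [
    Memory("Prefers JWT with refresh tokens", expires_at=None),
    Memory("Q1 strategy: short-form video", expires_at=now - timedelta(days=1)),
]
print([m.text for m in current_memories(mems, now)])
# → ['Prefers JWT with refresh tokens']
```

The expired strategy never reaches search results, so only the current truth surfaces.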
Feedback Scoring
Mark a memory as wrong and it drops in ranking immediately. Mark it helpful and it gets a boost. Your explicit signals shape every search result.
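One way to picture feedback-driven ranking is a score adjustment applied at search time. The weights below are illustrative placeholders, not MemContext's real values.

```python
def adjust_score(base_score: float, helpful: int, wrong: int,
                 boost: float = 0.1, penalty: float = 0.2) -> float:
    """Boost memories marked helpful, demote ones marked wrong.

    Each 'helpful' mark multiplies the score up; each 'wrong' mark
    applies a compounding penalty. Weights here are made up for
    illustration only.
    """
    return base_score * (1 + boost * helpful) * (1 - penalty) ** wrong

# A memory marked helpful twice rises; one marked wrong once drops.
print(adjust_score(0.80, helpful=2, wrong=0))
print(adjust_score(0.80, helpful=0, wrong=1))
```

The key property is that explicit signals reshape ranking immediately, with no retraining step.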
Encrypted & Private
Your memories are yours alone. Secure API keys and strict no-training policy. Your context stays private, period.
Cross-Tool Sync
Claude, Cursor, Windsurf, Cline, or your own app via REST API. One memory for all. Save in Claude, search from your custom app.
Memory that evolves, not just stores
Every save is classified. Every search is fused. Every signal matters.
Save Pipeline
Expand + Classify
– LLM rewrites content for searchability
– Routes to permanent / short / medium / long buckets
Auto-TTL
– Temporal content gets automatic expiry
– Permanent facts stay forever
Dedup + Version
– Detects duplicates and contradictions
– Old versions preserved, latest truth first
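The Auto-TTL step above can be sketched as a bucket-to-expiry mapping applied at save time. The bucket names come from the pipeline description; the TTL windows themselves are assumptions for illustration, not MemContext's internal values.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative TTLs per classification bucket (real windows are internal).
TTL_BY_BUCKET = {
    "permanent": None,
    "short": timedelta(days=7),
    "medium": timedelta(days=30),
    "long": timedelta(days=180),
}

def assign_expiry(bucket: str, saved_at: datetime) -> Optional[datetime]:
    """Return an expiry timestamp for temporal buckets, None for permanent."""
    ttl = TTL_BY_BUCKET[bucket]
    return None if ttl is None else saved_at + ttl

saved = datetime(2025, 3, 12, tzinfo=timezone.utc)
print(assign_expiry("short", saved))      # → 2025-03-19 00:00:00+00:00
print(assign_expiry("permanent", saved))  # → None (never expires)
```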
Classification Distribution (last 30 days)
Short-term: 189 memories (10.1% of total)
Search Pipeline · what happens when you search
Hybrid Retrieval
Vector similarity + keyword matching + query variants, fused via Reciprocal Rank Fusion.
Temporal Filter
Expired memories are excluded automatically. Only current, valid knowledge surfaces.
Feedback Scoring
Memories marked wrong drop. Helpful ones rise. Your signals shape every result.
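The fusion step named above, Reciprocal Rank Fusion, is a standard technique for merging ranked lists: each document scores the sum of 1/(k + rank) over every list it appears in. A minimal sketch with made-up memory IDs:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).

    Documents that rank well in several lists (e.g. both vector and
    keyword retrieval) float to the top. k=60 is the value commonly
    used in the RRF literature.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["jwt-prefs", "oauth-note", "db-schema"]
keyword_hits = ["jwt-prefs", "api-keys", "oauth-note"]
print(rrf_fuse([vector_hits, keyword_hits]))
# "jwt-prefs" ranks first: it tops both lists
```

Query variants simply contribute additional ranked lists to the same fusion.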
Get started in under two minutes.
One config file. That's it. Your AI remembers everything from there.
Drop the config into Claude Code, Cursor, Codex, or any MCP client.
Chat as usual — nothing else changes. No prompts, no wrappers.
Context persists. Saved and retrieved automatically, across every session.
{
  "mcpServers": {
    "memcontext": {
      "url": "https://mcp.memcontext.in/mcp",
      "headers": {
        "x-api-key": "YOUR_MEMCONTEXT_API_KEY"
      }
    }
  }
}
One memory layer, any application
MemContext is infrastructure. Plug it into coding tools, content workflows, or your own product.
AI Coding Assistants
Connect via MCP. Your preferences, decisions, and project context persist across every session and every tool.
Content Generation
Store evolving strategies with auto-TTL. Expired tactics drop out. Current best practices surface first.
Custom Applications
Use the REST API to build memory into any app. CRM context, support bots, onboarding flows, personalization.
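As a sketch of what calling the REST API from a custom app might look like: the endpoint path, payload shape, and base URL below are hypothetical placeholders (check the API docs for the real ones); only the `x-api-key` header mirrors the MCP config shown on this page.

```python
import json
from urllib import request

API_BASE = "https://api.memcontext.in"  # hypothetical base URL
API_KEY = "YOUR_MEMCONTEXT_API_KEY"

def build_save_request(content: str) -> request.Request:
    """Construct (but do not send) a save-memory request.

    The /memories path and {"content": ...} body are illustrative,
    not the documented API surface.
    """
    body = json.dumps({"content": content}).encode()
    return request.Request(
        f"{API_BASE}/memories",
        data=body,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_save_request("Team prefers PostgreSQL over MySQL for new services")
print(req.method, req.full_url)
```

Sending it is one `request.urlopen(req)` away; the same pattern covers search and feedback endpoints.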
Watch how it works
From setup to first memory — see MemContext in action with your favourite AI coding agent.
Frequently Asked Questions
Can't find the answer you're looking for? Reach out to our team.
What is MemContext?
MemContext is a persistent, evolving memory layer for AI agents and applications. It stores preferences, facts, decisions, and context with hybrid search, auto-expiring temporal memories, and feedback-driven ranking. Use it via MCP with coding assistants or via REST API in any custom app.
What is MCP?
Model Context Protocol is an open standard by Anthropic for connecting AI tools to external services. It's how MemContext integrates with Claude Code, Cursor, Codex, and other AI assistants.
How does MemContext process memories?
When you save a memory, MemContext expands it for searchability, classifies whether it's permanent or temporal, detects duplicates, and links related memories automatically. During search, it combines vector similarity with keyword matching, filters expired content, and adjusts ranking based on your feedback.
Is my data private?
Absolutely. All memories are encrypted at rest and in transit. We never use your data to train models or share it with third parties. You own your data — export or delete anytime.
Can multiple tools share the same memory?
Yes. Connect any MCP-compatible tool and they share the same memory. You can also use the REST API to build custom applications on top of MemContext — content generators, support bots, CRM memory, anything.
Is there a free plan?
Yes. The free plan includes 300 memories. Paid plans start at hobby tier with 2,000 memories and go up to pro with 10,000. Check the pricing page for current details.
Stop repeating yourself.
Connect once. Chat normally. Memory happens automatically.
