Persistent, evolving memory for AI

Memory that evolves.
Not just stores.

Hybrid search. Auto-expiring temporal facts. Feedback-driven ranking. Version history. Connect via MCP or build on the REST API.

Works with every MCP-compatible tool and any custom app via REST API

Cursor
Windsurf
Claude
Gemini
GitHub Copilot
Cline
Qwen
Goose
Kimi
How It Helps

Memory that evolves with you

Hybrid search, auto-expiring temporal facts, feedback-driven ranking, and version history. Built for AI agents and custom apps.

Intelligent Memory

Your AI learns as you work. Preferences and context are saved automatically. Works with coding assistants, content tools, or any custom application.

Query:

How do I handle auth in this project?

Looking for authentication approach...

Found:

JWT token preferences saved on Mar 12

Use refresh tokens, 15min expiry...

Hybrid Search

Vector similarity plus keyword matching, fused together. "PostgreSQL" finds PostgreSQL. "How do I handle auth" finds your JWT preferences. Both work.
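As a sketch, fusing ranked lists along these lines takes only a few lines of code; Reciprocal Rank Fusion is a standard technique, but the constant k = 60 and the toy result lists below are illustrative, not MemContext's actual parameters:

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse ranked lists: each item earns 1/(k + rank) per list it appears in."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: vector and keyword search both rank "jwt-prefs" highly.
vector_hits = ["jwt-prefs", "api-notes", "db-choice"]
keyword_hits = ["jwt-prefs", "db-choice", "readme"]
fused = reciprocal_rank_fusion([vector_hits, keyword_hits])
```

Items that appear high in both lists accumulate score from each, so agreement between the two retrievers beats a strong showing in just one.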

Update Preference

Memory Updates

v1: Tabs enabled
v2: Tabs → 2 spaces
v3: Tabs → 4 spaces (latest, just now)

Evolving Memory

Strategies change. Algorithms shift. MemContext auto-classifies temporal content and expires stale memories. Current truth always surfaces first.

Feedback Scoring

Mark a memory as wrong and it drops in ranking immediately. Mark it helpful and it gets a boost. Your explicit signals shape every search result.

Encrypted & Private

Your memories are yours alone. Secure API keys and strict no-training policy. Your context stays private, period.

Cross-Tool Sync

Claude, Cursor, Windsurf, Cline, or your own app via REST API. One memory for all. Save in Claude, search from your custom app.

Under the Hood

Memory that evolves, not just stores

Every save is classified. Every search is fused. Every signal matters.

Save Pipeline

· what happens when you save

01

Expand + Classify

– LLM rewrites content for searchability

– Routes to permanent / short / medium / long buckets

02

Auto-TTL

– Temporal content gets automatic expiry

– Permanent facts stay forever

03

Dedup + Version

– Detects duplicates and contradictions

– Old versions preserved, latest truth first
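The Auto-TTL step can be sketched as a lookup from bucket to lifetime. The bucket names come from the pipeline above; the durations, examples, and function name here are invented for illustration (the real service picks lifetimes via LLM classification):

```python
from datetime import datetime, timedelta, timezone

# Illustrative TTLs only; actual lifetimes are chosen by the classifier.
TTL_BY_BUCKET = {
    "permanent": None,             # facts that never expire
    "short": timedelta(days=7),    # e.g. "deploy freeze this week"
    "medium": timedelta(days=90),  # e.g. current sprint conventions
    "long": timedelta(days=365),   # e.g. this year's roadmap
}

def expiry_for(bucket, now=None):
    """Return an absolute expiry timestamp, or None for permanent memories."""
    now = now or datetime.now(timezone.utc)
    ttl = TTL_BY_BUCKET[bucket]
    return None if ttl is None else now + ttl
```

Storing an absolute expiry at save time keeps the search side cheap: filtering is a single timestamp comparison, with no re-classification needed.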

Search Pipeline

· what happens when you search

01

Hybrid Retrieval

Vector similarity + keyword matching + query variants, fused via Reciprocal Rank Fusion.

02

Temporal Filter

Expired memories are excluded automatically. Only current, valid knowledge surfaces.

03

Feedback Scoring

Memories marked wrong drop. Helpful ones rise. Your signals shape every result.
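Steps 02 and 03 amount to a filter-then-adjust pass over retrieved candidates. The weights and memory fields below are hypothetical, sketching the idea rather than the production formula:

```python
from datetime import datetime, timezone

def rerank(candidates, now=None):
    """Drop expired memories, then nudge scores by explicit feedback."""
    now = now or datetime.now(timezone.utc)
    # Temporal filter: keep only memories that are permanent or still valid.
    alive = [m for m in candidates
             if m.get("expires_at") is None or m["expires_at"] > now]
    for m in alive:
        # Hypothetical weighting: helpful votes boost, wrong votes penalize.
        m["score"] += 0.1 * m.get("helpful", 0) - 0.3 * m.get("wrong", 0)
    return sorted(alive, key=lambda m: m["score"], reverse=True)

memories = [
    {"id": "old-poll-tactic", "score": 0.9,
     "expires_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": "jwt-prefs", "score": 0.6, "helpful": 2, "expires_at": None},
    {"id": "bad-guess", "score": 0.7, "wrong": 1, "expires_at": None},
]
ranked = rerank(memories)
```

Note the ordering: the expired tactic never reaches scoring at all, and feedback then lets a lower-similarity memory overtake one that was marked wrong.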

Setup

Get started in under two minutes.

One config file. That's it. Your AI remembers everything from there.

01

Drop the config into Claude Code, Cursor, Codex, or any MCP client.

02

Chat as usual — nothing else changes. No prompts, no wrappers.

03

Context persists. Saved and retrieved automatically, across every session.

~/.claude.json
{
  "mcpServers": {
    "memcontext": {
      "url": "https://mcp.memcontext.in/mcp",
      "headers": {
        "x-api-key": "YOUR_MEMCONTEXT_API_KEY"
      }
    }
  }
}
Memories persist across sessions automatically.
Use Cases

One memory layer, any application

MemContext is infrastructure. Plug it into coding tools, content workflows, or your own product.

AI Coding Assistants

Connect via MCP. Your preferences, decisions, and project context persist across every session and every tool.

Prefers TypeScript and pnpm
Chose PostgreSQL for the DB
Uses kebab-case filenames
Set up MCP

Content Generation

Store evolving strategies with auto-TTL. Expired tactics drop out. Current best practices surface first.

Carousel posts work right now
Old poll strategy auto-expired
Post feedback shapes ranking
Read the guide

Custom Applications

Use the REST API to build memory into any app. CRM context, support bots, onboarding flows, personalization.

Profile endpoint for fast context
Feedback API for learning loops
Version history for audit trails
View API docs
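A minimal REST integration might look like the following sketch. The host, routes, and payload fields are placeholders (check the API docs for the real ones); only the `x-api-key` header is taken from the setup example above:

```python
import json
import urllib.request

# Placeholder host and routes for illustration; consult the real API docs.
BASE = "https://api.memcontext.example"
API_KEY = "YOUR_MEMCONTEXT_API_KEY"

def build_request(method, path, payload=None):
    """Build an authenticated JSON request (send with urllib.request.urlopen)."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode() if payload is not None else None,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method=method,
    )

# Hypothetical usage: save a CRM note, then search it from another surface.
save = build_request("POST", "/memories",
                     {"content": "Customer prefers email follow-ups"})
search = build_request("GET", "/memories/search?q=follow-up")
```

Because both requests carry the same key, a memory saved from a support bot is immediately searchable from any other app using that key.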

Watch how it works

From setup to first memory — see MemContext in action with your favourite AI coding agent.

FAQ

Frequently Asked Questions

Can't find the answer you're looking for? Reach out to our team.

Sign Up

What is MemContext?

MemContext is a persistent, evolving memory layer for AI agents and applications. It stores preferences, facts, decisions, and context with hybrid search, auto-expiring temporal memories, and feedback-driven ranking. Use it via MCP with coding assistants or via REST API in any custom app.

What is MCP?

Model Context Protocol is an open standard by Anthropic for connecting AI tools to external services. It's how MemContext integrates with Claude Code, Cursor, Codex, and other AI assistants.

What happens when I save or search a memory?

When you save a memory, MemContext expands it for searchability, classifies whether it's permanent or temporal, detects duplicates, and links related memories automatically. During search, it combines vector similarity with keyword matching, filters expired content, and adjusts ranking based on your feedback.

Is my data private?

Absolutely. All memories are encrypted at rest and in transit. We never use your data to train models or share it with third parties. You own your data — export or delete anytime.

Can I share memory across tools and apps?

Yes. Connect any MCP-compatible tool and they share the same memory. You can also use the REST API to build custom applications on top of MemContext — content generators, support bots, CRM memory, anything.

Is there a free plan?

Yes. The free plan includes 300 memories. Paid plans start at hobby tier with 2,000 memories and go up to pro with 10,000. Check the pricing page for current details.

Stop repeating yourself.

Connect once. Chat normally. Memory happens automatically.