memU — Memory infrastructure for LLMs and AI agents
Unlike traditional RAG systems that rely solely on embedding-based search, memU also supports non-embedding retrieval through direct file reading.
The LLM reads and comprehends natural-language memory files directly, enabling deep search that progressively tracks from categories → items → original resources.
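
The sketch below illustrates this idea of progressive, file-based retrieval. It is a conceptual example only, assuming memories are stored as plain Markdown files under per-category directories; the `deep_search` function, the file layout, and the `ask_llm` helper are illustrative assumptions, not the memU implementation.

```python
# Conceptual sketch of LLM-based deep retrieval by direct file reading.
# All names and the on-disk layout here are assumptions for illustration.
from pathlib import Path
from typing import Callable

def deep_search(query: str, memory_root: Path, ask_llm: Callable[[str], str]) -> str:
    # 1. Categories: let the LLM choose among the top-level memory categories.
    categories = [p.name for p in memory_root.iterdir() if p.is_dir()]
    category = ask_llm(
        f"Query: {query}\nCategories: {categories}\n"
        "Reply with the single most relevant category name."
    ).strip()

    # 2. Items: read that category's memory files directly (no embedding search).
    item_files = sorted((memory_root / category).glob("*.md"))
    items_text = "\n\n".join(f"## {p.name}\n{p.read_text()}" for p in item_files)

    # 3. Original resources: answer from the items and cite the files used,
    #    so the result can be traced back to its sources.
    return ask_llm(
        f"Query: {query}\n\nMemory items:\n{items_text}\n\n"
        "Answer the query and name the files that support the answer."
    )
```

Because each step reads actual memory files, an answer produced this way can be traced back to the category, item, and original resource it came from.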

Why Choose memU?
- **Full Traceability**: Track from raw data → items → documents and back.
- **Memory Lifecycle**: Memorization → Retrieval → Self-evolution.
- **Two Retrieval Methods**:
  - RAG-based: fast embedding vector search.
  - LLM-based: direct file reading with deep semantic understanding.
- **Self-Evolving**: Adapts the memory structure based on usage patterns.
memU Repositories: What They Do and How to Use Them
| | memU | memU-server | memU-ui |
|---|---|---|---|
| Positioning | Core algorithm engine | Memory data backend service | Front-end dashboard |
| Key Features | | | |
| Best For | Developers/teams who want to embed AI memory algorithms into their product | Teams that want to self-host a memory backend (internal tools, research, enterprise setups) | Developers/teams looking for a ready-to-use memory console |
| Usage | Core algorithms can be used standalone or integrated into the server | Self-hostable; works together with memU | Self-hostable; integrates with memU |
memU, memU-server, and memU-ui together form a flexible memory ecosystem for LLMs and AI agents.
Join the Growing memU Open-Source Community
For more information, please contact info@nevamind.ai.
- **GitHub Issues**: Report bugs, request features, and track development. Submit an issue.
- **Discord**: Get real-time support, chat with the community, and stay updated. Join us.
- **X (Twitter)**: Follow for updates, AI insights, and key announcements. Follow us.
Start Your Agent’s Memory Today
One line of code is all it takes to install memU and give your AI long-term memory.
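
For example, installing from PyPI (the package name below is an assumption; check the project repository for the exact name):

```bash
pip install memu-py
```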