What is Cognis
Cognis is a memory layer for AI agents. It lets your agents remember things — user preferences, past conversations, decisions, facts — and recall them when they matter. When a user tells your agent “I’m vegetarian” in one session, Cognis makes sure the agent knows that next time it recommends a restaurant. When a support agent resolves a billing issue, Cognis remembers the context so the next agent in the chain doesn’t ask the same questions again.

How It Works
- You send conversation messages (user/assistant pairs) to Cognis
- Cognis extracts facts automatically — names, preferences, decisions, context — using LLM-powered extraction with auto-categorization
- Your agent searches memories before responding — Cognis returns the most relevant facts using hybrid search (vector similarity + keyword matching)
- Memories persist across sessions — scoped by user, agent, and session so the right context reaches the right agent for the right user
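The four steps above can be sketched with a toy, self-contained store. This is not the Cognis API — extraction here is a naive first-person heuristic instead of LLM extraction, and search is plain word overlap instead of hybrid search — but it shows the store → extract → scope → search flow:

```python
# Toy sketch of the store -> extract -> search cycle. NOT the Cognis API:
# real extraction is LLM-powered and real search is hybrid (vector + BM25).
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    facts: list = field(default_factory=list)  # (owner_id, fact) pairs

    def add(self, owner_id: str, messages: list) -> None:
        # Stand-in for LLM extraction: keep user messages that state a fact.
        for m in messages:
            if m["role"] == "user" and m["content"].lower().startswith("i'm "):
                self.facts.append((owner_id, m["content"]))

    def search(self, owner_id: str, query: str) -> list:
        # Stand-in for hybrid search: rank by shared-word overlap,
        # scoped to a single owner so users never see each other's facts.
        q = set(query.lower().split())
        scored = [
            (len(q & set(fact.lower().split())), fact)
            for oid, fact in self.facts
            if oid == owner_id
        ]
        return [f for score, f in sorted(scored, reverse=True) if score > 0]

store = MemoryStore()
store.add("user-1", [
    {"role": "user", "content": "I'm vegetarian"},
    {"role": "assistant", "content": "Noted!"},
])
print(store.search("user-1", "recommend a vegetarian restaurant"))
# -> ["I'm vegetarian"]
```

The key property the sketch preserves is that `add` and `search` are both keyed by owner, so a memory written for one user can never surface for another.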
Architecture
Core Capabilities
- Hybrid search — Matryoshka vector embeddings + BM25 keyword matching, fused with Reciprocal Rank Fusion
- Smart extraction — LLM automatically pulls discrete facts from conversations and categorizes them (identity, preferences, work, interests, etc.)
- Multi-tenant scoping — `owner_id`, `agent_id`, `session_id` isolate memories per user, per agent, per conversation
- Context assembly — Combine short-term conversation history with long-term memories into a single LLM-ready context
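Reciprocal Rank Fusion, named in the hybrid-search capability above, is a standard way to merge two ranked lists: each document scores the sum of 1/(k + rank) across the lists it appears in, with k = 60 as the conventional constant. A minimal implementation (the document IDs are illustrative):

```python
# Reciprocal Rank Fusion: fuse ranked lists (e.g. vector search + BM25)
# into one ranking. score(d) = sum over lists of 1 / (k + rank_in_list),
# with k = 60 by convention. Documents high in either list rise to the top.
def rrf(rankings: list, k: int = 60) -> list:
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]   # ranked by embedding similarity
keyword_hits = ["m1", "m4", "m3"]  # ranked by BM25
print(rrf([vector_hits, keyword_hits]))
# -> ['m1', 'm3', 'm4', 'm7']
```

Note that `m1` wins even though neither list ranks it first: appearing near the top of both lists beats topping only one, which is exactly why RRF is a good fit for fusing vector and keyword results.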
Quick Example
Cognis ships both as a hosted service (`pip install lyzr-adk`) and as an open-source library (`pip install lyzr-cognis`). The core architecture is the same — see the comparison. There’s also a Claude Code plugin for persistent memory across coding sessions.
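As a hedged stand-in for a quick example, here is what the context-assembly step described above can look like in practice. `build_context` is an invented helper for illustration only, not the actual lyzr-cognis API: it merges retrieved long-term memories with the short-term conversation into one LLM-ready prompt string.

```python
# Illustration only: combine long-term memories with short-term chat
# history into a single context string to prepend to an LLM call.
# `build_context` is a hypothetical helper, NOT the lyzr-cognis API.
def build_context(memories: list, history: list) -> str:
    lines = ["Known facts about the user:"]
    lines += [f"- {m}" for m in memories]
    lines.append("")
    lines.append("Conversation so far:")
    lines += [f"{m['role']}: {m['content']}" for m in history]
    return "\n".join(lines)

ctx = build_context(
    memories=["User is vegetarian"],            # from long-term memory search
    history=[{"role": "user", "content": "Any dinner ideas?"}],
)
print(ctx)
```

The resulting string would be passed as system or preamble context so the model answers with the remembered facts in view.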
Get Started
- Quickstart — Install and run your first memory operations in 3 minutes
- Configuration — API keys, init params, search weight tuning
- API Reference — Complete method signatures and return types
- Cookbooks — CrewAI, LangChain, LangGraph, and Agno integrations