1. Install

pip install lyzr-adk

2. Set API Keys

export LYZR_API_KEY="your-lyzr-api-key"

3. Add Memories

from lyzr import Cognis, CognisMessage

cog = Cognis()

cog.add(
    messages=[
        CognisMessage(role="user", content="My name is Alice. I love hiking and I'm vegetarian."),
        CognisMessage(role="assistant", content="Nice to meet you, Alice!"),
    ],
    owner_id="user_alice",
)

Cognis automatically extracts discrete facts from the conversation and stores them as searchable memory records.
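To make "discrete facts" concrete, here is a hypothetical illustration of the kind of records extraction might yield from the conversation above. The field names are illustrative, not the SDK's actual storage schema; the category labels (identity, preferences, interests) follow the auto-categorization described later in this guide.

```python
# Hypothetical memory records; field names are illustrative, not Cognis's schema.
extracted = [
    {"content": "User's name is Alice", "category": "identity"},
    {"content": "Alice loves hiking", "category": "interests"},
    {"content": "Alice is vegetarian", "category": "preferences"},
]

# Each fact is stored as its own searchable record, so a dietary query
# can match the "preferences" fact without retrieving the whole chat.
preferences = [r for r in extracted if r["category"] == "preferences"]
```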

4. Search Memories

results = cog.search(query="What does Alice eat?", owner_id="user_alice", limit=5)
for r in results:
    print(f"  {r.content}  (score: {r.score})")
# → Alice is vegetarian  (score: 0.89)

5. Get Context for Your LLM

context = cog.context(
    current_messages=[CognisMessage(role="user", content="Recommend a restaurant")],
    owner_id="user_alice",
)
# Use context in your LLM system prompt
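One way to use the retrieved context is to fold it into your system prompt before calling your LLM. The sketch below assumes you have a plain list of memory strings; the actual return shape of `cog.context` may differ, so adapt the extraction step to what your version of the SDK returns.

```python
def build_system_prompt(memories, base="You are a helpful assistant."):
    """Prepend long-term memories to a base system prompt.

    `memories` is assumed to be a list of plain fact strings pulled
    out of the context object returned by the memory layer.
    """
    if not memories:
        return base
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base}\n\nKnown facts about the user:\n{memory_block}"

prompt = build_system_prompt(["Alice is vegetarian", "Alice loves hiking"])
```

The resulting prompt can then be passed as the system message of any chat-completion call, so the model answers "Recommend a restaurant" with Alice's dietary preference in view.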

What Just Happened

  1. Cognis stored your raw messages
  2. The LLM extracted discrete facts and auto-categorized them (identity, preferences, interests, etc.)
  3. Facts were embedded and indexed for hybrid search (vector + keyword)
  4. Search used Reciprocal Rank Fusion: 70% vector similarity + 30% BM25 keyword matching
  5. context / get_context assembled both short-term messages and long-term memories for LLM use
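Step 4 above can be sketched in a few lines. This is a generic weighted Reciprocal Rank Fusion, not Cognis's internal implementation: each retriever contributes `weight / (k + rank)` per document, with 0.7 on the vector ranking and 0.3 on the BM25 ranking (the smoothing constant `k = 60` is the common RRF default and is an assumption here).

```python
def rrf_fuse(vector_ranked, keyword_ranked, w_vec=0.7, w_kw=0.3, k=60):
    """Weighted Reciprocal Rank Fusion over two ranked lists of doc ids.

    Each list contributes weight / (k + rank) to a document's score;
    documents appearing high in both rankings rise to the top.
    """
    scores = {}
    for rank, doc in enumerate(vector_ranked, start=1):
        scores[doc] = scores.get(doc, 0.0) + w_vec / (k + rank)
    for rank, doc in enumerate(keyword_ranked, start=1):
        scores[doc] = scores.get(doc, 0.0) + w_kw / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "m2" leads the vector ranking, "m1" leads the keyword ranking;
# the 70/30 weighting breaks the tie in favor of the vector result.
fused = rrf_fuse(["m2", "m1", "m3"], ["m1", "m2"])
# → ['m2', 'm1', 'm3']
```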

Next Steps

Configuration

API keys, init params, search weight tuning

Add Memories

Full method reference for storing conversations

Features

Hybrid search, categories, session management, and more

Cookbooks

CrewAI, LangChain, LangGraph, and Agno integrations