The context() method assembles LLM-ready context by combining recent conversation messages (short-term) with semantically relevant memories (long-term) — all server-side. Use it instead of manually calling search() and formatting results.
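Conceptually, the server-side assembly combines two sources into one prompt-ready string. The sketch below is illustrative only (the helper name, memory format, and layout are hypothetical, not the library's actual output):

```python
def assemble_context(recent_messages, memories):
    """Illustrative sketch: merge long-term memories and short-term
    conversation history into a single context string."""
    lines = ["Relevant memories:"]
    lines += [f"- {m}" for m in memories]
    lines.append("")
    lines.append("Recent conversation:")
    lines += [f"{msg['role']}: {msg['content']}" for msg in recent_messages]
    return "\n".join(lines)

context = assemble_context(
    recent_messages=[{"role": "user", "content": "What should I do this weekend?"}],
    memories=["Alice enjoys hiking", "Alice lives near Denver"],
)
```

With the hosted `context()` call, this merging (plus the semantic retrieval feeding it) happens server-side, so your application only handles the final string.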

Basic Usage

from lyzr import Cognis, CognisMessage

cog = Cognis()

context = cog.context(
    current_messages=[
        CognisMessage(role="user", content="What should I do this weekend?"),
    ],
    owner_id="user_alice",
    session_id="sess_001",
)
# Use in your LLM prompt
print(context)

Parameters (Hosted)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `current_messages` | `List[Dict \| CognisMessage]` | Required | Messages for the current turn |
| `owner_id` | `str` | `None` | User/tenant scope |
| `session_id` | `str` | `None` | Session scope |
| `agent_id` | `str` | `None` | Agent scope |
| `max_short_term_messages` | `int` | `30` | Max recent messages to include |
| `enable_long_term_memory` | `bool` | `True` | Include semantic search results |
| `cross_session` | `bool` | `False` | Search memories across all sessions |
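For example, `max_short_term_messages` caps how much recent history is included. Conceptually this is a keep-the-last-N truncation, as in the sketch below (illustrative only, not the library's implementation):

```python
def truncate_short_term(messages, max_short_term_messages=30):
    # Keep only the most recent N messages, mirroring what the
    # max_short_term_messages parameter controls server-side.
    return messages[-max_short_term_messages:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
recent = truncate_short_term(history, max_short_term_messages=30)
```

Older messages beyond the cap are dropped from the short-term portion, though with `enable_long_term_memory=True` relevant facts from them may still surface via semantic search.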

With an LLM

import openai
from lyzr import Cognis, CognisMessage

cog = Cognis()

user_msg = "What should I cook tonight?"
context = cog.context(
    current_messages=[CognisMessage(role="user", content=user_msg)],
    owner_id="user_alice",
    session_id="dinner_chat",
    enable_long_term_memory=True,
)

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"You are a helpful assistant.\n\n{context}"},
        {"role": "user", "content": user_msg},
    ],
)
print(response.choices[0].message.content)

Async

# acontext() must be awaited inside an async function (e.g. run via asyncio.run)
context = await cog.acontext(
    current_messages=[CognisMessage(role="user", content="Hello")],
    owner_id="user_alice",
)