Hallucination Manager
Lyzr Studio lets you enforce safety, transparency, and accountability by configuring Responsible AI settings directly in the UI. Below are the key controls and how to set them up.
1. Add Responsible AI Facts
Use this section to supply domain-specific facts or policies. These facts act as guardrails, informing the model about critical rules it must follow.
Policy Name: A descriptive title for your fact or rule.
Content: The actual policy text, such as legal guidelines or brand voice constraints.
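If you prefer to create policies programmatically (see the Hallucination Manager API Endpoints), the request likely carries these same two fields. The sketch below is illustrative only: the base URL, endpoint path, auth header, and field names are assumptions, not the documented contract.

```python
# Hypothetical sketch of creating a Responsible AI policy via the API.
# Endpoint path, auth header, and field names are assumptions; consult
# the API Endpoints page for the actual contract.
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = "your-api-key"                 # hypothetical auth scheme

policy = {
    "policy_name": "No medical advice",  # descriptive title for the rule
    "content": (
        "Never provide diagnoses or treatment recommendations; "
        "direct users to a licensed professional instead."
    ),
}

resp = requests.post(
    f"{API_BASE}/rai/policies",
    headers={"x-api-key": API_KEY},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```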
2. Enable Reflection
Reflection allows the model to self-evaluate its outputs against your Responsible AI facts before returning responses.
Toggle Reflection on or off.
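If you drive the same setting programmatically, the toggle is plausibly a boolean in the agent configuration. A minimal sketch, assuming a hypothetical responsible_ai.reflection key:

```python
# Hypothetical agent-config fragment: Reflection as a boolean flag.
# The "responsible_ai" and "reflection" key names are assumptions.
agent_config = {
    "name": "support-agent",
    "responsible_ai": {
        "reflection": True,  # self-evaluate outputs against RAI facts before responding
    },
}
print(agent_config)
```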
3. Configure Groundedness Value
Groundedness controls how strictly the model must base its answers on provided sources or facts.
Groundedness Slider: Drag between 0 (freeform) and 1 (fully grounded).
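A sketch of how you might validate and apply a groundedness value in code; the 0-to-1 range comes from the slider above, while the helper and the groundedness key are hypothetical:

```python
# Hypothetical helper: validate a groundedness value before applying it.
# The key name "groundedness" is an assumption for illustration.
def set_groundedness(config: dict, value: float) -> dict:
    if not 0.0 <= value <= 1.0:
        raise ValueError(
            "groundedness must be between 0 (freeform) and 1 (fully grounded)"
        )
    config.setdefault("responsible_ai", {})["groundedness"] = value
    return config

config = set_groundedness({}, 0.8)  # lean strongly toward grounded answers
print(config)
```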
4. Set Context Relevance
Ensure the AI considers only pertinent context windows when generating responses, reducing off-topic or outdated content.
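Conceptually, context relevance acts as a filter on retrieved context before it reaches the model. A minimal illustration, assuming hypothetical chunk shapes and a relevance threshold:

```python
# Illustrative only: keep retrieved chunks whose relevance score clears a
# threshold, dropping off-topic or outdated context. Data shapes and the
# threshold value are assumptions, not Lyzr's internal mechanism.
def filter_context(chunks: list[dict], threshold: float = 0.7) -> list[dict]:
    return [c for c in chunks if c.get("relevance", 0.0) >= threshold]

chunks = [
    {"text": "2024 refund policy", "relevance": 0.92},
    {"text": "2019 pricing sheet", "relevance": 0.31},  # outdated, dropped
]
print(filter_context(chunks))  # only the on-topic, current chunk remains
```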