Getting Started with Lyzr Chat Agent SDK

Lyzr’s Chat Agent is powered by a state-of-the-art chatbot architecture that abstracts away all the complexity of building an advanced LLM-powered chatbot. This lets developers focus on data quality, prompt quality, and the application use case instead of spending countless hours stitching together the various building blocks and indexes of a backend RAG pipeline.

Lyzr’s Chat Agent integrates all the building blocks of a chatbot

What methods does Lyzr’s ChatBot class expose, and what arguments do they accept?

Methods

| Method | What it does? |
| --- | --- |
| pdf_chat | Chat with PDF documents |
| website_chat | Automatically scrape an entire website’s content and chat with it |
| docx_chat | Chat with Microsoft Word documents |
| txt_chat | Chat with flat text files |
| youtube_chat | Chat with YouTube content (must have transcripts) |
| webpage_chat | Automatically scrape a single webpage’s content and chat with it |
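
Each of these is a class method that constructs a ready-to-use chatbot, so they all follow the same pattern as the pdf_chat walkthrough below. As a minimal sketch, website_chat might be called like this (the url argument name is an assumption by analogy; check the SDK signature for the exact parameter):

import os
from lyzr import ChatBot

# Set your OpenAI API key
os.environ['OPENAI_API_KEY'] = 'sk-'

# NOTE: the `url` keyword is an assumption; consult the SDK for the exact name.
chatbot = ChatBot.website_chat(
    url="https://example.com",
)

response = chatbot.chat("What does this website offer?")
print(response.response)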

Chat with PDF

Sample Code 👇

import os
from lyzr import ChatBot

# Set your OpenAI API key
os.environ['OPENAI_API_KEY'] = 'sk-'

# Initialize the PDF Chatbot with the path to the PDF file
chatbot = ChatBot.pdf_chat(
    input_files=["PATH/TO/YOUR/PDF/FILE"],
)

# Ask a question related to the PDF content
response = chatbot.chat("Your question here")

# Print the chatbot's response
print(response.response)

# Access source nodes for additional information
for n, source in enumerate(response.source_nodes):
    print(f"Source {n+1}")
    print(source.text)

Types of Arguments

pdf_chat(
        input_dir: Optional[str] = None,
        input_files: Optional[List] = None,
        exclude_hidden: bool = True,
        filename_as_id: bool = True,
        recursive: bool = True,
        required_exts: Optional[List[str]] = None,
        system_prompt: str = None,
        query_wrapper_prompt: str = None,
        embed_model: Union[str, EmbedType] = "default",
        llm_params: dict = None,
        vector_store_params: dict = None,
        service_context_params: dict = None,
        chat_engine_params: dict = None,
        retriever_params: dict = None,
    ):
input_dir (string)

Use input_dir to parse all the .pdf files from a directory.

input_files (list)

Pass a list of .pdf file paths.

exclude_hidden (boolean)

Set to True to ignore hidden files when using input_dir.

filename_as_id (boolean)

Set to True to use each file's name as its ID when indexing the parsed data.

recursive (boolean)

Set to True to parse files from all subdirectories.

system_prompt (string)

A system-wide prompt prepended to all input prompts, used to guide the system’s “decision making”.

query_wrapper_prompt (string)

A wrapper instruction applied to each passed-in query.

embed_model (string)

The default embedding model is OpenAI’s text-embedding-ada-002; the fallback is the bge model from Hugging Face.

llm_params (object)

The default language model is OpenAI’s gpt-4-0125-preview, with a default temperature of 0.

vector_store_params (object)

The default vector store is the embedded Weaviate DB.

service_context_params (object)

The default chunk_size is 1024 tokens, with a default overlap of 20 tokens.

chat_engine_params (object)

Default is None.

retriever_params (object)

Default is None.
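
Putting several of these arguments together, here is a hedged sketch of a customized pdf_chat call. The dictionary keys shown for llm_params and service_context_params are assumptions inferred from the documented defaults above; verify them against the SDK:

import os
from lyzr import ChatBot

os.environ['OPENAI_API_KEY'] = 'sk-'

chatbot = ChatBot.pdf_chat(
    input_dir="PATH/TO/YOUR/PDF/DIRECTORY",  # parse every .pdf in the directory
    recursive=True,  # include subdirectories
    system_prompt="Answer only from the provided documents.",
    # NOTE: these keys are assumptions inferred from the documented defaults.
    llm_params={"model": "gpt-4-0125-preview", "temperature": 0},
    service_context_params={"chunk_size": 1024, "chunk_overlap": 20},
)

response = chatbot.chat("Summarize the key points of these documents.")
print(response.response)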

Integrations

Vector Store Integrations

Lyzr + Weaviate
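
Weaviate is the default vector store: Lyzr spins up an embedded Weaviate instance automatically, so no vector_store_params are needed to get started. To point at a specific index, a configuration along these lines should work (the index_name key is an assumption by analogy with the examples below):

vector_store_params = {
    "vector_store_type": "WeaviateVectorStore",
    "index_name": "MyIndex",  # assumed key; check the SDK docs
}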

Lyzr + Supabase Pgvector

Install vecs and supabase

pip install vecs supabase

vector_store_params = {
    "vector_store_type": "SupabaseVectorStore",
    "postgres_connection_string": "postgresql://<user>:<password>@<host>:<port>/<db_name>",
    "collection_name": "base_demo",
}
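
The resulting dictionary is passed straight into any of the chat constructors, for example:

chatbot = ChatBot.pdf_chat(
    input_files=["PATH/TO/YOUR/PDF/FILE"],
    vector_store_params=vector_store_params,
)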

Lyzr + Qdrant Vector Store

pip install -U qdrant_client
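
By analogy with the Supabase example above, a Qdrant configuration might look like the following; the keys other than vector_store_type are assumptions, so verify them against the SDK:

vector_store_params = {
    "vector_store_type": "QdrantVectorStore",
    # NOTE: assumed keys for a remote Qdrant instance; check the SDK docs.
    "url": "http://localhost:6333",
    "collection_name": "base_demo",
}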

Lyzr + LanceDB Vector Store

pip install -U lancedb
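
Again by analogy, a LanceDB configuration might look like this; the keys other than vector_store_type are assumptions:

vector_store_params = {
    "vector_store_type": "LanceDBVectorStore",
    # NOTE: assumed keys; LanceDB stores its data at a local or remote URI.
    "uri": "/tmp/lancedb",
    "table_name": "base_demo",
}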

Lyzr + Azure Cognitive Search

pip install azure-search-documents==11.4.0 azure-identity

LLM Integration