Lyzr’s Chat Agent is powered by a state-of-the-art chatbot architecture that abstracts away the complexity of building an advanced LLM-powered chatbot. This lets developers focus on data quality, prompt quality, and the application use case instead of spending countless hours stitching together the various building blocks and indexes that make up the backend RAG pipeline.
Lyzr’s Chat Agent integrates all the building blocks of a chatbot
```python
import os

from lyzr import ChatBot

# Set your OpenAI API key
os.environ['OPENAI_API_KEY'] = 'sk-'

# Initialize the PDF chatbot with the path to the PDF file
chatbot = ChatBot.pdf_chat(
    input_files=["PATH/TO/YOUR/PDF/FILE"],
)

# Ask a question related to the PDF content
response = chatbot.chat("Your question here")

# Print the chatbot's response
print(response.response)

# Access source nodes for additional information
for n, source in enumerate(response.source_nodes):
    print(f"Source {n+1}")
    print(source.text)
```
```python
# Minimal Weaviate vector store configuration
vector_store_params = {
    "vector_store_type": "WeaviateVectorStore",
    "index_name": "IndexName"  # the first letter must be capitalized
}
```
```python
# Weaviate vector store configuration for a hosted cluster
vector_store_params = {
    "vector_store_type": "WeaviateVectorStore",
    "url": "https://sample.weaviate.network",
    "api_key": "DB_API_KEY",
    "index_name": "IndexName"  # the first letter must be capitalized
}
```
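To point the chatbot at your Weaviate index, pass this configuration through the `vector_store_params` argument. The sketch below is a minimal example and assumes `ChatBot.pdf_chat` accepts `vector_store_params` alongside `input_files`, as in the quick-start above; the URL, API key, and index name are placeholders for your own values.

```python
import os

from lyzr import ChatBot

# Set your OpenAI API key
os.environ['OPENAI_API_KEY'] = 'sk-'

# Weaviate configuration from the snippet above (placeholder values)
vector_store_params = {
    "vector_store_type": "WeaviateVectorStore",
    "url": "https://sample.weaviate.network",
    "api_key": "DB_API_KEY",
    "index_name": "IndexName"  # the first letter must be capitalized
}

# Assumes pdf_chat accepts vector_store_params alongside input_files
chatbot = ChatBot.pdf_chat(
    input_files=["PATH/TO/YOUR/PDF/FILE"],
    vector_store_params=vector_store_params,
)

response = chatbot.chat("Your question here")
print(response.response)
```

The first snippet, which omits `url` and `api_key`, follows the same pattern; only the contents of `vector_store_params` change.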