Large Language Models
Use DataAnalyzr with an LLM of your choice
Using LiteLLM in its backend, DataAnalyzr gives you the freedom to integrate with any large language model (LLM) of your choice.
This allows you to leverage the power of LLMs for data analysis as well as response generation.
By default, DataAnalyzr uses OpenAI's gpt-4o model.
You can choose two separate LLMs: one for analysis and one for response generation. The analysis LLM generates and executes code on your data, extracting information relevant to the question and producing visualisations. The generation LLM produces insights, recommendations, and tasks.
A list of supported LLMs can be found here.
Configuration
To integrate DataAnalyzr with an LLM, follow these steps:
- Model Selection: Choose the appropriate LLM for your needs from the list here. Models vary in size, capability, and cost.
- Create LLM object: Use LyzrLLMFactory to create an object with the model name. Depending on the chosen LLM, credentials can be set as environment variables or passed directly to the object.
- Use LLM object: Pass the object when creating an instance of the DataAnalyzr class. Generation parameters for the LLM are configured by DataAnalyzr automatically.
Example Configuration
For this example, we will assume that you have chosen:
- Anthropic’s claude-3-opus-20240229 for analysis, and
- OpenAI’s gpt-4-1106-preview for generation.
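A minimal sketch of this setup is shown below. The import paths and the DataAnalyzr parameter names (analysis_model, generator_model) are assumptions and may differ across Lyzr versions; consult the API reference for your installed release.

```python
# Sketch only: import paths and parameter names may vary between Lyzr versions.
from lyzr import DataAnalyzr
from lyzr.base.llm import LyzrLLMFactory

# Analysis LLM: generates and executes code on your data.
analysis_llm = LyzrLLMFactory.from_defaults(model="claude-3-opus-20240229")

# Generation LLM: produces insights, recommendations, and tasks.
generation_llm = LyzrLLMFactory.from_defaults(model="gpt-4-1106-preview")

# Pass both objects when creating the DataAnalyzr instance; generation
# parameters for each LLM are configured by DataAnalyzr automatically.
analyzr = DataAnalyzr(
    analysis_model=analysis_llm,      # assumed parameter name
    generator_model=generation_llm,   # assumed parameter name
)
```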
Provider-specific Keys
If you wish to set a provider-specific key, you may do so by setting the corresponding environment variable. Following are examples for some commonly used providers:
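For instance, the two providers used in the example above read their keys from the standard environment variables that LiteLLM expects. The key values below are placeholders:

```python
import os

# LiteLLM reads provider credentials from standard environment variables.
# The values here are placeholders; substitute your real keys.
os.environ["OPENAI_API_KEY"] = "sk-..."         # OpenAI models, e.g. gpt-4-1106-preview
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # Anthropic models, e.g. claude-3-opus-20240229
```

You can also export these variables in your shell before starting your application instead of setting them in code.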
Additional Parameters
Depending on the model, you may also need to set additional parameters. You can do so in the environment or when creating the LLM object.
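As an illustrative sketch, Azure-hosted OpenAI models need a deployment endpoint and an API version in addition to the key. The environment variable names follow LiteLLM's conventions; the values are placeholders for your own deployment's details:

```python
import os

# Azure OpenAI models require extra parameters beyond the API key.
# Placeholder values; substitute your deployment's details.
os.environ["AZURE_API_KEY"] = "..."
os.environ["AZURE_API_BASE"] = "https://example-resource.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-01"
```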