The Lyzr ADK supports multiple LLM providers with a variety of models. This reference lists all available providers, their models, and configuration details.

Quick Start

from lyzr import Studio

studio = Studio(api_key="your-api-key")

# Short format - auto-resolves provider
agent = studio.create_agent(
    name="Assistant",
    provider="gpt-4o"  # Auto-resolves to OpenAI
)

# Full format - explicit provider
agent = studio.create_agent(
    name="Assistant",
    provider="openai/gpt-4o"
)

Provider Formats

You can specify providers in two formats:
| Format | Example | Description |
|--------|---------|-------------|
| Short | "gpt-4o" | ADK auto-detects the provider |
| Full | "openai/gpt-4o" | Explicit provider/model |
# These are equivalent
agent = studio.create_agent(provider="gpt-4o")
agent = studio.create_agent(provider="openai/gpt-4o")

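The short format works because the ADK can infer the provider from the model name alone. The sketch below illustrates the idea only; it is not the ADK's actual implementation, and the lookup map covers just a few models from this page:

```python
# Illustrative sketch of provider resolution -- NOT the ADK's real logic.
# A "provider/model" string is split on the first slash; a bare model
# name is resolved through a model-to-provider map.

MODEL_TO_PROVIDER = {
    "gpt-4o": "openai",
    "claude-sonnet-4-5": "anthropic",
    "gemini-2.5-pro": "google",
}

def resolve(provider: str) -> tuple[str, str]:
    """Return (provider, model) for either the short or the full format."""
    if "/" in provider:                       # full format: "openai/gpt-4o"
        prov, model = provider.split("/", 1)
        return prov, model
    return MODEL_TO_PROVIDER[provider], provider  # short format: "gpt-4o"
```

With this sketch, resolve("gpt-4o") and resolve("openai/gpt-4o") both yield ("openai", "gpt-4o"), which is why the two formats above are equivalent.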
OpenAI

Credential ID: lyzr_openai
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| gpt-4o | 4/5 | 4/5 | 128K | - |
| gpt-4o-mini | 2/5 | 5/5 | 128K | - |
| gpt-4.1 | 5/5 | 3/5 | 1M | - |
| gpt-5 | 4/5 | 2/5 | 400K | Reasoning |
| gpt-5-mini | 2/5 | 4/5 | 400K | Reasoning |
| gpt-5-nano | 1/5 | 5/5 | 400K | Reasoning |
| gpt-5.1 | 4/5 | 2/5 | 400K | Reasoning |
| gpt-5.2 | 5/5 | 2/5 | 400K | Reasoning |
| o3 | 3/5 | 4/5 | 128K | Reasoning |
| o4-mini | 2/5 | 5/5 | 128K | Reasoning |

Examples

# GPT-4o - balanced performance
agent = studio.create_agent(provider="gpt-4o")

# GPT-4o Mini - fast and cost-effective
agent = studio.create_agent(provider="gpt-4o-mini")

# GPT-5 - advanced reasoning
agent = studio.create_agent(provider="gpt-5")

# O3 - reasoning model
agent = studio.create_agent(provider="o3")

Anthropic

Credential ID: lyzr_anthropic
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| claude-sonnet-4-5 | 4/5 | 4/5 | 200K | - |
| claude-opus-4-5 | 4/5 | 3/5 | 200K | Reasoning |
| claude-sonnet-4-0 | 4/5 | 4/5 | 200K | - |
| claude-opus-4-0 | 5/5 | 3/5 | 200K | Reasoning |
| claude-opus-4-1 | 5/5 | 3/5 | 200K | Reasoning |
| claude-3-7-sonnet-latest | 4/5 | 4/5 | 200K | - |
| claude-3-5-haiku-latest | 3/5 | 5/5 | 200K | - |

Examples

# Claude Sonnet 4.5 - balanced
agent = studio.create_agent(provider="claude-sonnet-4-5")

# Claude Opus 4.5 - advanced reasoning
agent = studio.create_agent(provider="claude-opus-4-5")

# Claude Haiku - fast
agent = studio.create_agent(provider="claude-3-5-haiku-latest")

Google

Credential ID: lyzr_google
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| gemini-2.0-flash | 3/5 | 5/5 | 1M | - |
| gemini-2.0-flash-lite | 2/5 | 5/5 | 1M | - |
| gemini-2.5-pro | 4/5 | 4/5 | 1M | Reasoning |
| gemini-2.5-flash | 4/5 | 4/5 | 1M | Reasoning |
| gemini-2.5-flash-lite | 2/5 | 4/5 | 1M | Reasoning |
| gemini-3-pro-preview | 5/5 | 4/5 | 1M | Reasoning |

Examples

# Gemini 2.5 Pro - advanced
agent = studio.create_agent(provider="gemini-2.5-pro")

# Gemini Flash - fast with 1M context
agent = studio.create_agent(provider="gemini-2.0-flash")

# Gemini 3 Pro Preview - latest
agent = studio.create_agent(provider="gemini-3-pro-preview")

Groq

Credential ID: lyzr_groq
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| llama-3.3-70b-versatile | 2/5 | 5/5 | 128K | - |
| llama-3.1-8b-instant | 1/5 | 5/5 | 128K | - |
| llama-4-scout-17b-16e-instruct | 3/5 | 5/5 | 131K | - |
| llama-4-maverick-17b-128e-instruct | 3/5 | 5/5 | 1M | - |
| gpt-oss-120b | 3/5 | 5/5 | 131K | Reasoning |
| gpt-oss-20b | 2/5 | 5/5 | 131K | Reasoning |
| kimi-k2-instruct | 2/5 | 4/5 | 256K | - |

Examples

# Llama 3.3 70B - versatile
agent = studio.create_agent(provider="llama-3.3-70b-versatile")

# Llama 4 Maverick - 1M context
agent = studio.create_agent(provider="llama-4-maverick-17b-128e-instruct")

# Kimi K2
agent = studio.create_agent(provider="kimi-k2-instruct")

Perplexity

Credential ID: lyzr_perplexity
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| sonar | 2/5 | 4/5 | 128K | - |
| sonar-pro | 3/5 | 3/5 | 128K | - |
| sonar-reasoning | 3/5 | 4/5 | 128K | Reasoning |
| sonar-reasoning-pro | 4/5 | 3/5 | 128K | Reasoning |
| sonar-deep-research | 4/5 | 4/5 | 128K | Reasoning |
| r1-1776 | 2/5 | 4/5 | 128K | - |

Examples

# Sonar - basic search
agent = studio.create_agent(provider="sonar")

# Sonar Pro - enhanced search
agent = studio.create_agent(provider="sonar-pro")

# Sonar Deep Research - research tasks
agent = studio.create_agent(provider="sonar-deep-research")

AWS Bedrock

Credential ID: lyzr_aws-bedrock
| Model | Capability | Speed | Context | Type |
|-------|------------|-------|---------|------|
| amazon.nova-micro-v1:0 | 1/5 | 5/5 | 128K | - |
| amazon.nova-lite-v1:0 | 2/5 | 4/5 | 300K | - |
| amazon.nova-pro-v1:0 | 3/5 | 4/5 | 300K | - |
| anthropic.claude-3-5-sonnet-20241022-v2:0 | 4/5 | 4/5 | 200K | - |
| anthropic.claude-3-7-sonnet-20250219-v1:0 | 4/5 | 4/5 | 200K | Reasoning |
| meta.llama3-3-70b-instruct-v1:0 | 4/5 | 3/5 | 128K | - |
| mistral.mistral-large-2402-v1:0 | 4/5 | 3/5 | 64K | - |

Examples

# Amazon Nova Pro
agent = studio.create_agent(provider="aws-bedrock/amazon.nova-pro-v1:0")

# Claude on Bedrock
agent = studio.create_agent(provider="bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0")

# Llama on Bedrock
agent = studio.create_agent(provider="aws/meta.llama3-3-70b-instruct-v1:0")

Provider Aliases

You can use these aliases when specifying providers:
| Alias | Resolves To |
|-------|-------------|
| openai | OpenAI |
| anthropic | Anthropic |
| google, gemini | Google |
| groq | Groq |
| perplexity | Perplexity |
| aws-bedrock, bedrock, aws | AWS Bedrock |
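
The alias table amounts to a many-to-one mapping onto canonical provider IDs. A minimal sketch of that normalization (illustrative only, not the ADK's internal code):

```python
# Illustrative alias normalization based on the table above.
# Several prefixes resolve to the same canonical provider ID.
ALIASES = {
    "openai": "openai",
    "anthropic": "anthropic",
    "google": "google",
    "gemini": "google",
    "groq": "groq",
    "perplexity": "perplexity",
    "aws-bedrock": "aws-bedrock",
    "bedrock": "aws-bedrock",
    "aws": "aws-bedrock",
}

def canonical_provider(prefix: str) -> str:
    """Map an alias like "gemini" or "aws" to its canonical provider."""
    return ALIASES[prefix.lower()]
```

This is why the Bedrock examples above can use "aws-bedrock/", "bedrock/", or "aws/" prefixes interchangeably.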

Custom Credentials

If you’ve added custom credentials through the Studio UI, use llm_credential_id:
agent = studio.create_agent(
    name="Custom Agent",
    provider="gpt-4o",
    llm_credential_id="my_custom_openai_credential"
)

Choosing a Model

By Use Case

| Use Case | Recommended Models |
|----------|--------------------|
| General assistant | gpt-4o, claude-sonnet-4-5, gemini-2.5-pro |
| Fast responses | gpt-4o-mini, gemini-2.0-flash, llama-3.3-70b-versatile |
| Complex reasoning | gpt-5, claude-opus-4-5, o3 |
| Large context | gemini-2.5-pro (1M), gpt-4.1 (1M), llama-4-maverick (1M) |
| Research tasks | sonar-deep-research, sonar-reasoning-pro |
| Cost-effective | gpt-4o-mini, gemini-2.0-flash-lite, llama-3.1-8b-instant |
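
In application code, the table above can be turned into a small lookup of per-task defaults. The helper and use-case keys below are illustrative (only the model names come from this page):

```python
# First-choice model per use case, drawn from the table above.
# The use-case keys and the helper itself are illustrative.
RECOMMENDED = {
    "general": "gpt-4o",
    "fast": "gpt-4o-mini",
    "reasoning": "gpt-5",
    "large-context": "gemini-2.5-pro",
    "research": "sonar-deep-research",
    "cost-effective": "gpt-4o-mini",
}

def default_model(use_case: str) -> str:
    """Pick a default model; fall back to a general assistant model."""
    return RECOMMENDED.get(use_case, "gpt-4o")
```

The resulting string can be passed straight to create_agent's provider parameter in either short or full format.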

Capability vs Speed

High Capability, Slower:
  - gpt-5, gpt-5.2, claude-opus-4-5, gemini-3-pro-preview

Balanced:
  - gpt-4o, claude-sonnet-4-5, gemini-2.5-pro

Fast, Lower Capability:
  - gpt-4o-mini, gemini-2.0-flash, llama-3.1-8b-instant

Model Information

Each model has these attributes:
| Attribute | Description |
|-----------|-------------|
| capability_score | Model capability (1-5 scale) |
| speed_score | Response speed (1-5 scale) |
| context_window | Maximum context size in tokens |
| model_type | Special type (e.g., "Reasoning") |