# LLM Providers
Mango supports three LLM providers out of the box: Anthropic, OpenAI, and Google Gemini. All implement the same `LLMService` interface, so switching providers is a one-line change.
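For example, swapping Anthropic for OpenAI touches only the construction of the service (both constructors are covered in full in the sections that follow; the surrounding code is unchanged):

```python
from mango.integrations.anthropic import AnthropicLlmService
# from mango.integrations.openai import OpenAiLlmService

llm = AnthropicLlmService(api_key="YOUR_KEY")
# llm = OpenAiLlmService(api_key="YOUR_KEY")  # the one-line swap
```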
## Anthropic
```bash
pip install mango-ai[anthropic]
```
```python
from mango.integrations.anthropic import AnthropicLlmService

llm = AnthropicLlmService(
    api_key="YOUR_KEY",         # or set ANTHROPIC_API_KEY env var
    model="claude-sonnet-4-6",  # default
    max_tokens=4096,
)
```
Recommended models:

- `claude-sonnet-4-6` (default): best balance of speed and accuracy
- `claude-opus-4-6`: highest accuracy, higher cost
## OpenAI
```bash
pip install mango-ai[openai]
```
```python
from mango.integrations.openai import OpenAiLlmService

llm = OpenAiLlmService(
    api_key="YOUR_KEY",  # or set OPENAI_API_KEY env var
    model="gpt-5.4",     # default
    max_completion_tokens=4096,
)
```
## Gemini

```bash
pip install mango-ai[gemini]
```
```python
from mango.integrations.google import GeminiLlmService

llm = GeminiLlmService(
    api_key="YOUR_KEY",  # or set GOOGLE_API_KEY env var
    model="gemini-3.1-pro-preview",
)
```
## Custom LLM provider
Implement `LLMService` to use any LLM with tool/function-calling support:
```python
from mango.llm import LLMService, LLMResponse, Message, ToolDef

class MyCustomLlm(LLMService):
    def chat(
        self,
        messages: list[Message],
        tools: list[ToolDef],
        system_prompt: str = "",
    ) -> LLMResponse:
        # Call your LLM API here.
        # Return an LLMResponse with text and/or tool_calls.
        ...

    def get_model_name(self) -> str:
        return "my-custom-model"
```
`LLMResponse` fields:

| Field | Type | Description |
|---|---|---|
| `text` | `str \| None` | Text content of the response |
| `tool_calls` | `list[ToolCall]` | Tool calls requested by the LLM |
| `model` | `str` | Model name as returned by the provider |
| `input_tokens` | `int` | Input tokens used |
| `output_tokens` | `int` | Output tokens used |
| `has_tool_calls` | `bool` | Property: `len(tool_calls) > 0` |