# MangoAgent

```python
from mango.agent import MangoAgent
```
## Constructor

```python
MangoAgent(
    llm_service: LLMService,
    tool_registry: ToolRegistry,
    db: NoSQLRunner | None = None,
    agent_memory: MemoryService | None = None,
    schema: dict[str, SchemaInfo] | None = None,
    introspect: bool = False,
    max_iterations: int = 8,
    memory_top_k: int = 3,
    max_turns: int = 5,
)
```
## Methods

### setup() → None

Initializes the agent: runs schema introspection if `introspect=True` and `db` is set, then builds the system prompt. Must run before the first `ask()`; if not called manually, it is invoked automatically on the first `ask()`.
### ask(question, on_tool_call=None) → AgentResponse

*async*

Asks a question and returns the complete answer.

Parameters:

- `question: str` — natural language question
- `on_tool_call: Callable[[str, dict, str], None] | None` — optional callback invoked after each tool execution with `(tool_name, tool_args, result_text)`

Returns: `AgentResponse`

```python
response = await agent.ask("How many users signed up last week?")
```
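The `on_tool_call` hook is useful for logging or progress display. Below is a minimal sketch of a callback matching the documented `(tool_name, tool_args, result_text)` signature; the tool name and values are hypothetical, and the call at the bottom simulates what the agent would do after a tool execution:

```python
# A callback matching ask()'s on_tool_call signature: it receives the tool
# name, the arguments dict, and the textual result after each tool execution.
tool_log: list[tuple[str, dict, str]] = []

def log_tool_call(tool_name: str, tool_args: dict, result_text: str) -> None:
    tool_log.append((tool_name, tool_args, result_text))

# The agent invokes the callback itself; here we simulate one invocation
# with hypothetical values to show the shape of the data.
log_tool_call("count_documents", {"collection": "users"}, "42")
```

It would then be passed as `await agent.ask(question, on_tool_call=log_tool_call)`.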
### ask_stream(question) → AsyncGenerator[dict, None]

*async generator*

Same as `ask()` but yields events as they happen. Use for real-time UIs.

Parameters:

- `question: str` — natural language question

Yields dicts with a `type` field:
```python
async for event in agent.ask_stream("..."):
    match event["type"]:
        case "tool_call":
            ...  # event["tool_name"], event["tool_args"]
        case "tool_result":
            ...  # event["tool_name"], event["success"], event["preview"]
        case "answer":
            ...  # event["text"]
        case "done":
            ...  # event["iterations"], event["input_tokens"], event["output_tokens"],
                 # event["memory_hits"], event["tool_calls_made"]
```
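When building or testing a UI, the event protocol can be exercised without a live agent by driving the consumer with a synthetic generator. A sketch, assuming only the event shapes documented above (the concrete field values are made up):

```python
import asyncio
from typing import AsyncGenerator

async def fake_stream() -> AsyncGenerator[dict, None]:
    # Synthetic stand-in for agent.ask_stream(): yields one event of each type.
    events = [
        {"type": "tool_call", "tool_name": "find", "tool_args": {"limit": 5}},
        {"type": "tool_result", "tool_name": "find", "success": True, "preview": "[...]"},
        {"type": "answer", "text": "5 users signed up last week."},
        {"type": "done", "iterations": 2, "input_tokens": 900,
         "output_tokens": 120, "memory_hits": 1, "tool_calls_made": 1},
    ]
    for event in events:
        yield event

async def collect_answer(stream: AsyncGenerator[dict, None]) -> str:
    # Accumulate answer text; tool and done events are ignored in this sketch.
    answer = ""
    async for event in stream:
        if event["type"] == "answer":
            answer += event["text"]
    return answer

final = asyncio.run(collect_answer(fake_stream()))
```

The same consumer works unchanged against `agent.ask_stream(...)`, since both are async generators of event dicts.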
### new_session() → MangoAgent

Returns a new agent with the same configuration but a fresh conversation history. Schema and system prompt are reused; no re-introspection occurs.

```python
session = agent.new_session()
```
### reset_conversation() → None

Clears the conversation history in place.

```python
agent.reset_conversation()
```
## Properties

| Property | Type | Description |
|---|---|---|
| `llm_service` | `LLMService` | The configured LLM |
| `tool_registry` | `ToolRegistry` | The tool registry |
| `db` | `NoSQLRunner \| None` | The connected database |
| `agent_memory` | `MemoryService \| None` | The memory service |
| `conversation_length` | `int` | Number of messages in the current conversation |
## AgentResponse

```python
@dataclass
class AgentResponse:
    answer: str                  # natural language answer
    tool_calls_made: list[str]   # names of tools called
    input_tokens: int            # total input tokens across all LLM calls
    output_tokens: int           # total output tokens across all LLM calls
    iterations: int              # number of LLM calls in this turn
    memory_hits: int             # number of memory examples injected
```
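As a usage illustration, the dataclass can be replicated locally to show how a caller might derive per-turn token totals for cost accounting; the field values below are invented, not produced by a real `ask()` call:

```python
from dataclasses import dataclass

# Local replica of the documented AgentResponse fields.
@dataclass
class AgentResponse:
    answer: str
    tool_calls_made: list[str]
    input_tokens: int
    output_tokens: int
    iterations: int
    memory_hits: int

# Hypothetical response, as if returned by agent.ask(...).
resp = AgentResponse(
    answer="128 users signed up last week.",
    tool_calls_made=["count_documents"],
    input_tokens=1200,
    output_tokens=80,
    iterations=2,
    memory_hits=1,
)

# Token counts are totals across all LLM calls in the turn, so a single
# sum gives the turn's full usage.
total_tokens = resp.input_tokens + resp.output_tokens
```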