# MangoAgent

MangoAgent is the central class that orchestrates the LLM ↔ tool loop. Every question goes through it.

## Constructor

```python
from mango.agent import MangoAgent

agent = MangoAgent(
    llm_service=llm,         # required
    tool_registry=tools,     # required
    db=db,                   # optional
    agent_memory=memory,     # optional
    schema=None,             # optional — pre-introspected schema
    introspect=False,        # whether setup() runs introspection
    max_iterations=8,        # safety cap on tool calls per question
    memory_top_k=3,          # how many memory examples to inject
    max_turns=5,             # conversation turns to keep in history
)
```

### Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `llm_service` | `LLMService` | required | LLM backend to use |
| `tool_registry` | `ToolRegistry` | required | Registered tools |
| `db` | `NoSQLRunner \| None` | `None` | Connected database |
| `agent_memory` | `MemoryService \| None` | `None` | Vector store for memory |
| `schema` | `dict \| None` | `None` | Pre-computed schema (skips introspection) |
| `introspect` | `bool` | `False` | Run schema introspection on `setup()` |
| `max_iterations` | `int` | `8` | Max tool calls before forcing a final answer |
| `memory_top_k` | `int` | `3` | Memory examples to inject per question |
| `max_turns` | `int` | `5` | Conversation history turns to keep |
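The `max_iterations` cap behaves like a loop bound: once the agent has made that many tool calls without producing an answer, it stops calling tools and forces a final answer. A minimal sketch of that mechanism (hypothetical — not MangoAgent's actual internals):

```python
# Hypothetical sketch of an LLM <-> tool loop bounded by max_iterations;
# not MangoAgent's actual implementation.
def run_loop(llm_step, max_iterations=8):
    """llm_step() returns ("tool", name) to call a tool, or ("answer", text)."""
    for used in range(max_iterations):
        kind, payload = llm_step()
        if kind == "answer":
            return payload, used
        # ("tool", name): execute the tool, feed the result back, loop again.
    # Cap hit: stop calling tools and force a final answer.
    return "(forced final answer)", max_iterations

# A stand-in model that wants three tool calls before answering.
steps = iter([("tool", "query"), ("tool", "count"), ("tool", "sum"),
              ("answer", "128 orders")])
answer, tool_calls = run_loop(lambda: next(steps))
```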

## Methods

### setup()

Initializes the agent: runs schema introspection (if introspect=True) and builds the system prompt. Call once after connecting the database, before the first ask().

```python
agent.setup()
```

If db is None, setup() is a no-op.
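The interplay of `schema`, `introspect`, and `db` reduces to a small decision. A hedged sketch (`resolve_schema` is a hypothetical helper, and the real `setup()` also builds the system prompt):

```python
# Hypothetical helper illustrating the setup() branching described above;
# the real setup() also builds the system prompt.
def resolve_schema(db, schema, introspect):
    if schema is not None:
        return schema                     # pre-computed schema: introspection skipped
    if db is None or not introspect:
        return None                       # nothing to introspect: setup() is a no-op
    return {"collections": sorted(db)}    # stand-in for real introspection
```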

### ask(question, on_tool_call=None)

Ask a question and get a complete answer.

```python
response = await agent.ask("How many orders were placed last week?")
print(response.answer)
```

The optional on_tool_call callback is invoked after each tool execution with (tool_name, tool_args, result_text):

```python
def log_tool(name, args, result):
    print(f"Tool: {name} | Args: {args}")

response = await agent.ask("...", on_tool_call=log_tool)
```
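Because the callback also receives the result text, it can double as an audit trail. A self-contained sketch — the agent call appears only as a comment, and the direct invocation stands in for one real tool execution:

```python
# Hedged sketch: a callback with the (tool_name, tool_args, result_text)
# signature that records every tool execution for later inspection.
audit_log = []

def audit(name, args, result):
    audit_log.append({"tool": name, "args": args, "preview": result[:80]})

# Real use: response = await agent.ask("...", on_tool_call=audit)
# Simulated invocation, standing in for one tool execution:
audit("find", {"collection": "orders"}, "Matched 128 documents")
```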

### ask_stream(question)

Same as ask() but streams events as they happen via an async generator. Useful for real-time UIs.

```python
async for event in agent.ask_stream("What are the top 10 products by revenue?"):
    if event["type"] == "tool_call":
        print(f"Calling {event['tool_name']}...")
    elif event["type"] == "answer":
        print(event["text"])
```

Event types:

| Type | Fields |
| --- | --- |
| `tool_call` | `tool_name`, `tool_args` |
| `tool_result` | `tool_name`, `success`, `preview` |
| `answer` | `text` |
| `done` | `iterations`, `input_tokens`, `output_tokens`, `memory_hits`, `tool_calls_made` |
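A stream consumer typically accumulates answer events and reads run stats off the done event. A self-contained sketch in which fake_stream is a stand-in for agent.ask_stream(...), emitting the event shapes listed above:

```python
import asyncio

# fake_stream stands in for agent.ask_stream(...); the event shapes
# follow the table above.
async def fake_stream():
    yield {"type": "tool_call", "tool_name": "aggregate", "tool_args": {}}
    yield {"type": "tool_result", "tool_name": "aggregate",
           "success": True, "preview": "[...]"}
    yield {"type": "answer", "text": "Top product: Alphonso Mango"}
    yield {"type": "done", "iterations": 1, "input_tokens": 900,
           "output_tokens": 120, "memory_hits": 0, "tool_calls_made": 1}

async def collect(stream):
    answer, stats = "", {}
    async for event in stream:
        if event["type"] == "answer":
            answer += event["text"]    # accumulate answer text
        elif event["type"] == "done":
            stats = event              # final run stats
    return answer, stats

answer, stats = asyncio.run(collect(fake_stream()))
```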

### new_session()

Returns a new agent with the same configuration but a fresh conversation history. Schema and system prompt are shared — no re-introspection.

```python
session = agent.new_session()
response = await session.ask("...")
```
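The contract is: shared configuration and schema, fresh history. A toy model of that contract (the Session class here is illustrative, not the real implementation):

```python
# Illustrative model of the new_session() contract; not the real class.
class Session:
    def __init__(self, config, schema, history=None):
        self.config = config                      # shared with the parent
        self.schema = schema                      # shared: no re-introspection
        self.history = history if history is not None else []

    def new_session(self):
        return Session(self.config, self.schema)  # fresh, empty history

agent = Session({"max_turns": 5}, {"collections": ["orders"]})
agent.history.append({"role": "user", "content": "How many orders?"})
fresh = agent.new_session()
```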

### reset_conversation()

Clears the conversation history in place.

```python
agent.reset_conversation()
```

## Properties

| Property | Type | Description |
| --- | --- | --- |
| `llm_service` | `LLMService` | The configured LLM |
| `tool_registry` | `ToolRegistry` | The tool registry |
| `db` | `NoSQLRunner \| None` | The connected database |
| `agent_memory` | `MemoryService \| None` | The memory service |
| `conversation_length` | `int` | Number of messages in history |

## Multi-turn conversations

Conversation history is preserved across ask() calls. Follow-up questions work naturally:

```python
r1 = await agent.ask("How many orders were placed last week?")
r2 = await agent.ask("And how many of those were delivered?")
r3 = await agent.ask("Which customer placed the most?")
```

History is automatically pruned to max_turns complete turns to keep token usage stable.
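Pruning to max_turns can be pictured as keeping only the last N (user, assistant) pairs. A hedged sketch — the message format, and the assumption that one turn equals one user message plus one assistant message, are illustrative:

```python
# Hedged sketch of max_turns pruning; assumes one turn = one user
# message plus one assistant message.
def prune(history, max_turns=5):
    keep = max_turns * 2                  # messages in the retained window
    return history if len(history) <= keep else history[-keep:]

# 7 complete turns -> only the last 5 survive.
history = []
for i in range(7):
    history.append({"role": "user", "content": f"q{i}"})
    history.append({"role": "assistant", "content": f"a{i}"})
pruned = prune(history)
```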