
10xScale Agentflow


Agentflow is a lightweight Python framework for building intelligent agents and orchestrating multi-agent workflows. It's an LLM-agnostic orchestration tool that works with any LLM provider: use LiteLLM, native SDKs from OpenAI, Google Gemini, Anthropic Claude, or any other provider. You choose your LLM library; Agentflow provides the workflow orchestration.


✨ Key Features

  • 🎯 LLM-Agnostic Orchestration - Works with any LLM provider (LiteLLM, OpenAI, Gemini, Claude, native SDKs)
  • 🤖 Multi-Agent Workflows - Build complex agent systems with your choice of orchestration patterns
  • 📊 Structured Responses - Get content, optional thinking, and usage in a standardized format
  • 🌊 Streaming Support - Real-time incremental responses with delta updates
  • 🔧 Tool Integration - Native support for function calling, MCP, Composio, and LangChain tools with parallel execution
  • 🔀 LangGraph-Inspired Engine - Flexible graph orchestration with nodes, conditional edges, and control flow
  • 💾 State Management - Built-in persistence with in-memory and PostgreSQL+Redis checkpointers
  • 🔄 Human-in-the-Loop - Pause/resume execution for approval workflows and debugging
  • 🚀 Production-Ready - Event publishing (Console, Redis, Kafka, RabbitMQ), metrics, and observability
  • 🧩 Dependency Injection - Clean parameter injection for tools and nodes
  • 📦 Prebuilt Patterns - React, RAG, Swarm, Router, MapReduce, SupervisorTeam, and more

🌟 What Makes Agentflow Unique

Agentflow stands out with powerful features designed for production-grade AI applications:

πŸ—οΈ Architecture & Scalability

  1. πŸ’Ύ Checkpointer with Caching Design
    Intelligent state persistence with built-in caching layer to scale efficiently. PostgreSQL + Redis implementation ensures high performance in production environments.

  2. 🧠 3-Layer Memory System

    • Short-term memory: current conversation context
    • Conversational memory: session-based chat history
    • Long-term memory: persistent knowledge across sessions
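The three layers can be pictured with a minimal, framework-free sketch. This is plain Python, not Agentflow's actual memory API; the class and method names here are illustrative only:

```python
from collections import deque


class ThreeLayerMemory:
    """Illustrative sketch of short-term / conversational / long-term layers.

    Names and structure are hypothetical; Agentflow's real memory API differs.
    """

    def __init__(self, short_term_size: int = 10):
        # Short-term: a bounded window of the current conversation context
        self.short_term = deque(maxlen=short_term_size)
        # Conversational: full chat history keyed by session id
        self.sessions: dict[str, list[str]] = {}
        # Long-term: persistent knowledge shared across sessions
        self.long_term: dict[str, str] = {}

    def add_message(self, session_id: str, message: str) -> None:
        self.short_term.append(message)
        self.sessions.setdefault(session_id, []).append(message)

    def remember(self, key: str, fact: str) -> None:
        self.long_term[key] = fact


memory = ThreeLayerMemory(short_term_size=2)
memory.add_message("s1", "Hi")
memory.add_message("s1", "What's the weather?")
memory.add_message("s1", "Thanks!")
memory.remember("user_city", "New York")

print(list(memory.short_term))        # only the 2 most recent messages
print(len(memory.sessions["s1"]))     # full session history: 3
print(memory.long_term["user_city"])  # survives beyond the session
```

The point of the split is that each layer has a different lifetime: the short-term window is what the model sees, the session log is what you can replay, and the long-term store is what persists across threads.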

🔧 Advanced Tooling Ecosystem

  1. 🔌 Remote Tool Calls
    Execute tools remotely using our TypeScript SDK for distributed agent architectures.

  2. 🛠️ Comprehensive Tool Integration

    • Local tools (Python functions)
    • Remote tools (via TypeScript SDK)
    • Agent handoff tools (multi-agent collaboration)
    • MCP (Model Context Protocol)
    • LangChain tools
    • Composio tools

🎯 Intelligent Context Management

  1. 📏 Dedicated Context Manager

    • Automatically controls context size to prevent token overflow
    • Called at iteration end to avoid mid-execution context loss
    • Fully extensible with custom implementations
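At its core, a context manager is "trim the message list to a budget at the end of each iteration". A stdlib-only sketch of that idea follows; the real Agentflow extension point has its own base class and signature, and a real implementation would count tokens rather than characters:

```python
def trim_context(messages: list[str], max_chars: int) -> list[str]:
    """Keep the most recent messages that fit within a character budget.

    Illustrative only: a production implementation would count tokens,
    and would preserve system prompts and tool-call/result pairs.
    """
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))  # restore chronological order


history = ["sys prompt", "first question", "long old answer", "latest question"]
print(trim_context(history, max_chars=31))  # ['long old answer', 'latest question']
```

Running the trim at iteration end (rather than mid-iteration) is what avoids losing context between a tool call and its result.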

βš™οΈ Dependency Injection & Control

  1. πŸ’‰ First-Class Dependency Injection
    Powered by InjectQ library for clean, testable, and maintainable code patterns.

  2. πŸŽ›οΈ Custom ID Generation Control
    Choose between string, int, or bigint IDs. Smaller IDs save significant space in databases and indexes compared to standard 128-bit UUIDs.
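The space argument is easy to verify with the standard library (this is a general illustration, not Agentflow code):

```python
import uuid

# A standard UUID is 128 bits: 16 raw bytes, or 36 characters as text.
u = uuid.uuid4()
assert len(u.bytes) == 16
assert len(str(u)) == 36

# A bigint id fits in 8 bytes, halving raw key storage -- and the saving
# repeats in every index and foreign key that references the id.
big_id = 2**62
assert len(big_id.to_bytes(8, "big")) == 8
print("uuid:", len(u.bytes), "bytes; bigint:", 8, "bytes")
```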

📊 Observability & Events

  1. 📡 Internal Event Publishing
    Emit execution events to any publisher:

    • Kafka
    • RabbitMQ
    • Redis Pub/Sub
    • OpenTelemetry (planned)
    • Custom publishers
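A publisher abstraction boils down to "anything with a publish method". The sketch below is hypothetical and framework-free; Agentflow's actual base class and method names may differ, but the shape of a custom publisher is the same:

```python
import json
from abc import ABC, abstractmethod


class BasePublisher(ABC):
    """Hypothetical publisher interface; Agentflow's real base class differs."""

    @abstractmethod
    def publish(self, event: dict) -> None: ...


class ListPublisher(BasePublisher):
    """Collects events in memory; a Kafka/RabbitMQ publisher would ship them out."""

    def __init__(self):
        self.events: list[str] = []

    def publish(self, event: dict) -> None:
        # Serialize once so downstream consumers get a stable wire format
        self.events.append(json.dumps(event))


pub = ListPublisher()
pub.publish({"type": "node_started", "node": "MAIN"})
pub.publish({"type": "node_finished", "node": "MAIN"})
print(len(pub.events))  # 2
```

Because the engine only depends on the interface, swapping Console for Kafka is a configuration change, not a code change.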

🔄 Advanced Execution Features

  1. ⏰ Background Task Manager
    Built-in manager for running tasks asynchronously:

    • Prefetching data
    • Memory persistence
    • Cleanup operations
    • Custom background jobs

  2. 🚦 Human-in-the-Loop with Interrupts
    Pause execution at any point for human approval, then seamlessly resume with full state preservation.

  3. 🧭 Flexible Agent Navigation

    • Condition-based routing between agents
    • Command-based jumps to specific agents
    • Agent handoff tools for smooth transitions
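In spirit, a background task manager is a thin wrapper over asyncio tasks that are drained before shutdown. A stdlib sketch of the pattern (not Agentflow's actual manager; names are illustrative):

```python
import asyncio


class BackgroundTasks:
    """Minimal sketch: schedule fire-and-forget coroutines, drain them later."""

    def __init__(self):
        self._tasks: set[asyncio.Task] = set()

    def submit(self, coro) -> None:
        task = asyncio.create_task(coro)
        self._tasks.add(task)
        # Drop finished tasks so the set doesn't grow unboundedly
        task.add_done_callback(self._tasks.discard)

    async def drain(self) -> None:
        # Wait for in-flight work (e.g. memory persistence) before shutdown
        await asyncio.gather(*self._tasks)


results: list[str] = []


async def persist_memory():
    await asyncio.sleep(0)  # stand-in for a database write
    results.append("persisted")


async def main():
    manager = BackgroundTasks()
    manager.submit(persist_memory())
    # ... the main agent loop keeps running while the task executes ...
    await manager.drain()


asyncio.run(main())
print(results)  # ['persisted']
```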

πŸ›‘οΈ Security & Validation

  1. 🎣 Comprehensive Callback System
    Hook into various execution stages for:
    • Logging and monitoring
    • Custom behavior injection
    • Prompt injection attack prevention
    • Input/output validation
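Such hooks are essentially ordered lists of callables run before and after each stage. A simplified illustration of input validation and output logging (the hook names and wiring here are invented, not Agentflow's API):

```python
class CallbackError(Exception):
    pass


def block_injection(user_input: str) -> str:
    """A 'before model call' hook: reject obvious prompt-injection attempts."""
    if "ignore previous instructions" in user_input.lower():
        raise CallbackError("possible prompt injection")
    return user_input


def log_output(output: str) -> str:
    """An 'after model call' hook: observe without modifying."""
    print(f"model said: {output!r}")
    return output


before_hooks = [block_injection]
after_hooks = [log_output]

text = "What's the weather?"
for hook in before_hooks:   # each hook may transform or veto the input
    text = hook(text)
for hook in after_hooks:    # each hook may inspect or transform the output
    text = hook(text)
```

Raising from a "before" hook halts the stage, which is what makes the same mechanism usable for both monitoring and enforcement.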

📦 Ready-to-Use Components

  1. 🤖 Prebuilt Agent Patterns
    Production-ready implementations:
    • React agents
    • RAG (Retrieval-Augmented Generation)
    • Swarm architectures
    • Router agents
    • MapReduce patterns
    • Supervisor teams

πŸ“ Developer Experience

  1. πŸ“‹ Pydantic-First Design
    All core classes (State, Message, ToolCalls) are Pydantic models:

    • Automatic JSON serialization
    • Type safety
    • Easy debugging and logging
    • Seamless database storage
  2. ⚑ Single Command API Launch
    Start a fully async, production-grade API with one command. Built on FastAPI, powered by Uvicorn, with robust logging, health checks, and auto-generated Swagger/Redoc docs.

  3. 🐳 Single Command Docker Image
    Generate a deployable Docker image with one commandβ€”no vendor lock-in, no platform cost. Deploy anywhere: cloud, on-prem, or edge.

  4. πŸ”’ Easy Authentication Integration
    JWT authentication by default. Extend with any provider by specifying the class pathβ€”plug-and-play security.

  5. πŸ†” Customizable ID Generation
    Control over generated IDsβ€”use smaller IDs instead of 128-bit UUIDs to save space in databases and indexes.

  6. πŸ“¦ All Core Classes as Pydantic Models
    State, Message, ToolCalls, and more are Pydantic modelsβ€”fully JSON serializable, easy to debug, log, and store.

  7. πŸ›‘οΈ Sentry Integration
    Provide a DSN in settings and all exceptions are sent to Sentry with full context for error tracking and monitoring.
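The payoff of model-based core classes is that every state object is one call away from JSON. Here is a stdlib analogy using dataclasses; Agentflow itself uses Pydantic, where `model_dump_json()` plays the role `json.dumps(asdict(...))` plays below, and the `Message`/`State` classes are stand-ins, not the real ones:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class Message:
    """Stand-in for Agentflow's Pydantic Message model."""
    role: str
    content: str


@dataclass
class State:
    """Stand-in for AgentState: serializable by construction."""
    context: list[Message] = field(default_factory=list)


state = State(context=[Message(role="user", content="What's the weather?")])

# One call yields a loggable / storable representation
payload = json.dumps(asdict(state))
print(payload)

# And back again for debugging or persistence
restored = json.loads(payload)
assert restored["context"][0]["role"] == "user"
```

Because serialization falls out of the model definition, checkpointing, logging, and database storage all share one representation instead of bespoke converters.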

🚀 Quick Start

Installation

Basic installation with uv (recommended):

uv pip install 10xscale-agentflow

Or with pip:

pip install 10xscale-agentflow

Optional Dependencies:

Agentflow supports optional dependencies for specific functionality:

# PostgreSQL + Redis checkpointing
pip install 10xscale-agentflow[pg_checkpoint]

# MCP (Model Context Protocol) support
pip install 10xscale-agentflow[mcp]

# Composio tools (adapter)
pip install 10xscale-agentflow[composio]

# LangChain tools (registry-based adapter)
pip install 10xscale-agentflow[langchain]

# Individual publishers
pip install 10xscale-agentflow[redis]     # Redis publisher
pip install 10xscale-agentflow[kafka]     # Kafka publisher
pip install 10xscale-agentflow[rabbitmq]  # RabbitMQ publisher

# Multiple extras
pip install 10xscale-agentflow[pg_checkpoint,mcp,composio,langchain]

Environment Setup

Set your LLM provider API key:

export OPENAI_API_KEY=sk-...  # for OpenAI models
# or
export GEMINI_API_KEY=...     # for Google Gemini
# or
export ANTHROPIC_API_KEY=...  # for Anthropic Claude

If you have a .env file, it will be auto-loaded (via python-dotenv).


📚 Documentation Structure

🎓 Tutorials

Learn Agentflow step-by-step with practical examples:

📖 Concepts

Deep dives into Agentflow's architecture:

📘 API Reference

Complete API documentation for all modules:

  • Graph - StateGraph, CompiledGraph, Node, Edge, ToolNode
  • State - AgentState, ExecutionState, MessageContext
  • Checkpointer - InMemory, PostgreSQL+Redis
  • Store - BaseStore, Qdrant, Mem0
  • Publisher - Console, Redis, Kafka, RabbitMQ
  • Adapters - LiteLLM, MCP, Composio, LangChain
  • Utils - Message, Command, Callbacks, Converters
  • Prebuilt Agents - Ready-to-use patterns

💡 Simple Example

Here's a minimal React agent with tool calling:

from dotenv import load_dotenv
from litellm import acompletion

from agentflow.checkpointer import InMemoryCheckpointer
from agentflow.graph import StateGraph, ToolNode
from agentflow.state.agent_state import AgentState
from agentflow.utils import Message
from agentflow.utils.constants import END
from agentflow.utils.converter import convert_messages

load_dotenv()


# Define a tool with dependency injection
def get_weather(
    location: str,
    tool_call_id: str | None = None,
    state: AgentState | None = None,
) -> Message:
    """Get the current weather for a specific location."""
    res = f"The weather in {location} is sunny"
    return Message.tool_message(
        content=res,
        tool_call_id=tool_call_id,
    )


# Create tool node
tool_node = ToolNode([get_weather])


# Define main agent node
async def main_agent(state: AgentState):
    prompts = "You are a helpful assistant. Use tools when needed."

    messages = convert_messages(
        system_prompts=[{"role": "system", "content": prompts}],
        state=state,
    )

    # If the last message is a tool result, answer without offering tools again
    if state.context and state.context[-1].role == "tool":
        response = await acompletion(
            model="gemini/gemini-2.5-flash",
            messages=messages,
        )
    else:
        tools = await tool_node.all_tools()
        response = await acompletion(
            model="gemini/gemini-2.5-flash",
            messages=messages,
            tools=tools,
        )

    return response


# Define routing logic
def should_use_tools(state: AgentState) -> str:
    """Route to the tool node when the last message requests tool calls."""
    if not state.context:
        return "TOOL"

    last_message = state.context[-1]

    if getattr(last_message, "tools_calls", None):
        return "TOOL"

    return END


# Build the graph
graph = StateGraph()
graph.add_node("MAIN", main_agent)
graph.add_node("TOOL", tool_node)

graph.add_conditional_edges(
    "MAIN",
    should_use_tools,
    {"TOOL": "TOOL", END: END},
)

graph.add_edge("TOOL", "MAIN")
graph.set_entry_point("MAIN")

# Compile and run
app = graph.compile(checkpointer=InMemoryCheckpointer())

inp = {"messages": [Message.from_text("What's the weather in New York?")]}
config = {"thread_id": "12345", "recursion_limit": 10}

res = app.invoke(inp, config=config)

for msg in res["messages"]:
    print(msg)

🎯 Use Cases & Patterns

Agentflow includes prebuilt agent patterns for common scenarios:

🤖 Agent Types

🔀 Orchestration Patterns

🔬 Advanced Patterns

See the Prebuilt Agents Reference for complete documentation.


🔧 Development

For Library Users

Install Agentflow as shown above. The pyproject.toml contains all runtime dependencies.

For Contributors

# Clone the repository
git clone https://github.com/10xhub/agentflow.git
cd agentflow

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dev dependencies
pip install -r requirements-dev.txt
# or
uv pip install -r requirements-dev.txt

# Run tests
make test
# or
pytest -q

# Build docs
make docs-serve  # Serves at http://127.0.0.1:8000

# Run examples
cd examples/react
python react_sync.py

Development Tools

The project uses:

  • pytest for testing (with async support)
  • ruff for linting and formatting
  • mypy for type checking
  • mkdocs with Material theme for documentation
  • coverage for test coverage reports

See pyproject.dev.toml for complete tool configurations.


πŸ—ΊοΈ Roadmap

  • βœ… Core graph engine with nodes and edges
  • βœ… State management and checkpointing
  • βœ… Tool integration (MCP, Composio, LangChain)
  • βœ… Parallel tool execution for improved performance
  • βœ… Streaming and event publishing
  • βœ… Human-in-the-loop support
  • βœ… Prebuilt agent patterns
  • 🚧 Agent-to-Agent (A2A) communication protocols
  • 🚧 Remote node execution for distributed processing
  • 🚧 Enhanced observability and tracing
  • 🚧 More persistence backends (Redis, DynamoDB)
  • 🚧 Parallel/branching strategies
  • 🚧 Visual graph editor

📄 License

MIT License - see LICENSE for details.



πŸ™ Contributing

Contributions are welcome! Please see our GitHub repository for:

  • Issue reporting and feature requests
  • Pull request guidelines
  • Development setup instructions
  • Code style and testing requirements

💬 Support


Ready to build intelligent agents? Start with the Tutorials or dive into a Quick Example!